2025-11-03 11:40:24,511 [INFO] Loaded 2527 existing reports from data/bill_reports.json 2025-11-03 11:40:24,511 [INFO] Starting report generation for 1 bills 2025-11-03 11:40:24,511 [INFO] Skipping bill 1769530 - already processed (1/1) 2025-11-03 11:40:24,550 [INFO] Saved 2527 reports to data/bill_reports.json 2025-11-03 11:40:24,550 [INFO] Report generation complete! 2025-11-03 11:40:24,550 [INFO] Total bills: 1 2025-11-03 11:40:24,550 [INFO] Successfully processed: 0 2025-11-03 11:40:24,550 [INFO] Skipped (already done): 1 2025-11-03 11:40:24,550 [INFO] Errors: 0 2025-11-03 12:24:43,550 [INFO] Loaded 2527 existing reports from data/bill_reports.json 2025-11-03 12:24:43,551 [INFO] Starting report generation for 1 bills 2025-11-03 12:24:43,551 [INFO] Skipping bill 1978757 - already processed (1/1) 2025-11-03 12:24:43,588 [INFO] Saved 2527 reports to data/bill_reports.json 2025-11-03 12:24:43,588 [INFO] Report generation complete! 2025-11-03 12:24:43,588 [INFO] Total bills: 1 2025-11-03 12:24:43,589 [INFO] Successfully processed: 0 2025-11-03 12:24:43,589 [INFO] Skipped (already done): 1 2025-11-03 12:24:43,589 [INFO] Errors: 0 2025-11-04 15:55:14,931 [INFO] Loaded 2564 existing reports from data/bill_reports.json 2025-11-04 15:55:14,932 [INFO] Starting report generation for 10 bills 2025-11-04 15:55:14,932 [INFO] Skipping bill 1978757 - already processed (1/10) 2025-11-04 15:55:14,932 [INFO] Skipping bill 1980543 - already processed (2/10) 2025-11-04 15:55:14,932 [INFO] Skipping bill 1893423 - already processed (3/10) 2025-11-04 15:55:14,932 [INFO] Skipping bill 1964699 - already processed (4/10) 2025-11-04 15:55:14,932 [INFO] Skipping bill 1978599 - already processed (5/10) 2025-11-04 15:55:14,932 [INFO] Skipping bill 1980563 - already processed (6/10) 2025-11-04 15:55:14,932 [INFO] Skipping bill 1976585 - already processed (7/10) 2025-11-04 15:55:14,932 [INFO] Skipping bill 1904800 - already processed (8/10) 2025-11-04 15:55:14,932 [INFO] Skipping bill 1974530 - 
already processed (9/10) 2025-11-04 15:55:14,932 [INFO] Skipping bill 1964676 - already processed (10/10) 2025-11-04 15:55:14,973 [INFO] Saved 2564 reports to data/bill_reports.json 2025-11-04 15:55:14,973 [INFO] Report generation complete! 2025-11-04 15:55:14,973 [INFO] Total bills: 10 2025-11-04 15:55:14,973 [INFO] Successfully processed: 0 2025-11-04 15:55:14,973 [INFO] Skipped (already done): 10 2025-11-04 15:55:14,973 [INFO] Errors: 0 2025-11-14 15:31:32,539 [INFO] Loaded 2564 existing reports from data/bill_reports.json 2025-11-14 15:31:32,541 [INFO] Starting report generation for 2564 bills 2025-11-14 15:31:32,541 [INFO] Skipping bill 1769530 - already processed (1/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1765118 - already processed (2/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1745017 - already processed (3/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1745230 - already processed (4/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1847915 - already processed (5/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1847210 - already processed (6/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1847980 - already processed (7/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1840627 - already processed (8/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1840340 - already processed (9/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 2019785 - already processed (10/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1983607 - already processed (11/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 2019702 - already processed (12/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1987220 - already processed (13/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 2022389 - already processed (14/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1959465 - already processed (15/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 2023982 - already processed (16/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 2019732 - already processed 
(17/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1969654 - already processed (18/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1956622 - already processed (19/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1957166 - already processed (20/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1869518 - already processed (21/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1813560 - already processed (22/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1836190 - already processed (23/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1851112 - already processed (24/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1745943 - already processed (25/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1737840 - already processed (26/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1814309 - already processed (27/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1851143 - already processed (28/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1984991 - already processed (29/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1912439 - already processed (30/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1912476 - already processed (31/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1940708 - already processed (32/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1935103 - already processed (33/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1685926 - already processed (34/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1657717 - already processed (35/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1683096 - already processed (36/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1828964 - already processed (37/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1830782 - already processed (38/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1829010 - already processed (39/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1810349 - already processed (40/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1810356 - already processed 
(41/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1804209 - already processed (42/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1830673 - already processed (43/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1923768 - already processed (44/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1935042 - already processed (45/2564) 2025-11-14 15:31:32,541 [INFO] Skipping bill 1948089 - already processed (46/2564) 2025-11-14 15:31:32,542 [INFO] Skipping bill 1917064 - already processed (47/2564) 2025-11-14 15:31:32,542 [INFO] Skipping bill 1964274 - already processed (48/2564) 2025-11-14 15:31:32,542 [INFO] Skipping bill 1949161 - already processed (49/2564) 2025-11-14 15:31:32,542 [INFO] Skipping bill 1938396 - already processed (50/2564) 2025-11-14 15:31:32,542 [INFO] Skipping bill 1955446 - already processed (51/2564) 2025-11-14 15:31:32,542 [INFO] Skipping bill 1946736 - already processed (52/2564) 2025-11-14 15:31:32,542 [INFO] Skipping bill 2037727 - already processed (53/2564) 2025-11-14 15:31:32,542 [INFO] Skipping bill 1730253 - already processed (54/2564) 2025-11-14 15:31:32,542 [INFO] Skipping bill 1721706 - already processed (55/2564) 2025-11-14 15:31:32,542 [INFO] Skipping bill 1975090 - already processed (56/2564) 2025-11-14 15:31:32,542 [INFO] Skipping bill 1946146 - already processed (57/2564) 2025-11-14 15:31:32,542 [INFO] Skipping bill 2018186 - already processed (58/2564) 2025-11-14 15:31:32,542 [INFO] Skipping bill 2011735 - already processed (59/2564) 2025-11-14 15:31:32,542 [INFO] Skipping bill 1897622 - already processed (60/2564) 2025-11-14 15:31:32,542 [INFO] Skipping bill 1973543 - already processed (61/2564) 2025-11-14 15:31:32,542 [INFO] Skipping bill 2009462 - already processed (62/2564) 2025-11-14 15:31:32,542 [INFO] Skipping bill 2011658 - already processed (63/2564) 2025-11-14 15:31:32,542 [INFO] Skipping bill 1944017 - already processed (64/2564) 2025-11-14 15:31:32,542 [INFO] Skipping bill 1892641 - already processed 
(65/2564) 2025-11-14 15:31:32,542 [INFO] Skipping bill 2010078 - already processed (66/2564) 2025-11-14 15:31:32,542 [INFO] Skipping bill 1915632 - already processed (67/2564) 2025-11-14 15:31:32,542 [INFO] Skipping bill 1996393 - already processed (68/2564) 2025-11-14 15:31:32,542 [INFO] Processing 69/2564: Bill ID 1972479 2025-11-14 15:31:34,487 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:31:34,491 [ERROR] Failed to generate report for bill 1972479: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 512372 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... 
**kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... **kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return 
self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 512372 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:31:35,516 [INFO] Skipping bill 1848589 - already processed (70/2564) 2025-11-14 15:31:35,516 [INFO] Skipping bill 1796695 - already processed (71/2564) 2025-11-14 15:31:35,517 [INFO] Skipping bill 1834299 - already processed (72/2564) 2025-11-14 15:31:35,517 [INFO] Skipping bill 1840453 - already processed (73/2564) 2025-11-14 15:31:35,517 [INFO] Skipping bill 1847401 - already processed (74/2564) 2025-11-14 15:31:35,517 [INFO] Skipping bill 1849339 - already processed (75/2564) 2025-11-14 15:31:35,517 [INFO] Skipping bill 1845122 - already processed (76/2564) 2025-11-14 15:31:35,517 [INFO] Skipping bill 1796692 - already processed (77/2564) 2025-11-14 15:31:35,517 [INFO] Skipping bill 1846289 - already processed (78/2564) 2025-11-14 15:31:35,517 [INFO] Skipping bill 1813231 - already processed (79/2564) 2025-11-14 15:31:35,517 [INFO] Skipping bill 1848433 - already processed (80/2564) 2025-11-14 15:31:35,517 [INFO] Skipping bill 1796691 - already processed 
(81/2564) 2025-11-14 15:31:35,517 [INFO] Skipping bill 1848536 - already processed (82/2564) 2025-11-14 15:31:35,517 [INFO] Skipping bill 1819737 - already processed (83/2564) 2025-11-14 15:31:35,517 [INFO] Skipping bill 1829037 - already processed (84/2564) 2025-11-14 15:31:35,517 [INFO] Skipping bill 1712200 - already processed (85/2564) 2025-11-14 15:31:35,518 [INFO] Skipping bill 1848424 - already processed (86/2564) 2025-11-14 15:31:35,518 [INFO] Skipping bill 1814918 - already processed (87/2564) 2025-11-14 15:31:35,518 [INFO] Skipping bill 1686429 - already processed (88/2564) 2025-11-14 15:31:35,518 [INFO] Skipping bill 1848359 - already processed (89/2564) 2025-11-14 15:31:35,518 [INFO] Skipping bill 1697069 - already processed (90/2564) 2025-11-14 15:31:35,518 [INFO] Skipping bill 1848453 - already processed (91/2564) 2025-11-14 15:31:35,518 [INFO] Skipping bill 1849513 - already processed (92/2564) 2025-11-14 15:31:35,518 [INFO] Skipping bill 1848521 - already processed (93/2564) 2025-11-14 15:31:35,518 [INFO] Skipping bill 1848425 - already processed (94/2564) 2025-11-14 15:31:35,518 [INFO] Skipping bill 1702816 - already processed (95/2564) 2025-11-14 15:31:35,518 [INFO] Skipping bill 1849367 - already processed (96/2564) 2025-11-14 15:31:35,518 [INFO] Skipping bill 1849520 - already processed (97/2564) 2025-11-14 15:31:35,518 [INFO] Skipping bill 1848530 - already processed (98/2564) 2025-11-14 15:31:35,518 [INFO] Skipping bill 1712027 - already processed (99/2564) 2025-11-14 15:31:35,519 [INFO] Skipping bill 1849659 - already processed (100/2564) 2025-11-14 15:31:35,519 [INFO] Skipping bill 1848478 - already processed (101/2564) 2025-11-14 15:31:35,519 [INFO] Skipping bill 1848387 - already processed (102/2564) 2025-11-14 15:31:35,519 [INFO] Skipping bill 1845137 - already processed (103/2564) 2025-11-14 15:31:35,520 [INFO] Skipping bill 1812205 - already processed (104/2564) 2025-11-14 15:31:35,520 [INFO] Skipping bill 1798416 - already processed 
(105/2564) 2025-11-14 15:31:35,520 [INFO] Skipping bill 1847351 - already processed (106/2564) 2025-11-14 15:31:35,520 [INFO] Skipping bill 1693943 - already processed (107/2564) 2025-11-14 15:31:35,520 [INFO] Skipping bill 1686454 - already processed (108/2564) 2025-11-14 15:31:35,520 [INFO] Skipping bill 1847404 - already processed (109/2564) 2025-11-14 15:31:35,520 [INFO] Skipping bill 1683775 - already processed (110/2564) 2025-11-14 15:31:35,520 [INFO] Skipping bill 1835452 - already processed (111/2564) 2025-11-14 15:31:35,520 [INFO] Skipping bill 1709727 - already processed (112/2564) 2025-11-14 15:31:35,520 [INFO] Skipping bill 1849724 - already processed (113/2564) 2025-11-14 15:31:35,520 [INFO] Skipping bill 1761500 - already processed (114/2564) 2025-11-14 15:31:35,521 [INFO] Skipping bill 1697048 - already processed (115/2564) 2025-11-14 15:31:35,521 [INFO] Skipping bill 1860070 - already processed (116/2564) 2025-11-14 15:31:35,521 [INFO] Skipping bill 1771300 - already processed (117/2564) 2025-11-14 15:31:35,521 [INFO] Skipping bill 1709708 - already processed (118/2564) 2025-11-14 15:31:35,521 [INFO] Skipping bill 1848529 - already processed (119/2564) 2025-11-14 15:31:35,521 [INFO] Skipping bill 1845179 - already processed (120/2564) 2025-11-14 15:31:35,521 [INFO] Skipping bill 1849404 - already processed (121/2564) 2025-11-14 15:31:35,521 [INFO] Skipping bill 1714444 - already processed (122/2564) 2025-11-14 15:31:35,521 [INFO] Skipping bill 1824468 - already processed (123/2564) 2025-11-14 15:31:35,521 [INFO] Skipping bill 1882346 - already processed (124/2564) 2025-11-14 15:31:35,521 [INFO] Skipping bill 1885654 - already processed (125/2564) 2025-11-14 15:31:35,521 [INFO] Skipping bill 1849359 - already processed (126/2564) 2025-11-14 15:31:35,521 [INFO] Skipping bill 1840414 - already processed (127/2564) 2025-11-14 15:31:35,523 [INFO] Skipping bill 1846229 - already processed (128/2564) 2025-11-14 15:31:35,524 [INFO] Skipping bill 1707510 - 
already processed (129/2564) 2025-11-14 15:31:35,524 [INFO] Skipping bill 1845188 - already processed (130/2564) 2025-11-14 15:31:35,524 [INFO] Skipping bill 1848524 - already processed (131/2564) 2025-11-14 15:31:35,524 [INFO] Skipping bill 1847496 - already processed (132/2564) 2025-11-14 15:31:35,524 [INFO] Skipping bill 1883008 - already processed (133/2564) 2025-11-14 15:31:35,524 [INFO] Skipping bill 1649620 - already processed (134/2564) 2025-11-14 15:31:35,525 [INFO] Skipping bill 1667841 - already processed (135/2564) 2025-11-14 15:31:35,525 [INFO] Skipping bill 1848476 - already processed (136/2564) 2025-11-14 15:31:35,525 [INFO] Skipping bill 1649670 - already processed (137/2564) 2025-11-14 15:31:35,525 [INFO] Skipping bill 1667891 - already processed (138/2564) 2025-11-14 15:31:35,525 [INFO] Skipping bill 1649612 - already processed (139/2564) 2025-11-14 15:31:35,525 [INFO] Skipping bill 1649615 - already processed (140/2564) 2025-11-14 15:31:35,525 [INFO] Skipping bill 1667833 - already processed (141/2564) 2025-11-14 15:31:35,525 [INFO] Skipping bill 1667836 - already processed (142/2564) 2025-11-14 15:31:35,525 [INFO] Skipping bill 1649618 - already processed (143/2564) 2025-11-14 15:31:35,525 [INFO] Skipping bill 1667839 - already processed (144/2564) 2025-11-14 15:31:35,525 [INFO] Skipping bill 1649630 - already processed (145/2564) 2025-11-14 15:31:35,525 [INFO] Skipping bill 1649619 - already processed (146/2564) 2025-11-14 15:31:35,526 [INFO] Skipping bill 1667851 - already processed (147/2564) 2025-11-14 15:31:35,526 [INFO] Skipping bill 1667840 - already processed (148/2564) 2025-11-14 15:31:35,526 [INFO] Processing 149/2564: Bill ID 1865211 2025-11-14 15:31:36,960 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:31:36,961 [ERROR] Failed to generate report for bill 1865211: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. 
However, your messages resulted in 241283 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 241283 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:31:37,975 [INFO] Skipping bill 1667837 - already processed (150/2564) 2025-11-14 15:31:37,977 [INFO] Skipping bill 1667892 - already processed (151/2564) 2025-11-14 15:31:37,977 [INFO] Skipping bill 1649616 - already processed (152/2564) 2025-11-14 15:31:37,977 [INFO] Skipping bill 1649671 - already processed (153/2564) 2025-11-14 15:31:37,977 [INFO] Processing 154/2564: Bill ID 1726105 2025-11-14 15:31:39,177 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:31:39,180 [ERROR] Failed to generate report for bill 1726105: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 343953 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 343953 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:31:40,198 [INFO] Skipping bill 1978757 - already processed (155/2564) 2025-11-14 15:31:40,199 [INFO] Skipping bill 1980543 - already processed (156/2564) 2025-11-14 15:31:40,200 [INFO] Skipping bill 1893423 - already processed (157/2564) 2025-11-14 15:31:40,200 [INFO] Skipping bill 1964699 - already processed (158/2564) 2025-11-14 15:31:40,200 [INFO] Skipping bill 1978599 - already processed (159/2564) 2025-11-14 15:31:40,200 [INFO] Skipping bill 1980563 - already processed (160/2564) 2025-11-14 15:31:40,200 [INFO] Skipping bill 1976585 - already processed (161/2564) 2025-11-14 15:31:40,200 [INFO] Skipping bill 1904800 - already processed (162/2564) 2025-11-14 15:31:40,201 [INFO] Skipping bill 1974530 - already processed (163/2564) 2025-11-14 15:31:40,201 [INFO] Skipping bill 1964676 - already processed (164/2564) 2025-11-14 15:31:40,201 [INFO] Skipping bill 1955758 - already processed (165/2564) 2025-11-14 15:31:40,201 [INFO] Skipping bill 1941749 - already processed (166/2564) 2025-11-14 15:31:40,201 [INFO] Skipping bill 1976440 - already 
processed (167/2564) 2025-11-14 15:31:40,201 [INFO] Skipping bill 1978812 - already processed (168/2564) 2025-11-14 15:31:40,201 [INFO] Skipping bill 1978731 - already processed (169/2564) 2025-11-14 15:31:40,202 [INFO] Skipping bill 1949687 - already processed (170/2564) 2025-11-14 15:31:40,202 [INFO] Skipping bill 1980302 - already processed (171/2564) 2025-11-14 15:31:40,202 [INFO] Skipping bill 2032041 - already processed (172/2564) 2025-11-14 15:31:40,202 [INFO] Skipping bill 1978672 - already processed (173/2564) 2025-11-14 15:31:40,202 [INFO] Skipping bill 1955756 - already processed (174/2564) 2025-11-14 15:31:40,202 [INFO] Skipping bill 1970455 - already processed (175/2564) 2025-11-14 15:31:40,202 [INFO] Skipping bill 1978694 - already processed (176/2564) 2025-11-14 15:31:40,202 [INFO] Skipping bill 1976550 - already processed (177/2564) 2025-11-14 15:31:40,203 [INFO] Skipping bill 1908207 - already processed (178/2564) 2025-11-14 15:31:40,203 [INFO] Skipping bill 1971712 - already processed (179/2564) 2025-11-14 15:31:40,203 [INFO] Skipping bill 1919273 - already processed (180/2564) 2025-11-14 15:31:40,203 [INFO] Skipping bill 1893452 - already processed (181/2564) 2025-11-14 15:31:40,203 [INFO] Skipping bill 1971760 - already processed (182/2564) 2025-11-14 15:31:40,203 [INFO] Skipping bill 1978553 - already processed (183/2564) 2025-11-14 15:31:40,203 [INFO] Skipping bill 1980501 - already processed (184/2564) 2025-11-14 15:31:40,203 [INFO] Skipping bill 1980139 - already processed (185/2564) 2025-11-14 15:31:40,204 [INFO] Skipping bill 1908210 - already processed (186/2564) 2025-11-14 15:31:40,205 [INFO] Skipping bill 1980228 - already processed (187/2564) 2025-11-14 15:31:40,205 [INFO] Skipping bill 1947445 - already processed (188/2564) 2025-11-14 15:31:40,205 [INFO] Skipping bill 1971753 - already processed (189/2564) 2025-11-14 15:31:40,205 [INFO] Skipping bill 1943407 - already processed (190/2564) 2025-11-14 15:31:40,205 [INFO] Skipping bill 
1896630 - already processed (191/2564)
2025-11-14 15:31:40,205 [INFO] Skipping bill 1953097 - already processed (192/2564)
2025-11-14 15:31:40,205 [INFO] Skipping bill 1961095 - already processed (193/2564)
2025-11-14 15:31:40,205 [INFO] Skipping bill 1953091 - already processed (194/2564)
2025-11-14 15:31:40,205 [INFO] Skipping bill 1953081 - already processed (195/2564)
2025-11-14 15:31:40,205 [INFO] Skipping bill 1978871 - already processed (196/2564)
2025-11-14 15:31:40,206 [INFO] Skipping bill 1990396 - already processed (197/2564)
2025-11-14 15:31:40,206 [INFO] Processing 198/2564: Bill ID 1980067
2025-11-14 15:31:41,202 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:31:41,202 [ERROR] Failed to generate report for bill 1980067: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 270166 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 270166 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:31:42,215 [INFO] Skipping bill 1970450 - already processed (199/2564)
2025-11-14 15:31:42,216 [INFO] Skipping bill 1904793 - already processed (200/2564)
2025-11-14 15:31:42,216 [INFO] Skipping bill 1964689 - already processed (201/2564)
2025-11-14 15:31:42,216 [INFO] Skipping bill 1933300 - already processed (202/2564)
2025-11-14 15:31:42,216 [INFO] Skipping bill 2036404 - already processed (203/2564)
2025-11-14 15:31:42,216 [INFO] Skipping bill 1949685 - already processed (204/2564)
2025-11-14 15:31:42,217 [INFO] Skipping bill 1976474 - already processed (205/2564)
2025-11-14 15:31:42,217 [INFO] Skipping bill 1898373 - already processed (206/2564)
2025-11-14 15:31:42,217 [INFO] Skipping bill 2042443 - already processed (207/2564)
2025-11-14 15:31:42,217 [INFO] Skipping bill 2005483 - already processed (208/2564)
2025-11-14 15:31:42,217 [INFO] Skipping bill 1968261 - already processed (209/2564)
2025-11-14 15:31:42,217 [INFO] Skipping bill 1980234 - already processed (210/2564)
2025-11-14 15:31:42,217 [INFO] Skipping bill 1978559 - already processed (211/2564)
2025-11-14 15:31:42,217 [INFO] Skipping bill 1974545 - already processed (212/2564)
2025-11-14 15:31:42,217 [INFO] Skipping bill 1908089 - already processed (213/2564)
2025-11-14 15:31:42,218 [INFO] Skipping bill 1939198 - already processed (214/2564)
2025-11-14 15:31:42,218 [INFO] Skipping bill 1939199 - already processed (215/2564)
2025-11-14 15:31:42,218 [INFO] Skipping bill 1908087 - already processed (216/2564)
2025-11-14 15:31:42,218 [INFO] Skipping bill 1908088 - already processed (217/2564)
2025-11-14 15:31:42,218 [INFO] Skipping bill 1939200 - already processed (218/2564)
2025-11-14 15:31:42,218 [INFO] Skipping bill 1939201 - already processed (219/2564)
2025-11-14 15:31:42,218 [INFO] Skipping bill 1908090 - already processed (220/2564)
2025-11-14
15:31:42,218 [INFO] Skipping bill 1939197 - already processed (221/2564)
2025-11-14 15:31:42,218 [INFO] Skipping bill 1908086 - already processed (222/2564)
2025-11-14 15:31:42,218 [INFO] Skipping bill 1651326 - already processed (223/2564)
2025-11-14 15:31:42,218 [INFO] Skipping bill 1747628 - already processed (224/2564)
2025-11-14 15:31:42,219 [INFO] Skipping bill 1871619 - already processed (225/2564)
2025-11-14 15:31:42,219 [INFO] Skipping bill 1874953 - already processed (226/2564)
2025-11-14 15:31:42,219 [INFO] Skipping bill 1831016 - already processed (227/2564)
2025-11-14 15:31:42,219 [INFO] Skipping bill 1846007 - already processed (228/2564)
2025-11-14 15:31:42,219 [INFO] Skipping bill 2026977 - already processed (229/2564)
2025-11-14 15:31:42,219 [INFO] Skipping bill 2042502 - already processed (230/2564)
2025-11-14 15:31:42,219 [INFO] Skipping bill 2042537 - already processed (231/2564)
2025-11-14 15:31:42,219 [INFO] Skipping bill 2042540 - already processed (232/2564)
2025-11-14 15:31:42,220 [INFO] Skipping bill 1907590 - already processed (233/2564)
2025-11-14 15:31:42,220 [INFO] Skipping bill 1907863 - already processed (234/2564)
2025-11-14 15:31:42,220 [INFO] Skipping bill 2022323 - already processed (235/2564)
2025-11-14 15:31:42,220 [INFO] Skipping bill 1947638 - already processed (236/2564)
2025-11-14 15:31:42,220 [INFO] Skipping bill 1965815 - already processed (237/2564)
2025-11-14 15:31:42,220 [INFO] Skipping bill 2042471 - already processed (238/2564)
2025-11-14 15:31:42,220 [INFO] Skipping bill 2017117 - already processed (239/2564)
2025-11-14 15:31:42,220 [INFO] Skipping bill 1973900 - already processed (240/2564)
2025-11-14 15:31:42,220 [INFO] Skipping bill 2020829 - already processed (241/2564)
2025-11-14 15:31:42,220 [INFO] Skipping bill 1718823 - already processed (242/2564)
2025-11-14 15:31:42,220 [INFO] Skipping bill 1709526 - already processed (243/2564)
2025-11-14 15:31:42,220 [INFO] Skipping bill 1709356 - already processed
(244/2564)
2025-11-14 15:31:42,220 [INFO] Skipping bill 1839016 - already processed (245/2564)
2025-11-14 15:31:42,220 [INFO] Skipping bill 1859941 - already processed (246/2564)
2025-11-14 15:31:42,221 [INFO] Skipping bill 1839023 - already processed (247/2564)
2025-11-14 15:31:42,221 [INFO] Skipping bill 1860727 - already processed (248/2564)
2025-11-14 15:31:42,221 [INFO] Processing 249/2564: Bill ID 1876979
2025-11-14 15:31:42,884 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:31:42,887 [ERROR] Failed to generate report for bill 1876979: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 150875 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  [stack frames identical to the traceback for bill 1980067 above]
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 150875 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:31:43,902 [INFO] Skipping bill 1905069 - already processed (250/2564)
2025-11-14 15:31:43,903 [INFO] Skipping bill 1992824 - already processed (251/2564)
2025-11-14 15:31:43,904 [INFO] Skipping bill 1957876 - already processed (252/2564)
2025-11-14 15:31:43,904 [INFO] Skipping bill 1965500 - already processed (253/2564)
2025-11-14 15:31:43,904 [INFO] Skipping bill 1990151 - already processed (254/2564)
2025-11-14 15:31:43,904 [INFO] Skipping bill 1949174 - already processed (255/2564)
2025-11-14 15:31:43,904 [INFO] Skipping bill 1905038 - already processed (256/2564)
2025-11-14 15:31:43,904 [INFO] Skipping bill 1905159 - already processed (257/2564)
2025-11-14 15:31:43,905 [INFO] Skipping bill 1907650 - already processed (258/2564)
2025-11-14 15:31:43,905 [INFO] Skipping bill 1909616 - already processed (259/2564)
2025-11-14 15:31:43,905 [INFO] Skipping bill 1909665 - already processed (260/2564)
2025-11-14 15:31:43,905 [INFO] Skipping bill 1928585 - already
processed (261/2564)
2025-11-14 15:31:43,905 [INFO] Skipping bill 1928759 - already processed (262/2564)
2025-11-14 15:31:43,905 [INFO] Skipping bill 1928904 - already processed (263/2564)
2025-11-14 15:31:43,906 [INFO] Skipping bill 1931737 - already processed (264/2564)
2025-11-14 15:31:43,906 [INFO] Skipping bill 1928076 - already processed (265/2564)
2025-11-14 15:31:43,906 [INFO] Skipping bill 1935956 - already processed (266/2564)
2025-11-14 15:31:43,906 [INFO] Skipping bill 1905222 - already processed (267/2564)
2025-11-14 15:31:43,906 [INFO] Skipping bill 1932777 - already processed (268/2564)
2025-11-14 15:31:43,907 [INFO] Skipping bill 1905141 - already processed (269/2564)
2025-11-14 15:31:43,907 [INFO] Processing 270/2564: Bill ID 2034928
2025-11-14 15:31:45,565 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:31:45,568 [ERROR] Failed to generate report for bill 2034928: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 412715 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  [stack frames identical to the traceback for bill 1980067 above]
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 412715 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:31:45,624 [INFO] Saved 2564 reports to data/bill_reports.json
2025-11-14 15:31:45,624 [INFO] Progress: 270/2564 - Processed: 0, Skipped: 264, Errors: 6
2025-11-14 15:31:46,635 [INFO] Skipping bill 1820947 - already processed (271/2564)
2025-11-14 15:31:46,636 [INFO] Skipping bill 2038143 - already processed (272/2564)
2025-11-14 15:31:46,637 [INFO] Skipping bill 1946119 - already processed (273/2564)
2025-11-14 15:31:46,637 [INFO] Skipping bill 2038726 - already processed (274/2564)
2025-11-14 15:31:46,637 [INFO] Skipping bill 2015494 - already processed (275/2564)
2025-11-14 15:31:46,637 [INFO] Skipping bill 1754732 - already processed (276/2564)
2025-11-14 15:31:46,637 [INFO] Skipping bill 1716623 - already processed (277/2564)
2025-11-14 15:31:46,637 [INFO] Skipping bill 1723029 - already processed (278/2564)
2025-11-14 15:31:46,637 [INFO] Skipping bill 1749221 - already processed (279/2564)
2025-11-14 15:31:46,637 [INFO] Skipping bill 1756757 - already processed (280/2564)
2025-11-14 15:31:46,637 [INFO] Skipping bill 1722774 - already
processed (281/2564)
2025-11-14 15:31:46,638 [INFO] Processing 282/2564: Bill ID 1746175
2025-11-14 15:31:48,179 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:31:48,182 [ERROR] Failed to generate report for bill 1746175: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 482085 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  [stack frames identical to the traceback for bill 1980067 above]
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 482085 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:31:49,198 [INFO] Skipping bill 1749049 - already processed (283/2564)
2025-11-14 15:31:49,199 [INFO] Skipping bill 1799517 - already processed (284/2564)
2025-11-14 15:31:49,199 [INFO] Skipping bill 1799058 - already processed (285/2564)
2025-11-14 15:31:49,199 [INFO] Skipping bill 1792427 - already processed (286/2564)
2025-11-14 15:31:49,199 [INFO] Skipping bill 1791537 - already processed (287/2564)
2025-11-14 15:31:49,199 [INFO] Skipping bill 1793699 - already processed (288/2564)
2025-11-14 15:31:49,199 [INFO] Skipping bill 1784035 - already processed (289/2564)
2025-11-14 15:31:49,199 [INFO] Skipping bill 1789608 - already processed (290/2564)
2025-11-14 15:31:49,199 [INFO] Skipping bill 1797287 - already processed (291/2564)
2025-11-14 15:31:49,199 [INFO] Skipping bill 1799146 - already processed (292/2564)
2025-11-14 15:31:49,199 [INFO] Skipping bill 1799256 - already processed (293/2564)
2025-11-14 15:31:49,199 [INFO] Skipping bill 1799530 - already
processed (294/2564)
2025-11-14 15:31:49,199 [INFO] Skipping bill 1799073 - already processed (295/2564)
2025-11-14 15:31:49,200 [INFO] Skipping bill 1798525 - already processed (296/2564)
2025-11-14 15:31:49,200 [INFO] Skipping bill 1812862 - already processed (297/2564)
2025-11-14 15:31:49,200 [INFO] Skipping bill 1799556 - already processed (298/2564)
2025-11-14 15:31:49,200 [INFO] Skipping bill 1793796 - already processed (299/2564)
2025-11-14 15:31:49,200 [INFO] Skipping bill 1840899 - already processed (300/2564)
2025-11-14 15:31:49,200 [INFO] Skipping bill 1849855 - already processed (301/2564)
2025-11-14 15:31:49,200 [INFO] Skipping bill 1796581 - already processed (302/2564)
2025-11-14 15:31:49,200 [INFO] Skipping bill 1785974 - already processed (303/2564)
2025-11-14 15:31:49,200 [INFO] Skipping bill 1799599 - already processed (304/2564)
2025-11-14 15:31:49,200 [INFO] Skipping bill 1799188 - already processed (305/2564)
2025-11-14 15:31:49,200 [INFO] Skipping bill 1834738 - already processed (306/2564)
2025-11-14 15:31:49,201 [INFO] Skipping bill 1799528 - already processed (307/2564)
2025-11-14 15:31:49,201 [INFO] Processing 308/2564: Bill ID 1829539
2025-11-14 15:31:50,709 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:31:50,711 [ERROR] Failed to generate report for bill 1829539: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 487138 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  [stack frames identical to the traceback for bill 1980067 above]
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 487138 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:31:51,729 [INFO] Skipping bill 1953506 - already processed (309/2564)
2025-11-14 15:31:51,730 [INFO] Skipping bill 1969171 - already processed (310/2564)
2025-11-14 15:31:51,731 [INFO] Skipping bill 1963529 - already processed (311/2564)
2025-11-14 15:31:51,731 [INFO] Skipping bill 1973172 - already processed (312/2564)
2025-11-14 15:31:51,731 [INFO] Skipping bill 1977164 - already processed (313/2564)
2025-11-14 15:31:51,731 [INFO] Skipping bill 1984764 - already processed (314/2564)
2025-11-14 15:31:51,731 [INFO] Skipping bill 1988421 - already processed (315/2564)
2025-11-14 15:31:51,731 [INFO] Skipping bill 1963407 - already processed (316/2564)
2025-11-14 15:31:51,731 [INFO] Skipping bill 1977647 - already processed (317/2564)
2025-11-14 15:31:51,731 [INFO] Skipping bill 1985537 - already processed (318/2564)
2025-11-14 15:31:51,731 [INFO] Skipping bill 1988809 - already processed (319/2564)
2025-11-14 15:31:51,731 [INFO] Skipping bill 1989241 - already processed (320/2564)
2025-11-14 15:31:51,731 [INFO] Skipping bill 1980688 - already
processed (321/2564)
2025-11-14 15:31:51,732 [INFO] Skipping bill 1985490 - already processed (322/2564)
2025-11-14 15:31:51,732 [INFO] Skipping bill 1987236 - already processed (323/2564)
2025-11-14 15:31:51,732 [INFO] Skipping bill 2009168 - already processed (324/2564)
2025-11-14 15:31:51,732 [INFO] Skipping bill 1985684 - already processed (325/2564)
2025-11-14 15:31:51,732 [INFO] Skipping bill 1982957 - already processed (326/2564)
2025-11-14 15:31:51,732 [INFO] Skipping bill 2009660 - already processed (327/2564)
2025-11-14 15:31:51,732 [INFO] Skipping bill 1987290 - already processed (328/2564)
2025-11-14 15:31:51,732 [INFO] Skipping bill 2021527 - already processed (329/2564)
2025-11-14 15:31:51,733 [INFO] Skipping bill 1984006 - already processed (330/2564)
2025-11-14 15:31:51,733 [INFO] Skipping bill 1944378 - already processed (331/2564)
2025-11-14 15:31:51,733 [INFO] Processing 332/2564: Bill ID 2016312
2025-11-14 15:31:53,699 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:31:53,702 [ERROR] Failed to generate report for bill 2016312: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 508553 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  [stack frames identical to the traceback for bill 1980067 above]
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 508553 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:31:54,719 [INFO] Skipping bill 1975511 - already processed (333/2564) 2025-11-14 15:31:54,720 [INFO] Skipping bill 1807866 - already processed (334/2564) 2025-11-14 15:31:54,720 [INFO] Skipping bill 1825040 - already processed (335/2564) 2025-11-14 15:31:54,720 [INFO] Skipping bill 1824663 - already processed (336/2564) 2025-11-14 15:31:54,720 [INFO] Skipping bill 1827759 - already processed (337/2564) 2025-11-14 15:31:54,720 [INFO] Skipping bill 1807849 - already processed (338/2564) 2025-11-14 15:31:54,720 [INFO] Skipping bill 1852469 - already processed (339/2564) 2025-11-14 15:31:54,721 [INFO] Skipping bill 1724818 - already processed (340/2564) 2025-11-14 15:31:54,721 [INFO] Skipping bill 1827801 - already processed (341/2564) 2025-11-14 15:31:54,721 [INFO] Skipping bill 1842042 - already processed (342/2564) 2025-11-14 15:31:54,721 [INFO] Skipping bill 1800509 - already processed (343/2564) 2025-11-14 15:31:54,721 [INFO] Skipping bill 1829048 - already processed (344/2564) 2025-11-14 15:31:54,721 [INFO] Skipping bill 1691393 - already 
processed (345/2564) 2025-11-14 15:31:54,721 [INFO] Skipping bill 1684843 - already processed (346/2564) 2025-11-14 15:31:54,722 [INFO] Skipping bill 1945161 - already processed (347/2564) 2025-11-14 15:31:54,722 [INFO] Skipping bill 1947679 - already processed (348/2564) 2025-11-14 15:31:54,722 [INFO] Skipping bill 1943273 - already processed (349/2564) 2025-11-14 15:31:54,722 [INFO] Skipping bill 1919150 - already processed (350/2564) 2025-11-14 15:31:54,722 [INFO] Skipping bill 2012228 - already processed (351/2564) 2025-11-14 15:31:54,722 [INFO] Skipping bill 1990355 - already processed (352/2564) 2025-11-14 15:31:54,722 [INFO] Skipping bill 1960995 - already processed (353/2564) 2025-11-14 15:31:54,722 [INFO] Skipping bill 1968119 - already processed (354/2564) 2025-11-14 15:31:54,722 [INFO] Skipping bill 2006978 - already processed (355/2564) 2025-11-14 15:31:54,722 [INFO] Skipping bill 1974144 - already processed (356/2564) 2025-11-14 15:31:54,722 [INFO] Skipping bill 1974243 - already processed (357/2564) 2025-11-14 15:31:54,723 [INFO] Skipping bill 1974425 - already processed (358/2564) 2025-11-14 15:31:54,723 [INFO] Skipping bill 2016144 - already processed (359/2564) 2025-11-14 15:31:54,723 [INFO] Skipping bill 1974177 - already processed (360/2564) 2025-11-14 15:31:54,723 [INFO] Skipping bill 1974222 - already processed (361/2564) 2025-11-14 15:31:54,723 [INFO] Skipping bill 1974239 - already processed (362/2564) 2025-11-14 15:31:54,723 [INFO] Skipping bill 1974292 - already processed (363/2564) 2025-11-14 15:31:54,723 [INFO] Skipping bill 1974356 - already processed (364/2564) 2025-11-14 15:31:54,723 [INFO] Skipping bill 1974381 - already processed (365/2564) 2025-11-14 15:31:54,723 [INFO] Skipping bill 1974418 - already processed (366/2564) 2025-11-14 15:31:54,723 [INFO] Skipping bill 1990318 - already processed (367/2564) 2025-11-14 15:31:54,723 [INFO] Skipping bill 1987837 - already processed (368/2564) 2025-11-14 15:31:54,723 [INFO] Skipping bill 
1974421 - already processed (369/2564) 2025-11-14 15:31:54,724 [INFO] Skipping bill 1982057 - already processed (370/2564) 2025-11-14 15:31:54,724 [INFO] Skipping bill 1968164 - already processed (371/2564) 2025-11-14 15:31:54,724 [INFO] Skipping bill 1979990 - already processed (372/2564) 2025-11-14 15:31:54,724 [INFO] Skipping bill 1961023 - already processed (373/2564) 2025-11-14 15:31:54,724 [INFO] Skipping bill 1970366 - already processed (374/2564) 2025-11-14 15:31:54,724 [INFO] Skipping bill 1976266 - already processed (375/2564) 2025-11-14 15:31:54,724 [INFO] Skipping bill 1735435 - already processed (376/2564) 2025-11-14 15:31:54,724 [INFO] Skipping bill 1735103 - already processed (377/2564) 2025-11-14 15:31:54,724 [INFO] Skipping bill 1735239 - already processed (378/2564) 2025-11-14 15:31:54,724 [INFO] Skipping bill 1676639 - already processed (379/2564) 2025-11-14 15:31:54,725 [INFO] Skipping bill 1822936 - already processed (380/2564) 2025-11-14 15:31:54,725 [INFO] Skipping bill 1824099 - already processed (381/2564) 2025-11-14 15:31:54,725 [INFO] Skipping bill 1823066 - already processed (382/2564) 2025-11-14 15:31:54,725 [INFO] Skipping bill 1821100 - already processed (383/2564) 2025-11-14 15:31:54,725 [INFO] Skipping bill 1821376 - already processed (384/2564) 2025-11-14 15:31:54,726 [INFO] Skipping bill 1861884 - already processed (385/2564) 2025-11-14 15:31:54,726 [INFO] Skipping bill 1862091 - already processed (386/2564) 2025-11-14 15:31:54,726 [INFO] Skipping bill 1824408 - already processed (387/2564) 2025-11-14 15:31:54,726 [INFO] Skipping bill 1823094 - already processed (388/2564) 2025-11-14 15:31:54,726 [INFO] Skipping bill 1859976 - already processed (389/2564) 2025-11-14 15:31:54,726 [INFO] Skipping bill 1860020 - already processed (390/2564) 2025-11-14 15:31:54,726 [INFO] Skipping bill 1822457 - already processed (391/2564) 2025-11-14 15:31:54,726 [INFO] Skipping bill 1823240 - already processed (392/2564) 2025-11-14 15:31:54,726 
[INFO] Skipping bill 1822425 - already processed (393/2564) 2025-11-14 15:31:54,726 [INFO] Skipping bill 1823305 - already processed (394/2564) 2025-11-14 15:31:54,726 [INFO] Skipping bill 1816605 - already processed (395/2564) 2025-11-14 15:31:54,726 [INFO] Skipping bill 1822519 - already processed (396/2564) 2025-11-14 15:31:54,726 [INFO] Skipping bill 1822760 - already processed (397/2564) 2025-11-14 15:31:54,726 [INFO] Skipping bill 1821542 - already processed (398/2564) 2025-11-14 15:31:54,726 [INFO] Skipping bill 1862395 - already processed (399/2564) 2025-11-14 15:31:54,726 [INFO] Skipping bill 1862180 - already processed (400/2564) 2025-11-14 15:31:54,726 [INFO] Skipping bill 1820992 - already processed (401/2564) 2025-11-14 15:31:54,726 [INFO] Skipping bill 1822908 - already processed (402/2564) 2025-11-14 15:31:54,726 [INFO] Skipping bill 1816124 - already processed (403/2564) 2025-11-14 15:31:54,727 [INFO] Skipping bill 1826161 - already processed (404/2564) 2025-11-14 15:31:54,727 [INFO] Skipping bill 1822451 - already processed (405/2564) 2025-11-14 15:31:54,727 [INFO] Skipping bill 1823328 - already processed (406/2564) 2025-11-14 15:31:54,727 [INFO] Skipping bill 1860844 - already processed (407/2564) 2025-11-14 15:31:54,727 [INFO] Skipping bill 1819671 - already processed (408/2564) 2025-11-14 15:31:54,727 [INFO] Skipping bill 1815658 - already processed (409/2564) 2025-11-14 15:31:54,727 [INFO] Skipping bill 1929168 - already processed (410/2564) 2025-11-14 15:31:54,727 [INFO] Skipping bill 1939103 - already processed (411/2564) 2025-11-14 15:31:54,727 [INFO] Skipping bill 1939150 - already processed (412/2564) 2025-11-14 15:31:54,727 [INFO] Skipping bill 1924410 - already processed (413/2564) 2025-11-14 15:31:54,727 [INFO] Skipping bill 1929804 - already processed (414/2564) 2025-11-14 15:31:54,727 [INFO] Skipping bill 1929561 - already processed (415/2564) 2025-11-14 15:31:54,727 [INFO] Skipping bill 1925992 - already processed (416/2564) 
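Every failure in this run is the same 400: the serialized bill JSON exceeds the model's 128000-token context window (487138 and 508553 tokens so far). A size guard before `chain.invoke({"bill_json": bill_json})` in `generate_reports.py` would turn these hard failures into degraded-but-completed reports. The sketch below is hypothetical, not the repository's code: `estimate_tokens` uses a rough 4-characters-per-token heuristic (an exact count would use `tiktoken`), and `truncate_bill_json` assumes the bulk of each bill dict lives in top-level string fields.

```python
import json

# Rough token estimate: ~4 characters per token for English text.
# An exact count would come from tiktoken; this keeps the sketch
# dependency-free and errs on the side of trimming too much.
CHARS_PER_TOKEN = 4
MAX_INPUT_TOKENS = 110_000  # headroom below the 128k context limit


def estimate_tokens(text: str) -> int:
    return len(text) // CHARS_PER_TOKEN


def truncate_bill_json(bill: dict, max_tokens: int = MAX_INPUT_TOKENS) -> str:
    """Serialize a bill dict, halving its longest string field until the
    estimated token count fits under max_tokens."""
    bill = dict(bill)  # shallow copy; don't mutate the caller's dict
    while estimate_tokens(json.dumps(bill)) > max_tokens:
        # Find the longest string value and cut it in half.
        key = max((k for k, v in bill.items() if isinstance(v, str)),
                  key=lambda k: len(bill[k]), default=None)
        if key is None or not bill[key]:
            break  # nothing left to trim
        bill[key] = bill[key][: len(bill[key]) // 2]
    return json.dumps(bill)


# An oversized toy bill (~500k estimated tokens) shrinks to fit.
oversized = {"bill_id": 2016312, "full_text": "x" * 2_000_000}
trimmed = truncate_bill_json(oversized)
print(estimate_tokens(trimmed) <= MAX_INPUT_TOKENS)  # True
```

The right headroom depends on how many tokens the prompt template and the completion itself need, so `MAX_INPUT_TOKENS` here is an illustrative guess, not a tuned value.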
2025-11-14 15:31:54,727 [INFO] Skipping bill 1928926 - already processed (417/2564) 2025-11-14 15:31:54,727 [INFO] Skipping bill 1931961 - already processed (418/2564) 2025-11-14 15:31:54,727 [INFO] Skipping bill 1929636 - already processed (419/2564) 2025-11-14 15:31:54,727 [INFO] Skipping bill 1909994 - already processed (420/2564) 2025-11-14 15:31:54,727 [INFO] Skipping bill 1928408 - already processed (421/2564) 2025-11-14 15:31:54,727 [INFO] Skipping bill 1928598 - already processed (422/2564) 2025-11-14 15:31:54,728 [INFO] Skipping bill 1994243 - already processed (423/2564) 2025-11-14 15:31:54,728 [INFO] Skipping bill 1994303 - already processed (424/2564) 2025-11-14 15:31:54,728 [INFO] Skipping bill 1929659 - already processed (425/2564) 2025-11-14 15:31:54,728 [INFO] Skipping bill 1932766 - already processed (426/2564) 2025-11-14 15:31:54,728 [INFO] Skipping bill 1928570 - already processed (427/2564) 2025-11-14 15:31:54,728 [INFO] Skipping bill 1934608 - already processed (428/2564) 2025-11-14 15:31:54,728 [INFO] Skipping bill 1928364 - already processed (429/2564) 2025-11-14 15:31:54,728 [INFO] Skipping bill 1929760 - already processed (430/2564) 2025-11-14 15:31:54,728 [INFO] Skipping bill 1933272 - already processed (431/2564) 2025-11-14 15:31:54,728 [INFO] Skipping bill 1929496 - already processed (432/2564) 2025-11-14 15:31:54,728 [INFO] Skipping bill 1990347 - already processed (433/2564) 2025-11-14 15:31:54,728 [INFO] Skipping bill 1995251 - already processed (434/2564) 2025-11-14 15:31:54,728 [INFO] Skipping bill 1995449 - already processed (435/2564) 2025-11-14 15:31:54,728 [INFO] Skipping bill 1995259 - already processed (436/2564) 2025-11-14 15:31:54,728 [INFO] Skipping bill 1995271 - already processed (437/2564) 2025-11-14 15:31:54,728 [INFO] Skipping bill 1995747 - already processed (438/2564) 2025-11-14 15:31:54,728 [INFO] Skipping bill 1991557 - already processed (439/2564) 2025-11-14 15:31:54,728 [INFO] Skipping bill 1991563 - already 
processed (440/2564) 2025-11-14 15:31:54,729 [INFO] Skipping bill 1995783 - already processed (441/2564) 2025-11-14 15:31:54,729 [INFO] Skipping bill 1929457 - already processed (442/2564) 2025-11-14 15:31:54,729 [INFO] Skipping bill 1915997 - already processed (443/2564) 2025-11-14 15:31:54,729 [INFO] Skipping bill 1933178 - already processed (444/2564) 2025-11-14 15:31:54,729 [INFO] Skipping bill 1992758 - already processed (445/2564) 2025-11-14 15:31:54,729 [INFO] Skipping bill 1993026 - already processed (446/2564) 2025-11-14 15:31:54,729 [INFO] Skipping bill 1995569 - already processed (447/2564) 2025-11-14 15:31:54,729 [INFO] Skipping bill 1992805 - already processed (448/2564) 2025-11-14 15:31:54,729 [INFO] Skipping bill 1995900 - already processed (449/2564) 2025-11-14 15:31:54,729 [INFO] Skipping bill 1993019 - already processed (450/2564) 2025-11-14 15:31:54,729 [INFO] Skipping bill 1847870 - already processed (451/2564) 2025-11-14 15:31:54,729 [INFO] Skipping bill 1812600 - already processed (452/2564) 2025-11-14 15:31:54,729 [INFO] Skipping bill 1848008 - already processed (453/2564) 2025-11-14 15:31:54,729 [INFO] Skipping bill 1825516 - already processed (454/2564) 2025-11-14 15:31:54,729 [INFO] Processing 455/2564: Bill ID 1845026 2025-11-14 15:31:55,180 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:31:55,181 [ERROR] Failed to generate report for bill 1845026: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 153566 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 153566 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:31:56,198 [INFO] Skipping bill 1962312 - already processed (456/2564) 2025-11-14 15:31:56,198 [INFO] Skipping bill 1954011 - already processed (457/2564) 2025-11-14 15:31:56,198 [INFO] Skipping bill 1991380 - already processed (458/2564) 2025-11-14 15:31:56,198 [INFO] Processing 459/2564: Bill ID 2011846 2025-11-14 15:31:56,766 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:31:56,768 [ERROR] Failed to generate report for bill 2011846: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 147671 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 147671 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:31:57,784 [INFO] Skipping bill 1838778 - already processed (460/2564) 2025-11-14 15:31:57,785 [INFO] Skipping bill 1713666 - already processed (461/2564) 2025-11-14 15:31:57,785 [INFO] Skipping bill 1837146 - already processed (462/2564) 2025-11-14 15:31:57,786 [INFO] Skipping bill 1842401 - already processed (463/2564) 2025-11-14 15:31:57,786 [INFO] Skipping bill 1838992 - already processed (464/2564) 2025-11-14 15:31:57,786 [INFO] Skipping bill 1840748 - already processed (465/2564) 2025-11-14 15:31:57,786 [INFO] Skipping bill 1841780 - already processed (466/2564) 2025-11-14 15:31:57,786 [INFO] Skipping bill 1831504 - already processed (467/2564) 2025-11-14 15:31:57,786 [INFO] Skipping bill 1832905 - already processed (468/2564) 2025-11-14 15:31:57,786 [INFO] Skipping bill 1843072 - already processed (469/2564) 2025-11-14 15:31:57,787 [INFO] Skipping bill 1839869 - already processed (470/2564) 2025-11-14 15:31:57,787 [INFO] Skipping bill 1814012 - already processed (471/2564) 2025-11-14 15:31:57,787 [INFO] Skipping bill 1842520 - already 
processed (472/2564) 2025-11-14 15:31:57,787 [INFO] Skipping bill 1835262 - already processed (473/2564) 2025-11-14 15:31:57,787 [INFO] Skipping bill 1843020 - already processed (474/2564) 2025-11-14 15:31:57,787 [INFO] Skipping bill 1878243 - already processed (475/2564) 2025-11-14 15:31:57,788 [INFO] Skipping bill 1893072 - already processed (476/2564) 2025-11-14 15:31:57,788 [INFO] Skipping bill 1713755 - already processed (477/2564) 2025-11-14 15:31:57,788 [INFO] Skipping bill 1842316 - already processed (478/2564) 2025-11-14 15:31:57,788 [INFO] Skipping bill 1838852 - already processed (479/2564) 2025-11-14 15:31:57,788 [INFO] Skipping bill 1838748 - already processed (480/2564) 2025-11-14 15:31:57,788 [INFO] Skipping bill 1635340 - already processed (481/2564) 2025-11-14 15:31:57,788 [INFO] Skipping bill 1713127 - already processed (482/2564) 2025-11-14 15:31:57,788 [INFO] Skipping bill 1818470 - already processed (483/2564) 2025-11-14 15:31:57,788 [INFO] Skipping bill 1837189 - already processed (484/2564) 2025-11-14 15:31:57,788 [INFO] Skipping bill 1635556 - already processed (485/2564) 2025-11-14 15:31:57,789 [INFO] Skipping bill 1692465 - already processed (486/2564) 2025-11-14 15:31:57,789 [INFO] Skipping bill 1843326 - already processed (487/2564) 2025-11-14 15:31:57,789 [INFO] Skipping bill 1822203 - already processed (488/2564) 2025-11-14 15:31:57,789 [INFO] Skipping bill 1838434 - already processed (489/2564) 2025-11-14 15:31:57,789 [INFO] Skipping bill 1714042 - already processed (490/2564) 2025-11-14 15:31:57,789 [INFO] Skipping bill 1840824 - already processed (491/2564) 2025-11-14 15:31:57,789 [INFO] Skipping bill 1810043 - already processed (492/2564) 2025-11-14 15:31:57,789 [INFO] Skipping bill 1762665 - already processed (493/2564) 2025-11-14 15:31:57,789 [INFO] Skipping bill 1831619 - already processed (494/2564) 2025-11-14 15:31:57,789 [INFO] Skipping bill 1712988 - already processed (495/2564) 2025-11-14 15:31:57,789 [INFO] Skipping bill 
1704077 - already processed (496/2564) 2025-11-14 15:31:57,789 [INFO] Skipping bill 1712903 - already processed (497/2564) 2025-11-14 15:31:57,790 [INFO] Skipping bill 1818714 - already processed (498/2564) 2025-11-14 15:31:57,790 [INFO] Skipping bill 1842743 - already processed (499/2564) 2025-11-14 15:31:57,790 [INFO] Processing 500/2564: Bill ID 1838518 2025-11-14 15:32:00,547 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:32:00,550 [ERROR] Failed to generate report for bill 1838518: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 853564 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... 
**kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... **kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return 
self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 853564 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:32:00,606 [INFO] Saved 2564 reports to data/bill_reports.json 2025-11-14 15:32:00,606 [INFO] Progress: 500/2564 - Processed: 0, Skipped: 488, Errors: 12 2025-11-14 15:32:01,615 [INFO] Processing 501/2564: Bill ID 1794181 2025-11-14 15:32:02,373 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:32:02,374 [ERROR] Failed to generate report for bill 1794181: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 151032 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 151032 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:32:03,386 [INFO] Processing 502/2564: Bill ID 1708593
2025-11-14 15:32:03,924 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:32:03,926 [ERROR] Failed to generate report for bill 1708593: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 139146 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  [stack identical to the traceback above: create_reports_with_resume -> create_detailed_report -> chain.invoke -> langchain_core -> langchain_openai -> openai._base_client.request]
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 139146 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:32:04,940 [INFO] Processing 503/2564: Bill ID 1704148
2025-11-14 15:32:09,193 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:32:09,196 [ERROR] Failed to generate report for bill 1704148: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 823023 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  [stack identical to the traceback above: create_reports_with_resume -> create_detailed_report -> chain.invoke -> langchain_core -> langchain_openai -> openai._base_client.request]
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 823023 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:32:10,212 [INFO] Processing 504/2564: Bill ID 1704278
2025-11-14 15:32:12,433 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:32:12,436 [ERROR] Failed to generate report for bill 1704278: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 823015 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  [stack identical to the traceback above: create_reports_with_resume -> create_detailed_report -> chain.invoke -> langchain_core -> langchain_openai -> openai._base_client.request]
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 823015 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:32:13,454 [INFO] Skipping bill 1714051 - already processed (505/2564)
2025-11-14 15:32:13,455 [INFO] Skipping bill 1951980 - already processed (506/2564)
2025-11-14 15:32:13,455 [INFO] Skipping bill 1942546 - already processed (507/2564)
2025-11-14 15:32:13,455 [INFO] Skipping bill 1954662 - already processed (508/2564)
2025-11-14 15:32:13,456 [INFO] Skipping bill 1962278 - already processed (509/2564)
2025-11-14 15:32:13,456 [INFO] Skipping bill 1959604 - already processed (510/2564)
2025-11-14 15:32:13,456 [INFO] Skipping bill 1961963 - already processed (511/2564)
2025-11-14 15:32:13,456 [INFO] Skipping bill 1906420 - already processed (512/2564)
2025-11-14 15:32:13,456 [INFO] Skipping bill 1959700 - already processed (513/2564)
2025-11-14 15:32:13,456 [INFO] Skipping bill 1960223 - already processed (514/2564)
2025-11-14 15:32:13,457 [INFO] Skipping bill 1955104 - already processed (515/2564)
2025-11-14 15:32:13,457 [INFO] Skipping bill 1962582 - already processed (516/2564)
2025-11-14 15:32:13,457 [INFO] Skipping bill 1945671 - already processed (517/2564)
2025-11-14 15:32:13,457 [INFO] Skipping bill 1927329 - already processed (518/2564)
2025-11-14 15:32:13,457 [INFO] Skipping bill 1950703 - already processed (519/2564)
2025-11-14 15:32:13,457 [INFO] Skipping bill 1962488 - already processed (520/2564)
2025-11-14 15:32:13,457 [INFO] Skipping bill 1945525 - already processed (521/2564)
2025-11-14 15:32:13,457 [INFO] Skipping bill 1958920 - already processed (522/2564)
2025-11-14 15:32:13,458 [INFO] Skipping bill 1962097 - already processed (523/2564)
2025-11-14 15:32:13,458 [INFO] Skipping bill 1963192 - already processed (524/2564)
2025-11-14 15:32:13,458 [INFO] Skipping bill 1947169 - already processed (525/2564)
2025-11-14 15:32:13,458 [INFO] Skipping bill 1961929 - already processed (526/2564)
2025-11-14 15:32:13,458 [INFO] Skipping bill 1962057 - already processed (527/2564)
2025-11-14 15:32:13,458 [INFO] Skipping bill 1973797 - already processed (528/2564)
2025-11-14 15:32:13,458 [INFO] Skipping bill 1963087 - already processed (529/2564)
2025-11-14 15:32:13,458 [INFO] Skipping bill 1940139 - already processed (530/2564)
2025-11-14 15:32:13,458 [INFO] Skipping bill 1941211 - already processed (531/2564)
2025-11-14 15:32:13,459 [INFO] Skipping bill 1906434 - already processed (532/2564)
2025-11-14 15:32:13,459 [INFO] Skipping bill 1963178 - already processed (533/2564)
2025-11-14 15:32:13,459 [INFO] Skipping bill 1954188 - already processed (534/2564)
2025-11-14 15:32:13,459 [INFO] Skipping bill 1954475 - already processed (535/2564)
2025-11-14 15:32:13,459 [INFO] Skipping bill 1957381 - already processed (536/2564)
2025-11-14 15:32:13,459 [INFO] Skipping bill 1962329 - already processed (537/2564)
2025-11-14 15:32:13,459 [INFO] Skipping bill 1962675 - already processed (538/2564)
2025-11-14 15:32:13,459 [INFO] Skipping bill 1935756 - already processed (539/2564)
2025-11-14 15:32:13,460 [INFO] Skipping bill 1945467 - already processed (540/2564)
2025-11-14 15:32:13,460 [INFO] Skipping bill 1907066 - already processed (541/2564)
2025-11-14 15:32:13,460 [INFO] Skipping bill 1985138 - already processed (542/2564)
2025-11-14 15:32:13,460 [INFO] Skipping bill 1961501 - already processed (543/2564)
2025-11-14 15:32:13,460 [INFO] Skipping bill 1962291 - already processed (544/2564)
2025-11-14 15:32:13,460 [INFO] Skipping bill 2034790 - already processed (545/2564)
2025-11-14 15:32:13,460 [INFO] Skipping bill 1962885 - already processed (546/2564)
2025-11-14 15:32:13,460 [INFO] Skipping bill 1960413 - already processed (547/2564)
2025-11-14 15:32:13,461 [INFO] Skipping bill 1959956 - already processed (548/2564)
2025-11-14 15:32:13,461 [INFO] Processing 549/2564: Bill ID 1962986
2025-11-14 15:32:17,054 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:32:17,056 [ERROR] Failed to generate report for bill 1962986: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1167379 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  [stack identical to the traceback above: create_reports_with_resume -> create_detailed_report -> chain.invoke -> langchain_core -> langchain_openai -> openai._base_client.request]
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1167379 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:32:18,070 [INFO] Processing 550/2564: Bill ID 1960510
2025-11-14 15:32:19,511 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:32:19,514 [ERROR] Failed to generate report for bill 1960510: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 156228 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  [stack identical to the traceback above: create_reports_with_resume -> create_detailed_report -> chain.invoke -> langchain_core -> langchain_openai -> openai._base_client.request]
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 156228 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:32:19,573 [INFO] Saved 2564 reports to data/bill_reports.json
2025-11-14 15:32:19,573 [INFO] Progress: 550/2564 - Processed: 0, Skipped: 532, Errors: 18
2025-11-14 15:32:20,584 [INFO] Skipping bill 1962952 - already processed (551/2564)
2025-11-14 15:32:20,585 [INFO] Processing 552/2564: Bill ID 1645841
2025-11-14 15:32:21,290 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:32:21,291 [ERROR] Failed to generate report for bill 1645841: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 162324 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  [stack identical to the traceback above: create_reports_with_resume -> create_detailed_report -> chain.invoke -> langchain_core -> langchain_openai -> openai._base_client.request]
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 162324 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:32:22,304 [INFO] Skipping bill 1799709 - already processed (553/2564)
2025-11-14 15:32:22,304 [INFO] Skipping bill 1797422 - already processed (554/2564)
2025-11-14 15:32:22,305 [INFO] Skipping bill 1801018 - already processed (555/2564)
2025-11-14 15:32:22,305 [INFO] Skipping bill 1799688 - already processed (556/2564)
2025-11-14 15:32:22,305 [INFO] Skipping bill 1909475 - already processed (557/2564)
2025-11-14 15:32:22,305 [INFO] Skipping bill 1921138 - already processed (558/2564)
2025-11-14 15:32:22,305 [INFO] Skipping bill 1917007 - already processed (559/2564)
2025-11-14 15:32:22,305 [INFO] Skipping bill 1921879 - already processed (560/2564)
2025-11-14 15:32:22,305 [INFO] Skipping bill 1915249 - already processed (561/2564)
2025-11-14 15:32:22,305 [INFO] Skipping bill 1912345 - already processed (562/2564)
2025-11-14 15:32:22,305 [INFO] Processing 563/2564: Bill ID 1897676
2025-11-14 15:32:22,973 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:32:22,975 [ERROR] Failed to
generate report for bill 1897676: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 165130 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 165130 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:32:23,991 [INFO] Skipping bill 1847772 - already processed (564/2564) 2025-11-14 15:32:23,992 [INFO] Skipping bill 1825218 - already processed (565/2564) 2025-11-14 15:32:23,994 [INFO] Skipping bill 1839463 - already processed (566/2564) 2025-11-14 15:32:23,996 [INFO] Skipping bill 1665194 - already processed (567/2564) 2025-11-14 15:32:23,999 [INFO] Skipping bill 1708118 - already processed (568/2564) 2025-11-14 15:32:23,999 [INFO] Skipping bill 1802090 - already processed (569/2564) 2025-11-14 15:32:23,999 [INFO] Skipping bill 1823725 - already processed (570/2564) 2025-11-14 15:32:23,999 [INFO] Skipping bill 1845657 - already processed (571/2564) 2025-11-14 15:32:23,999 [INFO] Skipping bill 1846612 - already processed (572/2564) 2025-11-14 15:32:23,999 [INFO] Skipping bill 1870077 - already processed (573/2564) 2025-11-14 15:32:24,000 [INFO] Skipping bill 1870897 - already processed (574/2564) 2025-11-14 15:32:24,000 [INFO] Skipping bill 1761153 - already processed (575/2564) 2025-11-14 15:32:24,000 [INFO] Skipping bill 1760883 - already 
processed (576/2564) 2025-11-14 15:32:24,000 [INFO] Skipping bill 1752922 - already processed (577/2564) 2025-11-14 15:32:24,000 [INFO] Skipping bill 1873484 - already processed (578/2564) 2025-11-14 15:32:24,001 [INFO] Skipping bill 1990915 - already processed (579/2564) 2025-11-14 15:32:24,001 [INFO] Skipping bill 1969038 - already processed (580/2564) 2025-11-14 15:32:24,001 [INFO] Skipping bill 1993838 - already processed (581/2564) 2025-11-14 15:32:24,001 [INFO] Skipping bill 1958795 - already processed (582/2564) 2025-11-14 15:32:24,001 [INFO] Skipping bill 1977734 - already processed (583/2564) 2025-11-14 15:32:24,001 [INFO] Skipping bill 1937592 - already processed (584/2564) 2025-11-14 15:32:24,001 [INFO] Skipping bill 1963811 - already processed (585/2564) 2025-11-14 15:32:24,001 [INFO] Skipping bill 2029033 - already processed (586/2564) 2025-11-14 15:32:24,001 [INFO] Skipping bill 2026836 - already processed (587/2564) 2025-11-14 15:32:24,001 [INFO] Skipping bill 2027180 - already processed (588/2564) 2025-11-14 15:32:24,001 [INFO] Skipping bill 2021349 - already processed (589/2564) 2025-11-14 15:32:24,002 [INFO] Skipping bill 2030059 - already processed (590/2564) 2025-11-14 15:32:24,002 [INFO] Skipping bill 1823829 - already processed (591/2564) 2025-11-14 15:32:24,002 [INFO] Skipping bill 1824037 - already processed (592/2564) 2025-11-14 15:32:24,002 [INFO] Skipping bill 1850989 - already processed (593/2564) 2025-11-14 15:32:24,002 [INFO] Skipping bill 1826921 - already processed (594/2564) 2025-11-14 15:32:24,002 [INFO] Skipping bill 1690087 - already processed (595/2564) 2025-11-14 15:32:24,002 [INFO] Processing 596/2564: Bill ID 1693524 2025-11-14 15:32:24,892 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:32:24,894 [ERROR] Failed to generate report for bill 1693524: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. 
However, your messages resulted in 225348 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 225348 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:32:25,911 [INFO] Skipping bill 1665637 - already processed (597/2564) 2025-11-14 15:32:25,912 [INFO] Skipping bill 1682635 - already processed (598/2564) 2025-11-14 15:32:25,912 [INFO] Processing 599/2564: Bill ID 1692213 2025-11-14 15:32:26,634 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:32:26,636 [ERROR] Failed to generate report for bill 1692213: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 225670 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 225670 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:32:27,652 [INFO] Processing 600/2564: Bill ID 1846626 2025-11-14 15:32:28,718 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:32:28,720 [ERROR] Failed to generate report for bill 1846626: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 225565 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 225565 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:32:28,778 [INFO] Saved 2564 reports to data/bill_reports.json 2025-11-14 15:32:28,778 [INFO] Progress: 600/2564 - Processed: 0, Skipped: 577, Errors: 23 2025-11-14 15:32:29,779 [INFO] Processing 601/2564: Bill ID 1846675 2025-11-14 15:32:30,626 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:32:30,628 [ERROR] Failed to generate report for bill 1846675: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 225290 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 225290 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:32:31,645 [INFO] Skipping bill 1653927 - already processed (602/2564) 2025-11-14 15:32:31,646 [INFO] Skipping bill 1959326 - already processed (603/2564) 2025-11-14 15:32:31,646 [INFO] Skipping bill 1948632 - already processed (604/2564) 2025-11-14 15:32:31,646 [INFO] Skipping bill 1955060 - already processed (605/2564) 2025-11-14 15:32:31,646 [INFO] Skipping bill 1946546 - already processed (606/2564) 2025-11-14 15:32:31,646 [INFO] Processing 607/2564: Bill ID 1916487 2025-11-14 15:32:32,562 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:32:32,564 [ERROR] Failed to generate report for bill 1916487: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 242611 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 242611 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:32:33,580 [INFO] Skipping bill 1949165 - already processed (608/2564)
2025-11-14 15:32:33,581 [INFO] Processing 609/2564: Bill ID 1938020
2025-11-14 15:32:34,353 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:32:34,356 [ERROR] Failed to generate report for bill 1938020: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 238559 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:32:35,372 [INFO] Processing 610/2564: Bill ID 1937464
2025-11-14 15:32:36,101 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:32:36,104 [ERROR] Failed to generate report for bill 1937464: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 238890 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:32:36,162 [INFO] Saved 2564 reports to data/bill_reports.json
2025-11-14 15:32:36,162 [INFO] Progress: 610/2564 - Processed: 0, Skipped: 583, Errors: 27
2025-11-14 15:32:37,172 [INFO] Processing 611/2564: Bill ID 1713253
2025-11-14 15:32:37,812 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:32:37,814 [ERROR] Failed to generate report for bill 1713253: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 176351 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:32:38,831 [INFO] Skipping bill 1804283 - already processed (612/2564)
2025-11-14 15:32:38,833 [INFO] Skipping bill 1795473 - already processed (613/2564)
2025-11-14 15:32:38,833 [INFO] Skipping bill 1855405 - already processed (614/2564)
2025-11-14 15:32:38,833 [INFO] Skipping bill 1848823 - already processed (615/2564)
2025-11-14 15:32:38,833 [INFO] Skipping bill 1842483 - already processed (616/2564)
2025-11-14 15:32:38,833 [INFO] Skipping bill 1854786 - already processed (617/2564)
2025-11-14 15:32:38,833 [INFO] Skipping bill 1795485 - already processed (618/2564)
2025-11-14 15:32:38,833 [INFO] Skipping bill 1854739 - already processed (619/2564)
2025-11-14 15:32:38,833 [INFO] Skipping bill 1799043 - already processed (620/2564)
2025-11-14 15:32:38,833 [INFO] Skipping bill 1974284 - already processed (621/2564)
2025-11-14 15:32:38,833 [INFO] Skipping bill 1974163 - already processed (622/2564)
2025-11-14 15:32:38,833 [INFO] Skipping bill 1994222 - already processed (623/2564)
2025-11-14 15:32:38,833 [INFO] Skipping bill 1970124 - already processed (624/2564)
2025-11-14 15:32:38,834 [INFO] Skipping bill 1908054 - already processed (625/2564)
2025-11-14 15:32:38,834 [INFO] Skipping bill 1904666 - already processed (626/2564)
2025-11-14 15:32:38,834 [INFO] Skipping bill 1975714 - already processed (627/2564)
2025-11-14 15:32:38,834 [INFO] Skipping bill 1974214 - already processed (628/2564)
2025-11-14 15:32:38,834 [INFO] Skipping bill 1765786 - already processed (629/2564)
2025-11-14 15:32:38,834 [INFO] Skipping bill 1751941 - already processed (630/2564)
2025-11-14 15:32:38,834 [INFO] Skipping bill 1747213 - already processed (631/2564)
2025-11-14 15:32:38,834 [INFO] Skipping bill 1872579 - already processed (632/2564)
2025-11-14 15:32:38,834 [INFO] Skipping bill 1831630 - already processed (633/2564)
2025-11-14 15:32:38,834 [INFO] Skipping bill 1869553 - already processed (634/2564)
2025-11-14 15:32:38,834 [INFO] Skipping bill 1856482 - already processed (635/2564)
2025-11-14 15:32:38,834 [INFO] Skipping bill 1877177 - already processed (636/2564)
2025-11-14 15:32:38,834 [INFO] Skipping bill 1856535 - already processed (637/2564)
2025-11-14 15:32:38,834 [INFO] Processing 638/2564: Bill ID 1856106
2025-11-14 15:32:39,290 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:32:39,292 [ERROR] Failed to generate report for bill 1856106: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 139494 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:32:40,309 [INFO] Skipping bill 2036140 - already processed (639/2564)
2025-11-14 15:32:40,311 [INFO] Skipping bill 2013841 - already processed (640/2564)
2025-11-14 15:32:40,311 [INFO] Skipping bill 2036152 - already processed (641/2564)
2025-11-14 15:32:40,311 [INFO] Skipping bill 2035054 - already processed (642/2564)
2025-11-14 15:32:40,311 [INFO] Skipping bill 2020836 - already processed (643/2564)
2025-11-14 15:32:40,311 [INFO] Skipping bill 2034414 - already processed (644/2564)
2025-11-14 15:32:40,311 [INFO] Skipping bill 2036147 - already processed (645/2564)
2025-11-14 15:32:40,311 [INFO] Skipping bill 2017245 - already processed (646/2564)
2025-11-14 15:32:40,311 [INFO] Processing 647/2564: Bill ID 2020366
2025-11-14 15:32:40,892 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:32:40,894 [ERROR] Failed to generate report for bill 2020366: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens.
However, your messages resulted in 138834 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 138834 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:32:41,911 [INFO] Skipping bill 1754734 - already processed (648/2564) 2025-11-14 15:32:41,912 [INFO] Skipping bill 1766525 - already processed (649/2564) 2025-11-14 15:32:41,912 [INFO] Skipping bill 1993701 - already processed (650/2564) 2025-11-14 15:32:41,912 [INFO] Skipping bill 2024454 - already processed (651/2564) 2025-11-14 15:32:41,912 [INFO] Skipping bill 1989654 - already processed (652/2564) 2025-11-14 15:32:41,912 [INFO] Skipping bill 1923257 - already processed (653/2564) 2025-11-14 15:32:41,913 [INFO] Skipping bill 2012930 - already processed (654/2564) 2025-11-14 15:32:41,913 [INFO] Skipping bill 2022043 - already processed (655/2564) 2025-11-14 15:32:41,913 [INFO] Skipping bill 1977885 - already processed (656/2564) 2025-11-14 15:32:41,913 [INFO] Skipping bill 1903898 - already processed (657/2564) 2025-11-14 15:32:41,913 [INFO] Skipping bill 2022085 - already processed (658/2564) 2025-11-14 15:32:41,913 [INFO] Skipping bill 2024471 - already processed (659/2564) 2025-11-14 15:32:41,913 [INFO] Skipping bill 1962449 - already 
processed (660/2564) 2025-11-14 15:32:41,913 [INFO] Skipping bill 1948585 - already processed (661/2564) 2025-11-14 15:32:41,914 [INFO] Skipping bill 2027763 - already processed (662/2564) 2025-11-14 15:32:41,914 [INFO] Skipping bill 2038183 - already processed (663/2564) 2025-11-14 15:32:41,914 [INFO] Skipping bill 2012908 - already processed (664/2564) 2025-11-14 15:32:41,914 [INFO] Skipping bill 1703457 - already processed (665/2564) 2025-11-14 15:32:41,914 [INFO] Skipping bill 1703326 - already processed (666/2564) 2025-11-14 15:32:41,914 [INFO] Skipping bill 1703583 - already processed (667/2564) 2025-11-14 15:32:41,914 [INFO] Skipping bill 1703488 - already processed (668/2564) 2025-11-14 15:32:41,914 [INFO] Skipping bill 1694229 - already processed (669/2564) 2025-11-14 15:32:41,914 [INFO] Skipping bill 1697293 - already processed (670/2564) 2025-11-14 15:32:41,914 [INFO] Skipping bill 1694179 - already processed (671/2564) 2025-11-14 15:32:41,915 [INFO] Skipping bill 1707790 - already processed (672/2564) 2025-11-14 15:32:41,915 [INFO] Skipping bill 1691409 - already processed (673/2564) 2025-11-14 15:32:41,915 [INFO] Skipping bill 1679149 - already processed (674/2564) 2025-11-14 15:32:41,915 [INFO] Skipping bill 1697468 - already processed (675/2564) 2025-11-14 15:32:41,915 [INFO] Skipping bill 1703148 - already processed (676/2564) 2025-11-14 15:32:41,915 [INFO] Skipping bill 1835739 - already processed (677/2564) 2025-11-14 15:32:41,915 [INFO] Skipping bill 1840482 - already processed (678/2564) 2025-11-14 15:32:41,915 [INFO] Skipping bill 1842215 - already processed (679/2564) 2025-11-14 15:32:41,915 [INFO] Skipping bill 1838035 - already processed (680/2564) 2025-11-14 15:32:41,916 [INFO] Skipping bill 1842106 - already processed (681/2564) 2025-11-14 15:32:41,916 [INFO] Skipping bill 1839236 - already processed (682/2564) 2025-11-14 15:32:41,916 [INFO] Skipping bill 1839142 - already processed (683/2564) 2025-11-14 15:32:41,916 [INFO] Skipping bill 
1838028 - already processed (684/2564) 2025-11-14 15:32:41,916 [INFO] Skipping bill 1837867 - already processed (685/2564) 2025-11-14 15:32:41,916 [INFO] Skipping bill 1835606 - already processed (686/2564) 2025-11-14 15:32:41,916 [INFO] Skipping bill 1825025 - already processed (687/2564) 2025-11-14 15:32:41,916 [INFO] Skipping bill 1826297 - already processed (688/2564) 2025-11-14 15:32:41,916 [INFO] Skipping bill 1847549 - already processed (689/2564) 2025-11-14 15:32:41,916 [INFO] Skipping bill 1839307 - already processed (690/2564) 2025-11-14 15:32:41,916 [INFO] Skipping bill 1842129 - already processed (691/2564) 2025-11-14 15:32:41,916 [INFO] Skipping bill 1837909 - already processed (692/2564) 2025-11-14 15:32:41,916 [INFO] Skipping bill 1797714 - already processed (693/2564) 2025-11-14 15:32:41,916 [INFO] Skipping bill 1839204 - already processed (694/2564) 2025-11-14 15:32:41,916 [INFO] Skipping bill 1835710 - already processed (695/2564) 2025-11-14 15:32:41,916 [INFO] Skipping bill 1837838 - already processed (696/2564) 2025-11-14 15:32:41,916 [INFO] Skipping bill 1837893 - already processed (697/2564) 2025-11-14 15:32:41,917 [INFO] Skipping bill 1835695 - already processed (698/2564) 2025-11-14 15:32:41,917 [INFO] Skipping bill 1837995 - already processed (699/2564) 2025-11-14 15:32:41,917 [INFO] Skipping bill 1842172 - already processed (700/2564) 2025-11-14 15:32:41,917 [INFO] Skipping bill 1817737 - already processed (701/2564) 2025-11-14 15:32:41,917 [INFO] Skipping bill 1953268 - already processed (702/2564) 2025-11-14 15:32:41,917 [INFO] Skipping bill 1961326 - already processed (703/2564) 2025-11-14 15:32:41,917 [INFO] Skipping bill 1961123 - already processed (704/2564) 2025-11-14 15:32:41,917 [INFO] Skipping bill 1953218 - already processed (705/2564) 2025-11-14 15:32:41,917 [INFO] Skipping bill 1945231 - already processed (706/2564) 2025-11-14 15:32:41,917 [INFO] Skipping bill 1949851 - already processed (707/2564) 2025-11-14 15:32:41,917 
[INFO] Skipping bill 1945281 - already processed (708/2564) 2025-11-14 15:32:41,918 [INFO] Skipping bill 1945285 - already processed (709/2564) 2025-11-14 15:32:41,918 [INFO] Skipping bill 1949794 - already processed (710/2564) 2025-11-14 15:32:41,918 [INFO] Skipping bill 1949746 - already processed (711/2564) 2025-11-14 15:32:41,918 [INFO] Skipping bill 1949835 - already processed (712/2564) 2025-11-14 15:32:41,918 [INFO] Skipping bill 1961190 - already processed (713/2564) 2025-11-14 15:32:41,918 [INFO] Skipping bill 1953113 - already processed (714/2564) 2025-11-14 15:32:41,918 [INFO] Skipping bill 1936713 - already processed (715/2564) 2025-11-14 15:32:41,918 [INFO] Skipping bill 1939378 - already processed (716/2564) 2025-11-14 15:32:41,918 [INFO] Skipping bill 1909925 - already processed (717/2564) 2025-11-14 15:32:41,918 [INFO] Skipping bill 1961341 - already processed (718/2564) 2025-11-14 15:32:41,918 [INFO] Skipping bill 1922403 - already processed (719/2564) 2025-11-14 15:32:41,918 [INFO] Skipping bill 1899660 - already processed (720/2564) 2025-11-14 15:32:41,918 [INFO] Skipping bill 1961327 - already processed (721/2564) 2025-11-14 15:32:41,918 [INFO] Skipping bill 1953223 - already processed (722/2564) 2025-11-14 15:32:41,918 [INFO] Skipping bill 1953246 - already processed (723/2564) 2025-11-14 15:32:41,919 [INFO] Skipping bill 1955835 - already processed (724/2564) 2025-11-14 15:32:41,919 [INFO] Skipping bill 1933617 - already processed (725/2564) 2025-11-14 15:32:41,919 [INFO] Skipping bill 1945335 - already processed (726/2564) 2025-11-14 15:32:41,919 [INFO] Skipping bill 1961410 - already processed (727/2564) 2025-11-14 15:32:41,919 [INFO] Skipping bill 1926508 - already processed (728/2564) 2025-11-14 15:32:41,919 [INFO] Skipping bill 1943426 - already processed (729/2564) 2025-11-14 15:32:41,919 [INFO] Skipping bill 1949808 - already processed (730/2564) 2025-11-14 15:32:41,919 [INFO] Skipping bill 1949848 - already processed (731/2564) 
2025-11-14 15:32:41,919 [INFO] Skipping bill 1947517 - already processed (732/2564)
2025-11-14 15:32:41,919 [INFO] Skipping bill 1945267 - already processed (733/2564)
2025-11-14 15:32:41,919 [INFO] Skipping bill 1961205 - already processed (734/2564)
2025-11-14 15:32:41,919 [INFO] Skipping bill 1953214 - already processed (735/2564)
2025-11-14 15:32:41,919 [INFO] Skipping bill 1943446 - already processed (736/2564)
2025-11-14 15:32:41,919 [INFO] Skipping bill 1973042 - already processed (737/2564)
2025-11-14 15:32:41,919 [INFO] Skipping bill 1961299 - already processed (738/2564)
2025-11-14 15:32:41,919 [INFO] Skipping bill 1933601 - already processed (739/2564)
2025-11-14 15:32:41,919 [INFO] Skipping bill 1933621 - already processed (740/2564)
2025-11-14 15:32:41,920 [INFO] Processing 741/2564: Bill ID 1919287
2025-11-14 15:32:42,354 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:32:42,355 [ERROR] Failed to generate report for bill 1919287: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 128427 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 128427 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:32:43,373 [INFO] Skipping bill 1933460 - already processed (742/2564)
2025-11-14 15:32:43,375 [INFO] Skipping bill 1933670 - already processed (743/2564)
2025-11-14 15:32:43,375 [INFO] Skipping bill 1922377 - already processed (744/2564)
2025-11-14 15:32:43,375 [INFO] Skipping bill 1735361 - already processed (745/2564)
2025-11-14 15:32:43,375 [INFO] Skipping bill 1742559 - already processed (746/2564)
2025-11-14 15:32:43,375 [INFO] Skipping bill 1775856 - already processed (747/2564)
2025-11-14 15:32:43,375 [INFO] Skipping bill 1738097 - already processed (748/2564)
2025-11-14 15:32:43,375 [INFO] Skipping bill 1794760 - already processed (749/2564)
2025-11-14 15:32:43,375 [INFO] Skipping bill 1736131 - already processed (750/2564)
2025-11-14 15:32:43,376 [INFO] Skipping bill 1885778 - already processed (751/2564)
2025-11-14 15:32:43,376 [INFO] Skipping bill 1808592 - already processed (752/2564)
2025-11-14 15:32:43,376 [INFO] Skipping bill 1878825 - already processed (753/2564)
2025-11-14 15:32:43,376 [INFO] Skipping bill 1884638 - already
processed (754/2564)
2025-11-14 15:32:43,376 [INFO] Skipping bill 1738996 - already processed (755/2564)
2025-11-14 15:32:43,376 [INFO] Skipping bill 1878228 - already processed (756/2564)
2025-11-14 15:32:43,376 [INFO] Skipping bill 1872865 - already processed (757/2564)
2025-11-14 15:32:43,376 [INFO] Skipping bill 1881167 - already processed (758/2564)
2025-11-14 15:32:43,376 [INFO] Skipping bill 1881743 - already processed (759/2564)
2025-11-14 15:32:43,376 [INFO] Skipping bill 1852772 - already processed (760/2564)
2025-11-14 15:32:43,376 [INFO] Skipping bill 1884104 - already processed (761/2564)
2025-11-14 15:32:43,377 [INFO] Skipping bill 1738794 - already processed (762/2564)
2025-11-14 15:32:43,377 [INFO] Skipping bill 1893080 - already processed (763/2564)
2025-11-14 15:32:43,377 [INFO] Skipping bill 1881922 - already processed (764/2564)
2025-11-14 15:32:43,377 [INFO] Skipping bill 1883178 - already processed (765/2564)
2025-11-14 15:32:43,377 [INFO] Skipping bill 1881587 - already processed (766/2564)
2025-11-14 15:32:43,377 [INFO] Skipping bill 1884487 - already processed (767/2564)
2025-11-14 15:32:43,377 [INFO] Skipping bill 1859182 - already processed (768/2564)
2025-11-14 15:32:43,377 [INFO] Skipping bill 1866861 - already processed (769/2564)
2025-11-14 15:32:43,377 [INFO] Processing 770/2564: Bill ID 1891836
2025-11-14 15:32:43,964 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:32:43,966 [ERROR] Failed to generate report for bill 1891836: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 144997 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:32:44,021 [INFO] Saved 2564 reports to data/bill_reports.json
2025-11-14 15:32:44,022 [INFO] Progress: 770/2564 - Processed: 0, Skipped: 738, Errors: 32
2025-11-14 15:32:45,032 [INFO] Skipping bill 1883738 - already processed (771/2564)
2025-11-14 15:32:45,032 [INFO] Skipping bill 1682652 - already processed (772/2564)
2025-11-14 15:32:45,033 [INFO] Skipping bill 1742464 - already processed (773/2564)
2025-11-14 15:32:45,033 [INFO] Skipping bill 1728366 - already processed (774/2564)
2025-11-14 15:32:45,033 [INFO] Skipping bill 1726524 - already processed (775/2564)
2025-11-14 15:32:45,033 [INFO] Skipping bill 1737208 - already processed (776/2564)
2025-11-14 15:32:45,033 [INFO] Skipping bill 1749398 - already processed (777/2564)
2025-11-14 15:32:45,033 [INFO] Skipping bill 1738008 - already processed (778/2564)
2025-11-14 15:32:45,034 [INFO] Skipping bill 1735894 - already processed (779/2564)
2025-11-14 15:32:45,034 [INFO] Skipping bill 1841416 - already processed (780/2564)
2025-11-14 15:32:45,034 [INFO] Skipping bill 1736739 - already
processed (781/2564)
2025-11-14 15:32:45,034 [INFO] Skipping bill 1737586 - already processed (782/2564)
2025-11-14 15:32:45,035 [INFO] Skipping bill 1884557 - already processed (783/2564)
2025-11-14 15:32:45,035 [INFO] Processing 784/2564: Bill ID 1875094
2025-11-14 15:32:46,041 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:32:46,043 [ERROR] Failed to generate report for bill 1875094: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 281291 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:32:47,058 [INFO] Processing 785/2564: Bill ID 1755026
2025-11-14 15:32:47,837 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:32:47,839 [ERROR] Failed to generate report for bill 1755026: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 211752 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:32:48,856 [INFO] Processing 786/2564: Bill ID 1871591
2025-11-14 15:32:50,208 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:32:50,211 [ERROR] Failed to generate report for bill 1871591: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 247438 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:32:51,225 [INFO] Processing 787/2564: Bill ID 1760451
2025-11-14 15:32:52,361 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:32:52,363 [ERROR] Failed to generate report for bill 1760451: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 254452 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:32:53,380 [INFO] Processing 788/2564: Bill ID 1880948
2025-11-14 15:32:54,507 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:32:54,509 [ERROR] Failed to generate report for bill 1880948: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 280764 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:32:55,525 [INFO] Processing 789/2564: Bill ID 1775764
2025-11-14 15:32:56,931 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:32:56,933 [ERROR] Failed to generate report for bill 1775764: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 323686 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 323686 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:32:57,950 [INFO] Processing 790/2564: Bill ID 1884634 2025-11-14 15:32:59,331 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:32:59,333 [ERROR] Failed to generate report for bill 1884634: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 362014 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 362014 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:32:59,386 [INFO] Saved 2564 reports to data/bill_reports.json 2025-11-14 15:32:59,386 [INFO] Progress: 790/2564 - Processed: 0, Skipped: 751, Errors: 39 2025-11-14 15:33:00,397 [INFO] Skipping bill 2000828 - already processed (791/2564) 2025-11-14 15:33:00,397 [INFO] Skipping bill 2001551 - already processed (792/2564) 2025-11-14 15:33:00,397 [INFO] Skipping bill 1997130 - already processed (793/2564) 2025-11-14 15:33:00,397 [INFO] Skipping bill 2046647 - already processed (794/2564) 2025-11-14 15:33:00,398 [INFO] Skipping bill 2004206 - already processed (795/2564) 2025-11-14 15:33:00,398 [INFO] Skipping bill 1998184 - already processed (796/2564) 2025-11-14 15:33:00,398 [INFO] Skipping bill 2002506 - already processed (797/2564) 2025-11-14 15:33:00,398 [INFO] Skipping bill 2002695 - already processed (798/2564) 2025-11-14 15:33:00,398 [INFO] Skipping bill 2047070 - already processed (799/2564) 2025-11-14 15:33:00,398 [INFO] Skipping bill 2002923 - already processed (800/2564) 2025-11-14 15:33:00,398 [INFO] Skipping bill 1998946 - already 
processed (801/2564) 2025-11-14 15:33:00,398 [INFO] Skipping bill 1997259 - already processed (802/2564) 2025-11-14 15:33:00,398 [INFO] Skipping bill 2001269 - already processed (803/2564) 2025-11-14 15:33:00,398 [INFO] Skipping bill 2000625 - already processed (804/2564) 2025-11-14 15:33:00,399 [INFO] Skipping bill 2002705 - already processed (805/2564) 2025-11-14 15:33:00,399 [INFO] Skipping bill 2046676 - already processed (806/2564) 2025-11-14 15:33:00,399 [INFO] Skipping bill 2046660 - already processed (807/2564) 2025-11-14 15:33:00,399 [INFO] Skipping bill 2003933 - already processed (808/2564) 2025-11-14 15:33:00,399 [INFO] Skipping bill 1997268 - already processed (809/2564) 2025-11-14 15:33:00,399 [INFO] Skipping bill 2019724 - already processed (810/2564) 2025-11-14 15:33:00,399 [INFO] Skipping bill 1997990 - already processed (811/2564) 2025-11-14 15:33:00,399 [INFO] Skipping bill 1998675 - already processed (812/2564) 2025-11-14 15:33:00,399 [INFO] Skipping bill 2002243 - already processed (813/2564) 2025-11-14 15:33:00,399 [INFO] Skipping bill 1997584 - already processed (814/2564) 2025-11-14 15:33:00,400 [INFO] Skipping bill 2001175 - already processed (815/2564) 2025-11-14 15:33:00,400 [INFO] Skipping bill 2002929 - already processed (816/2564) 2025-11-14 15:33:00,400 [INFO] Skipping bill 1998815 - already processed (817/2564) 2025-11-14 15:33:00,400 [INFO] Skipping bill 1998575 - already processed (818/2564) 2025-11-14 15:33:00,400 [INFO] Skipping bill 1999210 - already processed (819/2564) 2025-11-14 15:33:00,400 [INFO] Skipping bill 2001320 - already processed (820/2564) 2025-11-14 15:33:00,400 [INFO] Skipping bill 2001993 - already processed (821/2564) 2025-11-14 15:33:00,400 [INFO] Skipping bill 1999288 - already processed (822/2564) 2025-11-14 15:33:00,400 [INFO] Skipping bill 1998331 - already processed (823/2564) 2025-11-14 15:33:00,400 [INFO] Skipping bill 2003746 - already processed (824/2564) 2025-11-14 15:33:00,400 [INFO] Skipping bill 
1927181 - already processed (825/2564) 2025-11-14 15:33:00,400 [INFO] Skipping bill 2030259 - already processed (826/2564) 2025-11-14 15:33:00,400 [INFO] Skipping bill 1997622 - already processed (827/2564) 2025-11-14 15:33:00,401 [INFO] Processing 828/2564: Bill ID 2028594 2025-11-14 15:33:01,551 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:33:01,554 [ERROR] Failed to generate report for bill 2028594: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 252856 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... 
**kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... **kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return 
self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 252856 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:33:02,567 [INFO] Processing 829/2564: Bill ID 2038620 2025-11-14 15:33:03,621 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:33:03,623 [ERROR] Failed to generate report for bill 2038620: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 311445 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 311445 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:33:04,640 [INFO] Processing 830/2564: Bill ID 2024637 2025-11-14 15:33:05,939 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:33:05,941 [ERROR] Failed to generate report for bill 2024637: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 218599 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 218599 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:33:05,991 [INFO] Saved 2564 reports to data/bill_reports.json 2025-11-14 15:33:05,991 [INFO] Progress: 830/2564 - Processed: 0, Skipped: 788, Errors: 42 2025-11-14 15:33:07,001 [INFO] Skipping bill 1780182 - already processed (831/2564) 2025-11-14 15:33:07,002 [INFO] Skipping bill 1895692 - already processed (832/2564) 2025-11-14 15:33:07,002 [INFO] Skipping bill 1780190 - already processed (833/2564) 2025-11-14 15:33:07,003 [INFO] Skipping bill 1780196 - already processed (834/2564) 2025-11-14 15:33:07,003 [INFO] Skipping bill 1780166 - already processed (835/2564) 2025-11-14 15:33:07,003 [INFO] Skipping bill 1888099 - already processed (836/2564) 2025-11-14 15:33:07,003 [INFO] Skipping bill 1852983 - already processed (837/2564) 2025-11-14 15:33:07,003 [INFO] Skipping bill 1852813 - already processed (838/2564) 2025-11-14 15:33:07,003 [INFO] Skipping bill 2037995 - already processed (839/2564) 2025-11-14 15:33:07,004 [INFO] Skipping bill 2043787 - already processed (840/2564) 2025-11-14 15:33:07,004 [INFO] Skipping bill 2035241 - already 
processed (841/2564)
2025-11-14 15:33:07,004 [INFO] Skipping bill 2035278 - already processed (842/2564)
2025-11-14 15:33:07,004 [INFO] Skipping bill 2038014 - already processed (843/2564)
2025-11-14 15:33:07,005 [INFO] Skipping bill 2009885 - already processed (844/2564)
2025-11-14 15:33:07,006 [INFO] Skipping bill 2035768 - already processed (845/2564)
2025-11-14 15:33:07,006 [INFO] Skipping bill 2025453 - already processed (846/2564)
2025-11-14 15:33:07,006 [INFO] Skipping bill 2038856 - already processed (847/2564)
2025-11-14 15:33:07,006 [INFO] Skipping bill 2009892 - already processed (848/2564)
2025-11-14 15:33:07,006 [INFO] Skipping bill 1861260 - already processed (849/2564)
2025-11-14 15:33:07,006 [INFO] Skipping bill 1856334 - already processed (850/2564)
2025-11-14 15:33:07,006 [INFO] Skipping bill 1856821 - already processed (851/2564)
2025-11-14 15:33:07,007 [INFO] Skipping bill 1864646 - already processed (852/2564)
2025-11-14 15:33:07,007 [INFO] Skipping bill 1860647 - already processed (853/2564)
2025-11-14 15:33:07,010 [INFO] Skipping bill 1707979 - already processed (854/2564)
2025-11-14 15:33:07,010 [INFO] Skipping bill 1643078 - already processed (855/2564)
2025-11-14 15:33:07,010 [INFO] Skipping bill 1651590 - already processed (856/2564)
2025-11-14 15:33:07,010 [INFO] Skipping bill 1852405 - already processed (857/2564)
2025-11-14 15:33:07,010 [INFO] Skipping bill 1852812 - already processed (858/2564)
2025-11-14 15:33:07,010 [INFO] Skipping bill 1858711 - already processed (859/2564)
2025-11-14 15:33:07,010 [INFO] Skipping bill 1853103 - already processed (860/2564)
2025-11-14 15:33:07,010 [INFO] Skipping bill 1851979 - already processed (861/2564)
2025-11-14 15:33:07,010 [INFO] Skipping bill 1859186 - already processed (862/2564)
2025-11-14 15:33:07,010 [INFO] Skipping bill 1740589 - already processed (863/2564)
2025-11-14 15:33:07,010 [INFO] Skipping bill 1741802 - already processed (864/2564)
2025-11-14 15:33:07,010 [INFO] Skipping bill 1860410 - already processed (865/2564)
2025-11-14 15:33:07,011 [INFO] Skipping bill 1957720 - already processed (866/2564)
2025-11-14 15:33:07,011 [INFO] Skipping bill 1974786 - already processed (867/2564)
2025-11-14 15:33:07,011 [INFO] Skipping bill 1989670 - already processed (868/2564)
2025-11-14 15:33:07,011 [INFO] Skipping bill 1979597 - already processed (869/2564)
2025-11-14 15:33:07,011 [INFO] Skipping bill 1984757 - already processed (870/2564)
2025-11-14 15:33:07,011 [INFO] Skipping bill 2009204 - already processed (871/2564)
2025-11-14 15:33:07,011 [INFO] Skipping bill 2015254 - already processed (872/2564)
2025-11-14 15:33:07,011 [INFO] Skipping bill 1974962 - already processed (873/2564)
2025-11-14 15:33:07,011 [INFO] Skipping bill 2009276 - already processed (874/2564)
2025-11-14 15:33:07,011 [INFO] Skipping bill 1989103 - already processed (875/2564)
2025-11-14 15:33:07,011 [INFO] Skipping bill 1984950 - already processed (876/2564)
2025-11-14 15:33:07,011 [INFO] Skipping bill 1975975 - already processed (877/2564)
2025-11-14 15:33:07,011 [INFO] Skipping bill 2004610 - already processed (878/2564)
2025-11-14 15:33:07,011 [INFO] Skipping bill 2004938 - already processed (879/2564)
2025-11-14 15:33:07,011 [INFO] Skipping bill 1992603 - already processed (880/2564)
2025-11-14 15:33:07,011 [INFO] Skipping bill 1992640 - already processed (881/2564)
2025-11-14 15:33:07,012 [INFO] Skipping bill 1996293 - already processed (882/2564)
2025-11-14 15:33:07,012 [INFO] Skipping bill 2011831 - already processed (883/2564)
2025-11-14 15:33:07,012 [INFO] Skipping bill 2012661 - already processed (884/2564)
2025-11-14 15:33:07,012 [INFO] Skipping bill 1950967 - already processed (885/2564)
2025-11-14 15:33:07,012 [INFO] Skipping bill 1994787 - already processed (886/2564)
2025-11-14 15:33:07,012 [INFO] Skipping bill 2011159 - already processed (887/2564)
2025-11-14 15:33:07,012 [INFO] Skipping bill 2006411 - already processed (888/2564)
2025-11-14 15:33:07,012 [INFO] Skipping bill 2011256 - already processed (889/2564)
2025-11-14 15:33:07,012 [INFO] Skipping bill 2004789 - already processed (890/2564)
2025-11-14 15:33:07,012 [INFO] Skipping bill 1981280 - already processed (891/2564)
2025-11-14 15:33:07,012 [INFO] Skipping bill 2009071 - already processed (892/2564)
2025-11-14 15:33:07,012 [INFO] Skipping bill 1967748 - already processed (893/2564)
2025-11-14 15:33:07,012 [INFO] Skipping bill 1707150 - already processed (894/2564)
2025-11-14 15:33:07,012 [INFO] Skipping bill 1669781 - already processed (895/2564)
2025-11-14 15:33:07,012 [INFO] Skipping bill 1643012 - already processed (896/2564)
2025-11-14 15:33:07,012 [INFO] Skipping bill 1848903 - already processed (897/2564)
2025-11-14 15:33:07,012 [INFO] Skipping bill 1848260 - already processed (898/2564)
2025-11-14 15:33:07,012 [INFO] Skipping bill 1820844 - already processed (899/2564)
2025-11-14 15:33:07,012 [INFO] Skipping bill 1851922 - already processed (900/2564)
2025-11-14 15:33:07,012 [INFO] Skipping bill 1850740 - already processed (901/2564)
2025-11-14 15:33:07,012 [INFO] Skipping bill 1838535 - already processed (902/2564)
2025-11-14 15:33:07,012 [INFO] Skipping bill 1851828 - already processed (903/2564)
2025-11-14 15:33:07,012 [INFO] Skipping bill 1863177 - already processed (904/2564)
2025-11-14 15:33:07,012 [INFO] Skipping bill 1852015 - already processed (905/2564)
2025-11-14 15:33:07,012 [INFO] Skipping bill 1818886 - already processed (906/2564)
2025-11-14 15:33:07,012 [INFO] Skipping bill 1852513 - already processed (907/2564)
2025-11-14 15:33:07,012 [INFO] Processing 908/2564: Bill ID 1851836
2025-11-14 15:33:08,004 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:33:08,007 [ERROR] Failed to generate report for bill 1851836: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 185865 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 185865 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:33:09,025 [INFO] Skipping bill 1933975 - already processed (909/2564)
2025-11-14 15:33:09,026 [INFO] Skipping bill 1935092 - already processed (910/2564)
2025-11-14 15:33:09,026 [INFO] Skipping bill 1937681 - already processed (911/2564)
2025-11-14 15:33:09,026 [INFO] Skipping bill 1927333 - already processed (912/2564)
2025-11-14 15:33:09,026 [INFO] Skipping bill 1936069 - already processed (913/2564)
2025-11-14 15:33:09,027 [INFO] Skipping bill 1940299 - already processed (914/2564)
2025-11-14 15:33:09,027 [INFO] Skipping bill 1911677 - already processed (915/2564)
2025-11-14 15:33:09,027 [INFO] Skipping bill 1929973 - already processed (916/2564)
2025-11-14 15:33:09,027 [INFO] Skipping bill 1910359 - already processed (917/2564)
2025-11-14 15:33:09,027 [INFO] Skipping bill 1934687 - already processed (918/2564)
2025-11-14 15:33:09,027 [INFO] Skipping bill 1930038 - already processed (919/2564)
2025-11-14 15:33:09,027 [INFO] Skipping bill 1925325 - already processed (920/2564)
2025-11-14 15:33:09,028 [INFO] Skipping bill 1933890 - already
processed (921/2564)
2025-11-14 15:33:09,028 [INFO] Skipping bill 1934898 - already processed (922/2564)
2025-11-14 15:33:09,028 [INFO] Skipping bill 2034194 - already processed (923/2564)
2025-11-14 15:33:09,028 [INFO] Skipping bill 1972440 - already processed (924/2564)
2025-11-14 15:33:09,028 [INFO] Skipping bill 1934020 - already processed (925/2564)
2025-11-14 15:33:09,028 [INFO] Skipping bill 1912210 - already processed (926/2564)
2025-11-14 15:33:09,028 [INFO] Skipping bill 1634819 - already processed (927/2564)
2025-11-14 15:33:09,028 [INFO] Skipping bill 1634779 - already processed (928/2564)
2025-11-14 15:33:09,028 [INFO] Skipping bill 1836873 - already processed (929/2564)
2025-11-14 15:33:09,029 [INFO] Skipping bill 1834678 - already processed (930/2564)
2025-11-14 15:33:09,029 [INFO] Skipping bill 1790707 - already processed (931/2564)
2025-11-14 15:33:09,029 [INFO] Skipping bill 1852775 - already processed (932/2564)
2025-11-14 15:33:09,029 [INFO] Skipping bill 1897040 - already processed (933/2564)
2025-11-14 15:33:09,029 [INFO] Skipping bill 1898466 - already processed (934/2564)
2025-11-14 15:33:09,029 [INFO] Skipping bill 1893847 - already processed (935/2564)
2025-11-14 15:33:09,029 [INFO] Skipping bill 1983834 - already processed (936/2564)
2025-11-14 15:33:09,029 [INFO] Skipping bill 1988287 - already processed (937/2564)
2025-11-14 15:33:09,030 [INFO] Skipping bill 1894415 - already processed (938/2564)
2025-11-14 15:33:09,030 [INFO] Skipping bill 1917533 - already processed (939/2564)
2025-11-14 15:33:09,030 [INFO] Skipping bill 1900966 - already processed (940/2564)
2025-11-14 15:33:09,030 [INFO] Skipping bill 1972401 - already processed (941/2564)
2025-11-14 15:33:09,030 [INFO] Skipping bill 1988699 - already processed (942/2564)
2025-11-14 15:33:09,030 [INFO] Skipping bill 1988844 - already processed (943/2564)
2025-11-14 15:33:09,030 [INFO] Skipping bill 1894126 - already processed (944/2564)
2025-11-14 15:33:09,030 [INFO] Skipping bill 1974757 - already processed (945/2564)
2025-11-14 15:33:09,030 [INFO] Skipping bill 1717719 - already processed (946/2564)
2025-11-14 15:33:09,031 [INFO] Skipping bill 1912107 - already processed (947/2564)
2025-11-14 15:33:09,031 [INFO] Skipping bill 1941091 - already processed (948/2564)
2025-11-14 15:33:09,031 [INFO] Skipping bill 1916250 - already processed (949/2564)
2025-11-14 15:33:09,031 [INFO] Skipping bill 1974033 - already processed (950/2564)
2025-11-14 15:33:09,031 [INFO] Skipping bill 1895954 - already processed (951/2564)
2025-11-14 15:33:09,031 [INFO] Skipping bill 1974042 - already processed (952/2564)
2025-11-14 15:33:09,031 [INFO] Skipping bill 1981849 - already processed (953/2564)
2025-11-14 15:33:09,031 [INFO] Skipping bill 1979780 - already processed (954/2564)
2025-11-14 15:33:09,031 [INFO] Skipping bill 1896111 - already processed (955/2564)
2025-11-14 15:33:09,031 [INFO] Skipping bill 1971592 - already processed (956/2564)
2025-11-14 15:33:09,031 [INFO] Skipping bill 1971640 - already processed (957/2564)
2025-11-14 15:33:09,032 [INFO] Skipping bill 1896588 - already processed (958/2564)
2025-11-14 15:33:09,032 [INFO] Skipping bill 1981663 - already processed (959/2564)
2025-11-14 15:33:09,032 [INFO] Skipping bill 1867796 - already processed (960/2564)
2025-11-14 15:33:09,032 [INFO] Skipping bill 1867828 - already processed (961/2564)
2025-11-14 15:33:09,032 [INFO] Skipping bill 1813907 - already processed (962/2564)
2025-11-14 15:33:09,032 [INFO] Skipping bill 1814493 - already processed (963/2564)
2025-11-14 15:33:09,032 [INFO] Skipping bill 1867439 - already processed (964/2564)
2025-11-14 15:33:09,032 [INFO] Skipping bill 1814241 - already processed (965/2564)
2025-11-14 15:33:09,032 [INFO] Skipping bill 1935238 - already processed (966/2564)
2025-11-14 15:33:09,032 [INFO] Skipping bill 1908945 - already processed (967/2564)
2025-11-14 15:33:09,032 [INFO] Skipping bill 1980982 - already processed (968/2564)
2025-11-14 15:33:09,032 [INFO] Skipping bill 1934094 - already processed (969/2564)
2025-11-14 15:33:09,033 [INFO] Skipping bill 1931194 - already processed (970/2564)
2025-11-14 15:33:09,033 [INFO] Skipping bill 1915534 - already processed (971/2564)
2025-11-14 15:33:09,033 [INFO] Skipping bill 1927914 - already processed (972/2564)
2025-11-14 15:33:09,033 [INFO] Skipping bill 1710815 - already processed (973/2564)
2025-11-14 15:33:09,033 [INFO] Skipping bill 1748189 - already processed (974/2564)
2025-11-14 15:33:09,033 [INFO] Skipping bill 1746365 - already processed (975/2564)
2025-11-14 15:33:09,033 [INFO] Skipping bill 1965229 - already processed (976/2564)
2025-11-14 15:33:09,033 [INFO] Skipping bill 1999738 - already processed (977/2564)
2025-11-14 15:33:09,033 [INFO] Skipping bill 1989648 - already processed (978/2564)
2025-11-14 15:33:09,033 [INFO] Skipping bill 1946188 - already processed (979/2564)
2025-11-14 15:33:09,033 [INFO] Skipping bill 1892638 - already processed (980/2564)
2025-11-14 15:33:09,033 [INFO] Skipping bill 1944647 - already processed (981/2564)
2025-11-14 15:33:09,034 [INFO] Skipping bill 1983017 - already processed (982/2564)
2025-11-14 15:33:09,034 [INFO] Skipping bill 1954626 - already processed (983/2564)
2025-11-14 15:33:09,034 [INFO] Skipping bill 1977147 - already processed (984/2564)
2025-11-14 15:33:09,034 [INFO] Skipping bill 2013424 - already processed (985/2564)
2025-11-14 15:33:09,034 [INFO] Skipping bill 2013451 - already processed (986/2564)
2025-11-14 15:33:09,034 [INFO] Skipping bill 1953001 - already processed (987/2564)
2025-11-14 15:33:09,034 [INFO] Skipping bill 1982880 - already processed (988/2564)
2025-11-14 15:33:09,034 [INFO] Skipping bill 1989793 - already processed (989/2564)
2025-11-14 15:33:09,034 [INFO] Skipping bill 1954479 - already processed (990/2564)
2025-11-14 15:33:09,034 [INFO] Skipping bill 2031601 - already processed (991/2564)
2025-11-14 15:33:09,034 [INFO] Skipping bill 2009433 - already processed (992/2564)
2025-11-14 15:33:09,034 [INFO] Skipping bill 1901514 - already processed (993/2564)
2025-11-14 15:33:09,034 [INFO] Skipping bill 1651925 - already processed (994/2564)
2025-11-14 15:33:09,034 [INFO] Skipping bill 1793373 - already processed (995/2564)
2025-11-14 15:33:09,034 [INFO] Skipping bill 1793039 - already processed (996/2564)
2025-11-14 15:33:09,034 [INFO] Skipping bill 1792971 - already processed (997/2564)
2025-11-14 15:33:09,035 [INFO] Skipping bill 1793409 - already processed (998/2564)
2025-11-14 15:33:09,035 [INFO] Skipping bill 1793958 - already processed (999/2564)
2025-11-14 15:33:09,035 [INFO] Skipping bill 1793284 - already processed (1000/2564)
2025-11-14 15:33:09,035 [INFO] Skipping bill 1938552 - already processed (1001/2564)
2025-11-14 15:33:09,035 [INFO] Skipping bill 1922870 - already processed (1002/2564)
2025-11-14 15:33:09,035 [INFO] Skipping bill 1803710 - already processed (1003/2564)
2025-11-14 15:33:09,035 [INFO] Skipping bill 1889722 - already processed (1004/2564)
2025-11-14 15:33:09,035 [INFO] Skipping bill 1892083 - already processed (1005/2564)
2025-11-14 15:33:09,035 [INFO] Skipping bill 1889346 - already processed (1006/2564)
2025-11-14 15:33:09,035 [INFO] Skipping bill 1889719 - already processed (1007/2564)
2025-11-14 15:33:09,035 [INFO] Skipping bill 1889335 - already processed (1008/2564)
2025-11-14 15:33:09,035 [INFO] Skipping bill 1897572 - already processed (1009/2564)
2025-11-14 15:33:09,036 [INFO] Skipping bill 1887538 - already processed (1010/2564)
2025-11-14 15:33:09,036 [INFO] Skipping bill 1887101 - already processed (1011/2564)
2025-11-14 15:33:09,036 [INFO] Skipping bill 1888624 - already processed (1012/2564)
2025-11-14 15:33:09,036 [INFO] Skipping bill 1877673 - already processed (1013/2564)
2025-11-14 15:33:09,036 [INFO] Skipping bill 1897803 - already processed (1014/2564)
2025-11-14 15:33:09,036 [INFO] Skipping bill 1889758 - already processed (1015/2564)
2025-11-14 15:33:09,036 [INFO] Skipping bill 1897565 - already processed (1016/2564)
2025-11-14 15:33:09,036 [INFO] Skipping bill 1853521 - already processed (1017/2564)
2025-11-14 15:33:09,036 [INFO] Skipping bill 1864839 - already processed (1018/2564)
2025-11-14 15:33:09,036 [INFO] Skipping bill 1879513 - already processed (1019/2564)
2025-11-14 15:33:09,036 [INFO] Skipping bill 1878078 - already processed (1020/2564)
2025-11-14 15:33:09,036 [INFO] Skipping bill 2013662 - already processed (1021/2564)
2025-11-14 15:33:09,036 [INFO] Skipping bill 1897603 - already processed (1022/2564)
2025-11-14 15:33:09,036 [INFO] Skipping bill 1881186 - already processed (1023/2564)
2025-11-14 15:33:09,036 [INFO] Skipping bill 1983797 - already processed (1024/2564)
2025-11-14 15:33:09,037 [INFO] Skipping bill 2023789 - already processed (1025/2564)
2025-11-14 15:33:09,037 [INFO] Skipping bill 1878049 - already processed (1026/2564)
2025-11-14 15:33:09,037 [INFO] Skipping bill 1807241 - already processed (1027/2564)
2025-11-14 15:33:09,037 [INFO] Skipping bill 1881870 - already processed (1028/2564)
2025-11-14 15:33:09,037 [INFO] Skipping bill 1881843 - already processed (1029/2564)
2025-11-14 15:33:09,037 [INFO] Skipping bill 2030230 - already processed (1030/2564)
2025-11-14 15:33:09,037 [INFO] Skipping bill 2022901 - already processed (1031/2564)
2025-11-14 15:33:09,037 [INFO] Skipping bill 1896879 - already processed (1032/2564)
2025-11-14 15:33:09,037 [INFO] Skipping bill 1889701 - already processed (1033/2564)
2025-11-14 15:33:09,037 [INFO] Skipping bill 1970250 - already processed (1034/2564)
2025-11-14 15:33:09,037 [INFO] Skipping bill 2037153 - already processed (1035/2564)
2025-11-14 15:33:09,037 [INFO] Skipping bill 2013635 - already processed (1036/2564)
2025-11-14 15:33:09,037 [INFO] Skipping bill 1883140 - already processed (1037/2564)
2025-11-14 15:33:09,037 [INFO] Skipping bill 1853367 - already processed (1038/2564)
2025-11-14 15:33:09,037 [INFO] Skipping bill 1801284 - already processed (1039/2564)
2025-11-14 15:33:09,037 [INFO] Skipping bill 1889518 - already processed (1040/2564)
2025-11-14 15:33:09,038 [INFO] Skipping bill 1888073 - already processed (1041/2564)
2025-11-14 15:33:09,038 [INFO] Skipping bill 1889754 - already processed (1042/2564)
2025-11-14 15:33:09,038 [INFO] Skipping bill 1835303 - already processed (1043/2564)
2025-11-14 15:33:09,038 [INFO] Skipping bill 1949479 - already processed (1044/2564)
2025-11-14 15:33:09,038 [INFO] Skipping bill 2022816 - already processed (1045/2564)
2025-11-14 15:33:09,038 [INFO] Skipping bill 1872559 - already processed (1046/2564)
2025-11-14 15:33:09,038 [INFO] Skipping bill 1875857 - already processed (1047/2564)
2025-11-14 15:33:09,038 [INFO] Skipping bill 1876467 - already processed (1048/2564)
2025-11-14 15:33:09,038 [INFO] Skipping bill 1876586 - already processed (1049/2564)
2025-11-14 15:33:09,038 [INFO] Skipping bill 2038328 - already processed (1050/2564)
2025-11-14 15:33:09,038 [INFO] Skipping bill 1878887 - already processed (1051/2564)
2025-11-14 15:33:09,038 [INFO] Skipping bill 1853095 - already processed (1052/2564)
2025-11-14 15:33:09,038 [INFO] Skipping bill 1805407 - already processed (1053/2564)
2025-11-14 15:33:09,038 [INFO] Skipping bill 2022907 - already processed (1054/2564)
2025-11-14 15:33:09,038 [INFO] Skipping bill 1949574 - already processed (1055/2564)
2025-11-14 15:33:09,038 [INFO] Skipping bill 1844841 - already processed (1056/2564)
2025-11-14 15:33:09,038 [INFO] Skipping bill 1864295 - already processed (1057/2564)
2025-11-14 15:33:09,038 [INFO] Skipping bill 1881176 - already processed (1058/2564)
2025-11-14 15:33:09,038 [INFO] Skipping bill 1837365 - already processed (1059/2564)
2025-11-14 15:33:09,038 [INFO] Skipping bill 1837180 - already processed (1060/2564)
2025-11-14 15:33:09,038 [INFO] Skipping bill 1887099 - already processed (1061/2564)
2025-11-14 15:33:09,039 [INFO] Skipping bill 2028679 - already processed (1062/2564)
2025-11-14 15:33:09,039 [INFO] Skipping bill 2030354 - already processed (1063/2564)
2025-11-14 15:33:09,039 [INFO] Skipping bill 2008967 - already processed (1064/2564)
2025-11-14 15:33:09,039 [INFO] Skipping bill 1964010 - already processed (1065/2564)
2025-11-14 15:33:09,039 [INFO] Skipping bill 1882474 - already processed (1066/2564)
2025-11-14 15:33:09,039 [INFO] Skipping bill 1881178 - already processed (1067/2564)
2025-11-14 15:33:09,039 [INFO] Skipping bill 2047520 - already processed (1068/2564)
2025-11-14 15:33:09,039 [INFO] Skipping bill 2037324 - already processed (1069/2564)
2025-11-14 15:33:09,039 [INFO] Skipping bill 1806224 - already processed (1070/2564)
2025-11-14 15:33:09,039 [INFO] Skipping bill 1837135 - already processed (1071/2564)
2025-11-14 15:33:09,039 [INFO] Skipping bill 1805930 - already processed (1072/2564)
2025-11-14 15:33:09,039 [INFO] Skipping bill 1803406 - already processed (1073/2564)
2025-11-14 15:33:09,039 [INFO] Skipping bill 1883773 - already processed (1074/2564)
2025-11-14 15:33:09,039 [INFO] Skipping bill 1994137 - already processed (1075/2564)
2025-11-14 15:33:09,039 [INFO] Skipping bill 1881306 - already processed (1076/2564)
2025-11-14 15:33:09,039 [INFO] Skipping bill 1889726 - already processed (1077/2564)
2025-11-14 15:33:09,039 [INFO] Skipping bill 1889593 - already processed (1078/2564)
2025-11-14 15:33:09,039 [INFO] Processing 1079/2564: Bill ID 1883494
2025-11-14 15:33:10,087 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:33:10,089 [ERROR] Failed to generate report for bill 1883494: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 245791 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:33:11,104 [INFO] Processing 1080/2564: Bill ID 1883535
2025-11-14 15:33:11,845 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:33:11,846 [ERROR] Failed to generate report for bill 1883535: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 244625 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:33:11,901 [INFO] Saved 2564 reports to data/bill_reports.json
2025-11-14 15:33:11,902 [INFO] Progress: 1080/2564 - Processed: 0, Skipped: 1035, Errors: 45
2025-11-14 15:33:12,912 [INFO] Processing 1081/2564: Bill ID 2038569
2025-11-14 15:33:13,925 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:33:13,927 [ERROR] Failed to generate report for bill 2038569: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 248177 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:33:14,943 [INFO] Processing 1082/2564: Bill ID 2038571
2025-11-14 15:33:16,069 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:33:16,070 [ERROR] Failed to generate report for bill 2038571: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 248161 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 248161 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:33:17,085 [INFO] Skipping bill 1666814 - already processed (1083/2564) 2025-11-14 15:33:17,087 [INFO] Skipping bill 1722011 - already processed (1084/2564) 2025-11-14 15:33:17,087 [INFO] Skipping bill 1724398 - already processed (1085/2564) 2025-11-14 15:33:17,087 [INFO] Skipping bill 1676083 - already processed (1086/2564) 2025-11-14 15:33:17,087 [INFO] Skipping bill 1824011 - already processed (1087/2564) 2025-11-14 15:33:17,087 [INFO] Skipping bill 1824228 - already processed (1088/2564) 2025-11-14 15:33:17,087 [INFO] Skipping bill 1824028 - already processed (1089/2564) 2025-11-14 15:33:17,087 [INFO] Skipping bill 1834441 - already processed (1090/2564) 2025-11-14 15:33:17,087 [INFO] Skipping bill 1908238 - already processed (1091/2564) 2025-11-14 15:33:17,087 [INFO] Skipping bill 1967640 - already processed (1092/2564) 2025-11-14 15:33:17,087 [INFO] Skipping bill 1935448 - already processed (1093/2564) 2025-11-14 15:33:17,087 [INFO] Skipping bill 1987611 - already processed (1094/2564) 2025-11-14 15:33:17,087 [INFO] Skipping bill 
1964156 - already processed (1095/2564) 2025-11-14 15:33:17,087 [INFO] Skipping bill 1947221 - already processed (1096/2564) 2025-11-14 15:33:17,088 [INFO] Skipping bill 1943110 - already processed (1097/2564) 2025-11-14 15:33:17,088 [INFO] Skipping bill 1964415 - already processed (1098/2564) 2025-11-14 15:33:17,088 [INFO] Skipping bill 1996731 - already processed (1099/2564) 2025-11-14 15:33:17,088 [INFO] Skipping bill 1944685 - already processed (1100/2564) 2025-11-14 15:33:17,088 [INFO] Skipping bill 1936020 - already processed (1101/2564) 2025-11-14 15:33:17,088 [INFO] Skipping bill 1947285 - already processed (1102/2564) 2025-11-14 15:33:17,088 [INFO] Skipping bill 1949498 - already processed (1103/2564) 2025-11-14 15:33:17,088 [INFO] Skipping bill 1933085 - already processed (1104/2564) 2025-11-14 15:33:17,088 [INFO] Skipping bill 1881403 - already processed (1105/2564) 2025-11-14 15:33:17,088 [INFO] Skipping bill 1878440 - already processed (1106/2564) 2025-11-14 15:33:17,088 [INFO] Skipping bill 1874641 - already processed (1107/2564) 2025-11-14 15:33:17,088 [INFO] Skipping bill 1780447 - already processed (1108/2564) 2025-11-14 15:33:17,089 [INFO] Skipping bill 1829313 - already processed (1109/2564) 2025-11-14 15:33:17,089 [INFO] Skipping bill 1876168 - already processed (1110/2564) 2025-11-14 15:33:17,089 [INFO] Skipping bill 1878357 - already processed (1111/2564) 2025-11-14 15:33:17,089 [INFO] Skipping bill 1801087 - already processed (1112/2564) 2025-11-14 15:33:17,089 [INFO] Skipping bill 1878533 - already processed (1113/2564) 2025-11-14 15:33:17,089 [INFO] Skipping bill 1781971 - already processed (1114/2564) 2025-11-14 15:33:17,089 [INFO] Skipping bill 1836944 - already processed (1115/2564) 2025-11-14 15:33:17,089 [INFO] Skipping bill 1773855 - already processed (1116/2564) 2025-11-14 15:33:17,089 [INFO] Skipping bill 1774758 - already processed (1117/2564) 2025-11-14 15:33:17,089 [INFO] Skipping bill 1779189 - already processed (1118/2564) 
2025-11-14 15:33:17,089 [INFO] Skipping bill 1780403 - already processed (1119/2564) 2025-11-14 15:33:17,089 [INFO] Skipping bill 1882902 - already processed (1120/2564) 2025-11-14 15:33:17,090 [INFO] Skipping bill 1761023 - already processed (1121/2564) 2025-11-14 15:33:17,090 [INFO] Skipping bill 1763282 - already processed (1122/2564) 2025-11-14 15:33:17,090 [INFO] Skipping bill 1756406 - already processed (1123/2564) 2025-11-14 15:33:17,090 [INFO] Skipping bill 1721336 - already processed (1124/2564) 2025-11-14 15:33:17,090 [INFO] Skipping bill 1865663 - already processed (1125/2564) 2025-11-14 15:33:17,090 [INFO] Skipping bill 1884682 - already processed (1126/2564) 2025-11-14 15:33:17,090 [INFO] Skipping bill 1879124 - already processed (1127/2564) 2025-11-14 15:33:17,090 [INFO] Skipping bill 1813023 - already processed (1128/2564) 2025-11-14 15:33:17,090 [INFO] Skipping bill 1780572 - already processed (1129/2564) 2025-11-14 15:33:17,090 [INFO] Skipping bill 1796023 - already processed (1130/2564) 2025-11-14 15:33:17,090 [INFO] Skipping bill 1796213 - already processed (1131/2564) 2025-11-14 15:33:17,090 [INFO] Skipping bill 1841005 - already processed (1132/2564) 2025-11-14 15:33:17,090 [INFO] Skipping bill 1861287 - already processed (1133/2564) 2025-11-14 15:33:17,090 [INFO] Skipping bill 1878752 - already processed (1134/2564) 2025-11-14 15:33:17,091 [INFO] Skipping bill 1813101 - already processed (1135/2564) 2025-11-14 15:33:17,091 [INFO] Skipping bill 1768635 - already processed (1136/2564) 2025-11-14 15:33:17,091 [INFO] Skipping bill 1767924 - already processed (1137/2564) 2025-11-14 15:33:17,091 [INFO] Skipping bill 1641754 - already processed (1138/2564) 2025-11-14 15:33:17,091 [INFO] Skipping bill 1882889 - already processed (1139/2564) 2025-11-14 15:33:17,091 [INFO] Skipping bill 1729291 - already processed (1140/2564) 2025-11-14 15:33:17,092 [INFO] Skipping bill 1773906 - already processed (1141/2564) 2025-11-14 15:33:17,092 [INFO] Skipping bill 
1839957 - already processed (1142/2564) 2025-11-14 15:33:17,092 [INFO] Skipping bill 1843965 - already processed (1143/2564) 2025-11-14 15:33:17,092 [INFO] Skipping bill 1879710 - already processed (1144/2564) 2025-11-14 15:33:17,092 [INFO] Skipping bill 1763606 - already processed (1145/2564) 2025-11-14 15:33:17,092 [INFO] Skipping bill 1780432 - already processed (1146/2564) 2025-11-14 15:33:17,092 [INFO] Skipping bill 1812765 - already processed (1147/2564) 2025-11-14 15:33:17,092 [INFO] Skipping bill 1836858 - already processed (1148/2564) 2025-11-14 15:33:17,092 [INFO] Skipping bill 1864293 - already processed (1149/2564) 2025-11-14 15:33:17,092 [INFO] Skipping bill 1770114 - already processed (1150/2564) 2025-11-14 15:33:17,092 [INFO] Skipping bill 1733127 - already processed (1151/2564) 2025-11-14 15:33:17,092 [INFO] Skipping bill 1762026 - already processed (1152/2564) 2025-11-14 15:33:17,092 [INFO] Skipping bill 1829537 - already processed (1153/2564) 2025-11-14 15:33:17,092 [INFO] Skipping bill 1878142 - already processed (1154/2564) 2025-11-14 15:33:17,092 [INFO] Skipping bill 1880765 - already processed (1155/2564) 2025-11-14 15:33:17,093 [INFO] Skipping bill 1762041 - already processed (1156/2564) 2025-11-14 15:33:17,093 [INFO] Skipping bill 1646230 - already processed (1157/2564) 2025-11-14 15:33:17,093 [INFO] Skipping bill 1762213 - already processed (1158/2564) 2025-11-14 15:33:17,093 [INFO] Skipping bill 1779393 - already processed (1159/2564) 2025-11-14 15:33:17,093 [INFO] Skipping bill 1878544 - already processed (1160/2564) 2025-11-14 15:33:17,093 [INFO] Skipping bill 1780459 - already processed (1161/2564) 2025-11-14 15:33:17,093 [INFO] Skipping bill 1781963 - already processed (1162/2564) 2025-11-14 15:33:17,093 [INFO] Skipping bill 1758293 - already processed (1163/2564) 2025-11-14 15:33:17,093 [INFO] Skipping bill 1768495 - already processed (1164/2564) 2025-11-14 15:33:17,093 [INFO] Skipping bill 1773860 - already processed (1165/2564) 
2025-11-14 15:33:17,093 [INFO] Skipping bill 1864226 - already processed (1166/2564)
2025-11-14 15:33:17,093 [INFO] Skipping bill 1878400 - already processed (1167/2564)
2025-11-14 15:33:17,093 [INFO] Skipping bill 1879652 - already processed (1168/2564)
2025-11-14 15:33:17,093 [INFO] Skipping bill 1865798 - already processed (1169/2564)
2025-11-14 15:33:17,093 [INFO] Skipping bill 1862795 - already processed (1170/2564)
2025-11-14 15:33:17,093 [INFO] Skipping bill 1710243 - already processed (1171/2564)
2025-11-14 15:33:17,093 [INFO] Skipping bill 1818495 - already processed (1172/2564)
2025-11-14 15:33:17,093 [INFO] Skipping bill 1775864 - already processed (1173/2564)
2025-11-14 15:33:17,093 [INFO] Skipping bill 1856196 - already processed (1174/2564)
2025-11-14 15:33:17,094 [INFO] Skipping bill 1791835 - already processed (1175/2564)
2025-11-14 15:33:17,094 [INFO] Skipping bill 1658709 - already processed (1176/2564)
2025-11-14 15:33:17,094 [INFO] Skipping bill 1695187 - already processed (1177/2564)
2025-11-14 15:33:17,094 [INFO] Processing 1178/2564: Bill ID 1818780
2025-11-14 15:33:17,554 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:33:17,556 [ERROR] Failed to generate report for bill 1818780: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 137401 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:33:18,573 [INFO] Processing 1179/2564: Bill ID 1818766
2025-11-14 15:33:19,089 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:33:19,090 [ERROR] Failed to generate report for bill 1818766: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 137403 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:33:20,106 [INFO] Skipping bill 1752559 - already processed (1180/2564)
2025-11-14 15:33:20,107 [INFO] Skipping bill 1882942 - already processed (1181/2564)
2025-11-14 15:33:20,107 [INFO] Skipping bill 1766908 - already processed (1182/2564)
2025-11-14 15:33:20,107 [INFO] Skipping bill 1691064 - already processed (1183/2564)
2025-11-14 15:33:20,108 [INFO] Processing 1184/2564: Bill ID 1690030
2025-11-14 15:33:23,090 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:33:23,093 [ERROR] Failed to generate report for bill 1690030: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 566694 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:33:24,108 [INFO] Processing 1185/2564: Bill ID 1690727
2025-11-14 15:33:26,251 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:33:26,252 [ERROR] Failed to generate report for bill 1690727: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 566696 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:33:27,265 [INFO] Processing 1186/2564: Bill ID 1875409
2025-11-14 15:33:31,676 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:33:31,680 [ERROR] Failed to generate report for bill 1875409: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1351641 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1351641 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:33:32,694 [INFO] Processing 1187/2564: Bill ID 1835820 2025-11-14 15:33:37,207 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:33:37,207 [ERROR] Failed to generate report for bill 1835820: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1351620 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1351620 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:33:38,223 [INFO] Processing 1188/2564: Bill ID 1818459 2025-11-14 15:33:42,109 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:33:42,111 [ERROR] Failed to generate report for bill 1818459: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1029309 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1029309 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:33:43,126 [INFO] Skipping bill 2009915 - already processed (1189/2564) 2025-11-14 15:33:43,126 [INFO] Skipping bill 1917775 - already processed (1190/2564) 2025-11-14 15:33:43,126 [INFO] Skipping bill 1902981 - already processed (1191/2564) 2025-11-14 15:33:43,126 [INFO] Skipping bill 1908626 - already processed (1192/2564) 2025-11-14 15:33:43,126 [INFO] Skipping bill 1903647 - already processed (1193/2564) 2025-11-14 15:33:43,127 [INFO] Skipping bill 1993863 - already processed (1194/2564) 2025-11-14 15:33:43,127 [INFO] Skipping bill 2015656 - already processed (1195/2564) 2025-11-14 15:33:43,127 [INFO] Skipping bill 1909120 - already processed (1196/2564) 2025-11-14 15:33:43,127 [INFO] Skipping bill 2032707 - already processed (1197/2564) 2025-11-14 15:33:43,127 [INFO] Skipping bill 2030838 - already processed (1198/2564) 2025-11-14 15:33:43,127 [INFO] Skipping bill 2033110 - already processed (1199/2564) 2025-11-14 15:33:43,127 [INFO] Skipping bill 2010112 - already processed (1200/2564) 2025-11-14 15:33:43,127 [INFO] Skipping bill 
1992712 - already processed (1201/2564) 2025-11-14 15:33:43,127 [INFO] Skipping bill 2035218 - already processed (1202/2564) 2025-11-14 15:33:43,127 [INFO] Skipping bill 1970759 - already processed (1203/2564) 2025-11-14 15:33:43,127 [INFO] Skipping bill 1917262 - already processed (1204/2564) 2025-11-14 15:33:43,127 [INFO] Skipping bill 1941920 - already processed (1205/2564) 2025-11-14 15:33:43,127 [INFO] Skipping bill 2015645 - already processed (1206/2564) 2025-11-14 15:33:43,128 [INFO] Skipping bill 2041695 - already processed (1207/2564) 2025-11-14 15:33:43,128 [INFO] Skipping bill 2038940 - already processed (1208/2564) 2025-11-14 15:33:43,128 [INFO] Skipping bill 2043998 - already processed (1209/2564) 2025-11-14 15:33:43,128 [INFO] Skipping bill 1903496 - already processed (1210/2564) 2025-11-14 15:33:43,128 [INFO] Skipping bill 1942114 - already processed (1211/2564) 2025-11-14 15:33:43,128 [INFO] Skipping bill 1948978 - already processed (1212/2564) 2025-11-14 15:33:43,128 [INFO] Skipping bill 2025948 - already processed (1213/2564) 2025-11-14 15:33:43,128 [INFO] Skipping bill 2030449 - already processed (1214/2564) 2025-11-14 15:33:43,128 [INFO] Skipping bill 2012463 - already processed (1215/2564) 2025-11-14 15:33:43,128 [INFO] Skipping bill 2036382 - already processed (1216/2564) 2025-11-14 15:33:43,129 [INFO] Skipping bill 1901571 - already processed (1217/2564) 2025-11-14 15:33:43,129 [INFO] Skipping bill 1902589 - already processed (1218/2564) 2025-11-14 15:33:43,129 [INFO] Skipping bill 2045075 - already processed (1219/2564) 2025-11-14 15:33:43,129 [INFO] Skipping bill 2042397 - already processed (1220/2564) 2025-11-14 15:33:43,129 [INFO] Skipping bill 1995988 - already processed (1221/2564) 2025-11-14 15:33:43,129 [INFO] Skipping bill 1941987 - already processed (1222/2564) 2025-11-14 15:33:43,129 [INFO] Skipping bill 2005892 - already processed (1223/2564) 2025-11-14 15:33:43,129 [INFO] Skipping bill 2030765 - already processed (1224/2564) 
2025-11-14 15:33:43,129 [INFO] Skipping bill 2032658 - already processed (1225/2564)
2025-11-14 15:33:43,129 [INFO] Skipping bill 1934862 - already processed (1226/2564)
2025-11-14 15:33:43,129 [INFO] Skipping bill 1900450 - already processed (1227/2564)
2025-11-14 15:33:43,129 [INFO] Skipping bill 1954914 - already processed (1228/2564)
2025-11-14 15:33:43,130 [INFO] Skipping bill 1908970 - already processed (1229/2564)
2025-11-14 15:33:43,130 [INFO] Skipping bill 2046810 - already processed (1230/2564)
2025-11-14 15:33:43,130 [INFO] Skipping bill 1911503 - already processed (1231/2564)
2025-11-14 15:33:43,130 [INFO] Skipping bill 1917449 - already processed (1232/2564)
2025-11-14 15:33:43,130 [INFO] Skipping bill 2012421 - already processed (1233/2564)
2025-11-14 15:33:43,130 [INFO] Skipping bill 2036409 - already processed (1234/2564)
2025-11-14 15:33:43,130 [INFO] Skipping bill 1930912 - already processed (1235/2564)
2025-11-14 15:33:43,130 [INFO] Skipping bill 2015571 - already processed (1236/2564)
2025-11-14 15:33:43,130 [INFO] Skipping bill 1909237 - already processed (1237/2564)
2025-11-14 15:33:43,131 [INFO] Skipping bill 1991849 - already processed (1238/2564)
2025-11-14 15:33:43,131 [INFO] Skipping bill 2032681 - already processed (1239/2564)
2025-11-14 15:33:43,131 [INFO] Skipping bill 2031449 - already processed (1240/2564)
2025-11-14 15:33:43,131 [INFO] Skipping bill 1907396 - already processed (1241/2564)
2025-11-14 15:33:43,131 [INFO] Skipping bill 2036417 - already processed (1242/2564)
2025-11-14 15:33:43,131 [INFO] Skipping bill 2010242 - already processed (1243/2564)
2025-11-14 15:33:43,131 [INFO] Skipping bill 1902485 - already processed (1244/2564)
2025-11-14 15:33:43,131 [INFO] Skipping bill 2044029 - already processed (1245/2564)
2025-11-14 15:33:43,131 [INFO] Skipping bill 2039479 - already processed (1246/2564)
2025-11-14 15:33:43,131 [INFO] Skipping bill 1927014 - already processed (1247/2564)
2025-11-14 15:33:43,131 [INFO] Skipping bill 1993679 - already processed (1248/2564)
2025-11-14 15:33:43,131 [INFO] Skipping bill 2012390 - already processed (1249/2564)
2025-11-14 15:33:43,132 [INFO] Skipping bill 1967476 - already processed (1250/2564)
2025-11-14 15:33:43,132 [INFO] Skipping bill 2039584 - already processed (1251/2564)
2025-11-14 15:33:43,132 [INFO] Skipping bill 1941925 - already processed (1252/2564)
2025-11-14 15:33:43,132 [INFO] Skipping bill 2039602 - already processed (1253/2564)
2025-11-14 15:33:43,132 [INFO] Skipping bill 2021091 - already processed (1254/2564)
2025-11-14 15:33:43,132 [INFO] Skipping bill 1993748 - already processed (1255/2564)
2025-11-14 15:33:43,132 [INFO] Skipping bill 2043429 - already processed (1256/2564)
2025-11-14 15:33:43,132 [INFO] Skipping bill 1907408 - already processed (1257/2564)
2025-11-14 15:33:43,133 [INFO] Skipping bill 2036445 - already processed (1258/2564)
2025-11-14 15:33:43,133 [INFO] Skipping bill 1948575 - already processed (1259/2564)
2025-11-14 15:33:43,133 [INFO] Skipping bill 2020539 - already processed (1260/2564)
2025-11-14 15:33:43,133 [INFO] Skipping bill 1941981 - already processed (1261/2564)
2025-11-14 15:33:43,133 [INFO] Skipping bill 1985057 - already processed (1262/2564)
2025-11-14 15:33:43,134 [INFO] Skipping bill 2012554 - already processed (1263/2564)
2025-11-14 15:33:43,134 [INFO] Skipping bill 1900469 - already processed (1264/2564)
2025-11-14 15:33:43,134 [INFO] Skipping bill 1949091 - already processed (1265/2564)
2025-11-14 15:33:43,134 [INFO] Skipping bill 1903302 - already processed (1266/2564)
2025-11-14 15:33:43,134 [INFO] Skipping bill 2031820 - already processed (1267/2564)
2025-11-14 15:33:43,134 [INFO] Skipping bill 1986509 - already processed (1268/2564)
2025-11-14 15:33:43,134 [INFO] Skipping bill 1992147 - already processed (1269/2564)
2025-11-14 15:33:43,134 [INFO] Skipping bill 1908565 - already processed (1270/2564)
2025-11-14 15:33:43,134 [INFO] Skipping bill 2018195 - already processed (1271/2564)
2025-11-14 15:33:43,134 [INFO] Skipping bill 1948655 - already processed (1272/2564)
2025-11-14 15:33:43,134 [INFO] Skipping bill 1926957 - already processed (1273/2564)
2025-11-14 15:33:43,135 [INFO] Skipping bill 1909167 - already processed (1274/2564)
2025-11-14 15:33:43,135 [INFO] Skipping bill 1910683 - already processed (1275/2564)
2025-11-14 15:33:43,135 [INFO] Skipping bill 1918276 - already processed (1276/2564)
2025-11-14 15:33:43,135 [INFO] Skipping bill 1942634 - already processed (1277/2564)
2025-11-14 15:33:43,135 [INFO] Skipping bill 1947885 - already processed (1278/2564)
2025-11-14 15:33:43,135 [INFO] Skipping bill 2034828 - already processed (1279/2564)
2025-11-14 15:33:43,135 [INFO] Skipping bill 2035534 - already processed (1280/2564)
2025-11-14 15:33:43,135 [INFO] Skipping bill 1937370 - already processed (1281/2564)
2025-11-14 15:33:43,135 [INFO] Skipping bill 2036328 - already processed (1282/2564)
2025-11-14 15:33:43,135 [INFO] Skipping bill 1940048 - already processed (1283/2564)
2025-11-14 15:33:43,135 [INFO] Skipping bill 2007650 - already processed (1284/2564)
2025-11-14 15:33:43,135 [INFO] Skipping bill 1938062 - already processed (1285/2564)
2025-11-14 15:33:43,135 [INFO] Skipping bill 1990212 - already processed (1286/2564)
2025-11-14 15:33:43,135 [INFO] Skipping bill 1995017 - already processed (1287/2564)
2025-11-14 15:33:43,135 [INFO] Skipping bill 1937257 - already processed (1288/2564)
2025-11-14 15:33:43,135 [INFO] Skipping bill 1900853 - already processed (1289/2564)
2025-11-14 15:33:43,135 [INFO] Skipping bill 1947971 - already processed (1290/2564)
2025-11-14 15:33:43,135 [INFO] Skipping bill 1920984 - already processed (1291/2564)
2025-11-14 15:33:43,135 [INFO] Skipping bill 1902725 - already processed (1292/2564)
2025-11-14 15:33:43,136 [INFO] Skipping bill 1964016 - already processed (1293/2564)
2025-11-14 15:33:43,136 [INFO] Processing 1294/2564: Bill ID 1934576
2025-11-14 15:33:45,256 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:33:45,257 [ERROR] Failed to generate report for bill 1934576: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 132147 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:33:46,273 [INFO] Skipping bill 1898800 - already processed (1295/2564)
2025-11-14 15:33:46,273 [INFO] Skipping bill 1971511 - already processed (1296/2564)
2025-11-14 15:33:46,273 [INFO] Processing 1297/2564: Bill ID 1935197
2025-11-14 15:33:47,042 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:33:47,044 [ERROR] Failed to generate report for bill 1935197: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 142845 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:33:48,061 [INFO] Processing 1298/2564: Bill ID 1935040
2025-11-14 15:33:48,899 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:33:48,901 [ERROR] Failed to generate report for bill 1935040: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 142844 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:33:49,914 [INFO] Skipping bill 1948521 - already processed (1299/2564)
2025-11-14 15:33:49,915 [INFO] Skipping bill 1977652 - already processed (1300/2564)
2025-11-14 15:33:49,915 [INFO] Processing 1301/2564: Bill ID 1934805
2025-11-14 15:33:50,623 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:33:50,626 [ERROR] Failed to generate report for bill 1934805: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 132143 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:33:51,643 [INFO] Skipping bill 1934970 - already processed (1302/2564)
2025-11-14 15:33:51,644 [INFO] Skipping bill 1934701 - already processed (1303/2564)
2025-11-14 15:33:51,645 [INFO] Skipping bill 1942260 - already processed (1304/2564)
2025-11-14 15:33:51,645 [INFO] Skipping bill 1917391 - already processed (1305/2564)
2025-11-14 15:33:51,645 [INFO] Processing 1306/2564: Bill ID 1935190
2025-11-14 15:33:57,344 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:33:57,347 [ERROR] Failed to generate report for bill 1935190: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1143342 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:33:58,363 [INFO] Processing 1307/2564: Bill ID 1934636
2025-11-14 15:34:00,416 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:34:00,418 [ERROR] Failed to generate report for bill 1934636: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 671567 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:34:01,436 [INFO] Processing 1308/2564: Bill ID 1935223
2025-11-14 15:34:03,815 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:34:03,817 [ERROR] Failed to generate report for bill 1935223: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 671570 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:34:04,832 [INFO] Processing 1309/2564: Bill ID 1934824
2025-11-14 15:34:08,768 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:34:08,771 [ERROR] Failed to generate report for bill 1934824: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1143344 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:34:09,787 [INFO] Skipping bill 1879932 - already processed (1310/2564)
2025-11-14 15:34:09,788 [INFO] Skipping bill 1875738 - already processed (1311/2564)
2025-11-14 15:34:09,788 [INFO] Skipping bill 1875815 - already processed (1312/2564)
2025-11-14 15:34:09,789 [INFO] Skipping bill 1701253 - already processed (1313/2564)
2025-11-14 15:34:09,789 [INFO] Skipping bill 1875615 - already processed (1314/2564)
2025-11-14 15:34:09,789 [INFO] Skipping bill 1754315 - already processed (1315/2564)
2025-11-14 15:34:09,789 [INFO] Skipping bill 1751005 - already processed (1316/2564)
2025-11-14 15:34:09,790 [INFO] Skipping bill 1875642 - already processed (1317/2564)
2025-11-14 15:34:09,790 [INFO] Skipping bill 1753811 - already processed (1318/2564)
2025-11-14 15:34:09,790 [INFO] Skipping bill 1752050 - already processed (1319/2564)
2025-11-14 15:34:09,790 [INFO] Skipping bill 1704591 - already processed (1320/2564)
2025-11-14 15:34:09,790 [INFO] Skipping bill 1748551 - already processed (1321/2564)
2025-11-14 15:34:09,790 [INFO] Skipping bill 1725321 - already processed (1322/2564)
2025-11-14 15:34:09,790 [INFO] Skipping bill 1725195 - already processed (1323/2564)
2025-11-14 15:34:09,791 [INFO] Skipping bill 2014434 - already processed (1324/2564)
2025-11-14 15:34:09,791 [INFO] Skipping bill 2014277 - already processed (1325/2564)
2025-11-14 15:34:09,791 [INFO] Skipping bill 2000124 - already processed (1326/2564)
2025-11-14 15:34:09,791 [INFO] Skipping bill 2022736 - already processed (1327/2564)
2025-11-14 15:34:09,791 [INFO] Skipping bill 2022881 - already processed (1328/2564)
2025-11-14 15:34:09,791 [INFO] Skipping bill 2014322 - already processed (1329/2564)
2025-11-14 15:34:09,791 [INFO] Skipping bill 2014068 - already processed (1330/2564)
2025-11-14 15:34:09,791 [INFO] Skipping bill 2005730 - already processed (1331/2564)
2025-11-14 15:34:09,791 [INFO] Skipping bill 2014594 - already processed (1332/2564)
2025-11-14 15:34:09,791 [INFO] Skipping bill 2013131 - already processed (1333/2564)
2025-11-14 15:34:09,792 [INFO] Skipping bill 2022220 - already processed (1334/2564)
2025-11-14 15:34:09,792 [INFO] Skipping bill 2008986 - already processed (1335/2564)
2025-11-14 15:34:09,792 [INFO] Skipping bill 2013796 - already processed (1336/2564)
2025-11-14 15:34:09,792 [INFO] Skipping bill 2014312 - already processed (1337/2564)
2025-11-14 15:34:09,792 [INFO] Skipping bill 2013903 - already processed (1338/2564)
2025-11-14 15:34:09,792 [INFO] Skipping bill 2013936 - already processed (1339/2564)
2025-11-14 15:34:09,792 [INFO] Skipping bill 2013868 - already processed (1340/2564)
2025-11-14 15:34:09,793 [INFO] Skipping bill 2014024 - already processed (1341/2564)
2025-11-14 15:34:09,793 [INFO] Skipping bill 2014377 - already processed (1342/2564)
2025-11-14 15:34:09,793 [INFO] Skipping bill 2017695 - already processed (1343/2564)
2025-11-14 15:34:09,793 [INFO] Skipping bill 2018632 - already processed (1344/2564)
2025-11-14 15:34:09,793 [INFO] Skipping bill 2022666 - already processed (1345/2564)
2025-11-14 15:34:09,793 [INFO] Skipping bill 2022828 - already processed (1346/2564)
2025-11-14 15:34:09,793 [INFO] Skipping bill 2015551 - already processed (1347/2564)
2025-11-14 15:34:09,793 [INFO] Skipping bill 2009244 - already processed (1348/2564)
2025-11-14 15:34:09,793 [INFO] Skipping bill 1969116 - already processed (1349/2564)
2025-11-14 15:34:09,793 [INFO] Skipping bill 2009761 - already processed (1350/2564)
2025-11-14 15:34:09,793 [INFO] Processing 1351/2564: Bill ID 2012916
2025-11-14 15:34:10,549 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:34:10,551 [ERROR] Failed to generate report for bill 2012916: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 131894 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:34:11,565 [INFO] Skipping bill 1996111 - already processed (1352/2564)
2025-11-14 15:34:11,567 [INFO] Skipping bill 1656324 - already processed (1353/2564)
2025-11-14 15:34:11,567 [INFO] Skipping bill 1640560 - already processed (1354/2564)
2025-11-14 15:34:11,567 [INFO] Skipping bill 1644790 - already processed (1355/2564)
2025-11-14 15:34:11,567 [INFO] Skipping bill 1908973 - already processed (1356/2564)
2025-11-14 15:34:11,567 [INFO] Skipping bill 1930471 - already processed (1357/2564)
2025-11-14 15:34:11,568 [INFO] Skipping bill 1916131 - already processed (1358/2564)
2025-11-14 15:34:11,568 [INFO] Skipping bill 1916897 - already processed (1359/2564)
2025-11-14 15:34:11,568 [INFO] Skipping bill 1930219 - already processed (1360/2564)
2025-11-14 15:34:11,568 [INFO] Skipping bill 1916725 - already processed (1361/2564)
2025-11-14 15:34:11,568 [INFO] Skipping bill 1916697 - already processed (1362/2564)
2025-11-14 15:34:11,568 [INFO] Skipping bill 1921549
- already processed (1363/2564) 2025-11-14 15:34:11,569 [INFO] Skipping bill 1916032 - already processed (1364/2564) 2025-11-14 15:34:11,569 [INFO] Skipping bill 1915939 - already processed (1365/2564) 2025-11-14 15:34:11,569 [INFO] Skipping bill 1899315 - already processed (1366/2564) 2025-11-14 15:34:11,569 [INFO] Skipping bill 1930747 - already processed (1367/2564) 2025-11-14 15:34:11,569 [INFO] Skipping bill 1898936 - already processed (1368/2564) 2025-11-14 15:34:11,569 [INFO] Skipping bill 1828241 - already processed (1369/2564) 2025-11-14 15:34:11,569 [INFO] Skipping bill 1784887 - already processed (1370/2564) 2025-11-14 15:34:11,569 [INFO] Processing 1371/2564: Bill ID 1710984 2025-11-14 15:34:18,267 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:34:18,270 [ERROR] Failed to generate report for bill 1710984: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 2157293 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 2157293 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:34:19,280 [INFO] Processing 1372/2564: Bill ID 1710996 2025-11-14 15:34:22,467 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:34:22,469 [ERROR] Failed to generate report for bill 1710996: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1053567 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1053567 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:34:23,487 [INFO] Processing 1373/2564: Bill ID 1659671 2025-11-14 15:34:26,355 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:34:26,356 [ERROR] Failed to generate report for bill 1659671: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1053812 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1053812 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:34:27,369 [INFO] Skipping bill 2046561 - already processed (1374/2564) 2025-11-14 15:34:27,370 [INFO] Skipping bill 2018937 - already processed (1375/2564) 2025-11-14 15:34:27,370 [INFO] Skipping bill 2046538 - already processed (1376/2564) 2025-11-14 15:34:27,371 [INFO] Skipping bill 2038933 - already processed (1377/2564) 2025-11-14 15:34:27,371 [INFO] Skipping bill 2019064 - already processed (1378/2564) 2025-11-14 15:34:27,371 [INFO] Skipping bill 1973495 - already processed (1379/2564) 2025-11-14 15:34:27,371 [INFO] Skipping bill 2044900 - already processed (1380/2564) 2025-11-14 15:34:27,371 [INFO] Skipping bill 2036911 - already processed (1381/2564) 2025-11-14 15:34:27,371 [INFO] Skipping bill 1956347 - already processed (1382/2564) 2025-11-14 15:34:27,371 [INFO] Skipping bill 2015680 - already processed (1383/2564) 2025-11-14 15:34:27,371 [INFO] Skipping bill 2035837 - already processed (1384/2564) 2025-11-14 15:34:27,371 [INFO] Processing 1385/2564: Bill ID 1966320 2025-11-14 15:34:34,278 [INFO] HTTP Request: POST 
https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:34:34,280 [ERROR] Failed to generate report for bill 1966320: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1949605 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... 
**kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... **kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return 
self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1949605 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:34:35,298 [INFO] Processing 1386/2564: Bill ID 2044413 2025-11-14 15:34:36,211 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:34:36,213 [ERROR] Failed to generate report for bill 2044413: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 280919 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 280919 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:34:37,235 [INFO] Processing 1387/2564: Bill ID 2031116 2025-11-14 15:34:38,983 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:34:38,985 [ERROR] Failed to generate report for bill 2031116: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 344621 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 344621 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:34:40,002 [INFO] Skipping bill 1820171 - already processed (1388/2564) 2025-11-14 15:34:40,002 [INFO] Skipping bill 1820684 - already processed (1389/2564) 2025-11-14 15:34:40,002 [INFO] Skipping bill 1820075 - already processed (1390/2564) 2025-11-14 15:34:40,002 [INFO] Skipping bill 1820478 - already processed (1391/2564) 2025-11-14 15:34:40,002 [INFO] Skipping bill 1820697 - already processed (1392/2564) 2025-11-14 15:34:40,003 [INFO] Skipping bill 1821348 - already processed (1393/2564) 2025-11-14 15:34:40,003 [INFO] Skipping bill 1819421 - already processed (1394/2564) 2025-11-14 15:34:40,003 [INFO] Skipping bill 1820795 - already processed (1395/2564) 2025-11-14 15:34:40,003 [INFO] Skipping bill 1814318 - already processed (1396/2564) 2025-11-14 15:34:40,003 [INFO] Skipping bill 1814441 - already processed (1397/2564) 2025-11-14 15:34:40,003 [INFO] Skipping bill 1791289 - already processed (1398/2564) 2025-11-14 15:34:40,003 [INFO] Skipping bill 1789468 - already processed (1399/2564) 2025-11-14 15:34:40,004 [INFO] Skipping bill 
1924199 - already processed (1400/2564) 2025-11-14 15:34:40,004 [INFO] Skipping bill 1920208 - already processed (1401/2564) 2025-11-14 15:34:40,004 [INFO] Skipping bill 1920320 - already processed (1402/2564) 2025-11-14 15:34:40,004 [INFO] Skipping bill 1923586 - already processed (1403/2564) 2025-11-14 15:34:40,005 [INFO] Skipping bill 1918327 - already processed (1404/2564) 2025-11-14 15:34:40,005 [INFO] Skipping bill 1922702 - already processed (1405/2564) 2025-11-14 15:34:40,005 [INFO] Skipping bill 1923122 - already processed (1406/2564) 2025-11-14 15:34:40,005 [INFO] Skipping bill 1924269 - already processed (1407/2564) 2025-11-14 15:34:40,005 [INFO] Skipping bill 1925220 - already processed (1408/2564) 2025-11-14 15:34:40,005 [INFO] Skipping bill 1924640 - already processed (1409/2564) 2025-11-14 15:34:40,005 [INFO] Skipping bill 1924912 - already processed (1410/2564) 2025-11-14 15:34:40,005 [INFO] Skipping bill 1900252 - already processed (1411/2564) 2025-11-14 15:34:40,005 [INFO] Skipping bill 2018241 - already processed (1412/2564) 2025-11-14 15:34:40,005 [INFO] Skipping bill 1920876 - already processed (1413/2564) 2025-11-14 15:34:40,005 [INFO] Skipping bill 1920720 - already processed (1414/2564) 2025-11-14 15:34:40,005 [INFO] Skipping bill 1925546 - already processed (1415/2564) 2025-11-14 15:34:40,006 [INFO] Skipping bill 1903378 - already processed (1416/2564) 2025-11-14 15:34:40,006 [INFO] Skipping bill 1921990 - already processed (1417/2564) 2025-11-14 15:34:40,006 [INFO] Skipping bill 1922805 - already processed (1418/2564) 2025-11-14 15:34:40,006 [INFO] Skipping bill 1922842 - already processed (1419/2564) 2025-11-14 15:34:40,006 [INFO] Skipping bill 1836006 - already processed (1420/2564) 2025-11-14 15:34:40,006 [INFO] Skipping bill 1836109 - already processed (1421/2564) 2025-11-14 15:34:40,006 [INFO] Skipping bill 1843504 - already processed (1422/2564) 2025-11-14 15:34:40,006 [INFO] Skipping bill 1973003 - already processed (1423/2564) 
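Every 400 response in this run is the same `context_length_exceeded` failure: the serialized bill passed as `{"bill_json": bill_json}` to `chain.invoke` in `create_detailed_report` ranges from roughly 132k to 2.16M tokens against a 128,000-token window. A minimal sketch of a pre-flight guard that could run before that call is shown below; the 100,000-token budget and the four-characters-per-token estimate are assumptions for illustration, and a real fix would count with an actual tokenizer (e.g. tiktoken) or summarize the bill text in chunks rather than truncating it.

```python
# Sketch: cap the serialized bill before handing it to the LLM chain.
# The budget and chars-per-token ratio are illustrative assumptions.
import json

MAX_PROMPT_TOKENS = 100_000   # headroom under the 128k context window
CHARS_PER_TOKEN = 4           # rough average for English JSON text

def fit_bill_json(bill: dict) -> str:
    """Serialize a bill and truncate it so the estimated token count fits."""
    bill_json = json.dumps(bill)
    budget_chars = MAX_PROMPT_TOKENS * CHARS_PER_TOKEN
    if len(bill_json) <= budget_chars:
        return bill_json
    # Keep the head of the document; bill metadata typically comes first.
    return bill_json[:budget_chars]

# Example: a bill whose full text alone would blow past the window.
huge_bill = {"bill_id": 1710984, "full_text": "x" * 9_000_000}
print(len(fit_bill_json(huge_bill)))  # 400000
```

With a guard like this the request would at least reach the model; whether a truncated bill still yields a useful report is a separate question, which is why chunked summarization is the more common remedy.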
2025-11-14 15:34:40,006 [INFO] Skipping bill 2009609 - already processed (1424/2564)
2025-11-14 15:34:40,006 [INFO] Skipping bill 1986214 - already processed (1425/2564)
2025-11-14 15:34:40,006 [INFO] Skipping bill 1912749 - already processed (1426/2564)
2025-11-14 15:34:40,006 [INFO] Skipping bill 1914095 - already processed (1427/2564)
2025-11-14 15:34:40,006 [INFO] Skipping bill 1914598 - already processed (1428/2564)
2025-11-14 15:34:40,006 [INFO] Skipping bill 1913104 - already processed (1429/2564)
2025-11-14 15:34:40,006 [INFO] Skipping bill 1914569 - already processed (1430/2564)
2025-11-14 15:34:40,006 [INFO] Skipping bill 1930373 - already processed (1431/2564)
2025-11-14 15:34:40,006 [INFO] Skipping bill 1982090 - already processed (1432/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 1914274 - already processed (1433/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 1982120 - already processed (1434/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 1773806 - already processed (1435/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 1880673 - already processed (1436/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 1724997 - already processed (1437/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 1775230 - already processed (1438/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 1889846 - already processed (1439/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 1773451 - already processed (1440/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 1759469 - already processed (1441/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 1777407 - already processed (1442/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 1880554 - already processed (1443/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 1854268 - already processed (1444/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 1771135 - already processed (1445/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 1830478 - already processed (1446/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 1780085 - already processed (1447/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 1858003 - already processed (1448/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 1880735 - already processed (1449/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 1882950 - already processed (1450/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 1878925 - already processed (1451/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 1878252 - already processed (1452/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 1884263 - already processed (1453/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 1873862 - already processed (1454/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 1882265 - already processed (1455/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 1771247 - already processed (1456/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 1836612 - already processed (1457/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 1820748 - already processed (1458/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 1886418 - already processed (1459/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 1769931 - already processed (1460/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 1740020 - already processed (1461/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 1878961 - already processed (1462/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 1768592 - already processed (1463/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 2045757 - already processed (1464/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 2030536 - already processed (1465/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 2047301 - already processed (1466/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 2039357 - already processed (1467/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 2034685 - already processed (1468/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 2037642 - already processed (1469/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 2022168 - already processed (1470/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 1937863 - already processed (1471/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 2043639 - already processed (1472/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 2012593 - already processed (1473/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 1947924 - already processed (1474/2564)
2025-11-14 15:34:40,007 [INFO] Skipping bill 1991206 - already processed (1475/2564)
2025-11-14 15:34:40,008 [INFO] Skipping bill 2012408 - already processed (1476/2564)
2025-11-14 15:34:40,008 [INFO] Skipping bill 2021116 - already processed (1477/2564)
2025-11-14 15:34:40,008 [INFO] Skipping bill 1973751 - already processed (1478/2564)
2025-11-14 15:34:40,008 [INFO] Skipping bill 2045246 - already processed (1479/2564)
2025-11-14 15:34:40,008 [INFO] Skipping bill 1910852 - already processed (1480/2564)
2025-11-14 15:34:40,008 [INFO] Skipping bill 1956391 - already processed (1481/2564)
2025-11-14 15:34:40,008 [INFO] Skipping bill 2023404 - already processed (1482/2564)
2025-11-14 15:34:40,008 [INFO] Skipping bill 2035307 - already processed (1483/2564)
2025-11-14 15:34:40,008 [INFO] Skipping bill 1944456 - already processed (1484/2564)
2025-11-14 15:34:40,008 [INFO] Skipping bill 2041064 - already processed (1485/2564)
2025-11-14 15:34:40,008 [INFO] Skipping bill 2039278 - already processed (1486/2564)
2025-11-14 15:34:40,008 [INFO] Skipping bill 2041823 - already processed (1487/2564)
2025-11-14 15:34:40,008 [INFO] Skipping bill 2038442 - already processed (1488/2564)
2025-11-14 15:34:40,008 [INFO] Skipping bill 1905925 - already processed (1489/2564)
2025-11-14 15:34:40,008 [INFO] Processing 1490/2564: Bill ID 2041076
2025-11-14 15:34:43,121 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:34:43,122 [ERROR] Failed to generate report for bill 2041076: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 136745 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
    ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
    ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 136745 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:34:43,173 [INFO] Saved 2564 reports to data/bill_reports.json
2025-11-14 15:34:43,174 [INFO] Progress: 1490/2564 - Processed: 0, Skipped: 1420, Errors: 70
2025-11-14 15:34:44,184 [INFO] Processing 1491/2564: Bill ID 2037948
2025-11-14 15:34:44,685 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:34:44,688 [ERROR] Failed to generate report for bill 2037948: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 136836 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
    ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
    ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 136836 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:34:45,703 [INFO] Skipping bill 1757100 - already processed (1492/2564)
2025-11-14 15:34:45,704 [INFO] Skipping bill 1766918 - already processed (1493/2564)
2025-11-14 15:34:45,704 [INFO] Skipping bill 1691606 - already processed (1494/2564)
2025-11-14 15:34:45,704 [INFO] Skipping bill 1757087 - already processed (1495/2564)
2025-11-14 15:34:45,704 [INFO] Skipping bill 1691984 - already processed (1496/2564)
2025-11-14 15:34:45,704 [INFO] Skipping bill 1724146 - already processed (1497/2564)
2025-11-14 15:34:45,704 [INFO] Skipping bill 1811367 - already processed (1498/2564)
2025-11-14 15:34:45,704 [INFO] Skipping bill 1864559 - already processed (1499/2564)
2025-11-14 15:34:45,705 [INFO] Skipping bill 1833383 - already processed (1500/2564)
2025-11-14 15:34:45,705 [INFO] Skipping bill 1839979 - already processed (1501/2564)
2025-11-14 15:34:45,705 [INFO] Skipping bill 1863636 - already processed (1502/2564)
2025-11-14 15:34:45,705 [INFO] Skipping bill 1866932 - already processed (1503/2564)
2025-11-14 15:34:45,705 [INFO] Skipping bill 1829566 - already processed (1504/2564)
2025-11-14 15:34:45,705 [INFO] Skipping bill 1858179 - already processed (1505/2564)
2025-11-14 15:34:45,705 [INFO] Skipping bill 1857154 - already processed (1506/2564)
2025-11-14 15:34:45,705 [INFO] Skipping bill 1866872 - already processed (1507/2564)
2025-11-14 15:34:45,706 [INFO] Skipping bill 1844272 - already processed (1508/2564)
2025-11-14 15:34:45,706 [INFO] Skipping bill 1875576 - already processed (1509/2564)
2025-11-14 15:34:45,706 [INFO] Skipping bill 1875933 - already processed (1510/2564)
2025-11-14 15:34:45,706 [INFO] Skipping bill 1844730 - already processed (1511/2564)
2025-11-14 15:34:45,706 [INFO] Skipping bill 1858971 - already processed (1512/2564)
2025-11-14 15:34:45,706 [INFO] Skipping bill 1870027 - already processed (1513/2564)
2025-11-14 15:34:45,706 [INFO] Skipping bill 1994761 - already processed (1514/2564)
2025-11-14 15:34:45,706 [INFO] Skipping bill 1935080 - already processed (1515/2564)
2025-11-14 15:34:45,707 [INFO] Skipping bill 1945535 - already processed (1516/2564)
2025-11-14 15:34:45,708 [INFO] Skipping bill 1979504 - already processed (1517/2564)
2025-11-14 15:34:45,709 [INFO] Skipping bill 1937835 - already processed (1518/2564)
2025-11-14 15:34:45,709 [INFO] Skipping bill 1918971 - already processed (1519/2564)
2025-11-14 15:34:45,710 [INFO] Skipping bill 1986390 - already processed (1520/2564)
2025-11-14 15:34:45,710 [INFO] Skipping bill 1945988 - already processed (1521/2564)
2025-11-14 15:34:45,710 [INFO] Skipping bill 1940828 - already processed (1522/2564)
2025-11-14 15:34:45,710 [INFO] Skipping bill 1986602 - already processed (1523/2564)
2025-11-14 15:34:45,710 [INFO] Skipping bill 1988979 - already processed (1524/2564)
2025-11-14 15:34:45,711 [INFO] Skipping bill 2008057 - already processed (1525/2564)
2025-11-14 15:34:45,711 [INFO] Skipping bill 1986556 - already processed (1526/2564)
2025-11-14 15:34:45,711 [INFO] Skipping bill 1986569 - already processed (1527/2564)
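[Editor's note: the repeated `context_length_exceeded` errors above occur because `create_detailed_report` sends the full bill JSON to a model with a 128,000-token context window. A minimal sketch of a guard that could run before `chain.invoke` is shown below; the function name `truncate_bill_json` and the 4-characters-per-token estimate are hypothetical illustrations, not part of the actual script (an accurate count would use a tokenizer such as `tiktoken`).]

```python
import json

MAX_INPUT_TOKENS = 120_000  # headroom under the model's 128k context limit
CHARS_PER_TOKEN = 4         # rough heuristic; use a real tokenizer for accuracy


def truncate_bill_json(bill: dict) -> str:
    """Serialize a bill to JSON, hard-truncating the string when its
    estimated token count would exceed the model's context window."""
    bill_json = json.dumps(bill)
    max_chars = MAX_INPUT_TOKENS * CHARS_PER_TOKEN
    if len(bill_json) <= max_chars:
        return bill_json
    # Oversized payload: keep only the leading portion that fits.
    return bill_json[:max_chars]
```

A smarter variant might drop or summarize the longest fields (e.g. full bill text) instead of cutting the serialized string mid-field, but even this crude check would turn a hard 400 error into a degraded-but-successful report.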
2025-11-14 15:34:45,711 [INFO] Skipping bill 1988788 - already processed (1528/2564)
2025-11-14 15:34:45,711 [INFO] Skipping bill 2028551 - already processed (1529/2564)
2025-11-14 15:34:45,711 [INFO] Skipping bill 1937524 - already processed (1530/2564)
2025-11-14 15:34:45,712 [INFO] Skipping bill 1966994 - already processed (1531/2564)
2025-11-14 15:34:45,712 [INFO] Skipping bill 2030023 - already processed (1532/2564)
2025-11-14 15:34:45,712 [INFO] Skipping bill 1988713 - already processed (1533/2564)
2025-11-14 15:34:45,712 [INFO] Skipping bill 1988914 - already processed (1534/2564)
2025-11-14 15:34:45,712 [INFO] Skipping bill 2030055 - already processed (1535/2564)
2025-11-14 15:34:45,712 [INFO] Skipping bill 1666116 - already processed (1536/2564)
2025-11-14 15:34:45,712 [INFO] Skipping bill 1792231 - already processed (1537/2564)
2025-11-14 15:34:45,712 [INFO] Skipping bill 1802681 - already processed (1538/2564)
2025-11-14 15:34:45,712 [INFO] Skipping bill 1921522 - already processed (1539/2564)
2025-11-14 15:34:45,712 [INFO] Skipping bill 1999928 - already processed (1540/2564)
2025-11-14 15:34:45,712 [INFO] Skipping bill 2022730 - already processed (1541/2564)
2025-11-14 15:34:45,712 [INFO] Skipping bill 2024009 - already processed (1542/2564)
2025-11-14 15:34:45,713 [INFO] Skipping bill 1895318 - already processed (1543/2564)
2025-11-14 15:34:45,713 [INFO] Skipping bill 1944028 - already processed (1544/2564)
2025-11-14 15:34:45,713 [INFO] Skipping bill 1954350 - already processed (1545/2564)
2025-11-14 15:34:45,713 [INFO] Skipping bill 1954733 - already processed (1546/2564)
2025-11-14 15:34:45,713 [INFO] Skipping bill 2029172 - already processed (1547/2564)
2025-11-14 15:34:45,713 [INFO] Skipping bill 1944096 - already processed (1548/2564)
2025-11-14 15:34:45,713 [INFO] Skipping bill 1895182 - already processed (1549/2564)
2025-11-14 15:34:45,713 [INFO] Skipping bill 1919972 - already processed (1550/2564)
2025-11-14 15:34:45,713 [INFO] Skipping bill 1895637 - already processed (1551/2564)
2025-11-14 15:34:45,713 [INFO] Skipping bill 1819620 - already processed (1552/2564)
2025-11-14 15:34:45,713 [INFO] Skipping bill 1811138 - already processed (1553/2564)
2025-11-14 15:34:45,714 [INFO] Skipping bill 1948251 - already processed (1554/2564)
2025-11-14 15:34:45,714 [INFO] Skipping bill 1901594 - already processed (1555/2564)
2025-11-14 15:34:45,714 [INFO] Skipping bill 1833554 - already processed (1556/2564)
2025-11-14 15:34:45,714 [INFO] Skipping bill 1833050 - already processed (1557/2564)
2025-11-14 15:34:45,714 [INFO] Skipping bill 1830912 - already processed (1558/2564)
2025-11-14 15:34:45,714 [INFO] Skipping bill 1834207 - already processed (1559/2564)
2025-11-14 15:34:45,714 [INFO] Skipping bill 1795187 - already processed (1560/2564)
2025-11-14 15:34:45,714 [INFO] Skipping bill 1828458 - already processed (1561/2564)
2025-11-14 15:34:45,714 [INFO] Skipping bill 1808304 - already processed (1562/2564)
2025-11-14 15:34:45,714 [INFO] Skipping bill 1834240 - already processed (1563/2564)
2025-11-14 15:34:45,715 [INFO] Skipping bill 1831671 - already processed (1564/2564)
2025-11-14 15:34:45,715 [INFO] Skipping bill 1832378 - already processed (1565/2564)
2025-11-14 15:34:45,715 [INFO] Skipping bill 1828742 - already processed (1566/2564)
2025-11-14 15:34:45,715 [INFO] Skipping bill 1833429 - already processed (1567/2564)
2025-11-14 15:34:45,715 [INFO] Skipping bill 1828784 - already processed (1568/2564)
2025-11-14 15:34:45,715 [INFO] Skipping bill 1825620 - already processed (1569/2564)
2025-11-14 15:34:45,715 [INFO] Skipping bill 1799785 - already processed (1570/2564)
2025-11-14 15:34:45,715 [INFO] Skipping bill 1832466 - already processed (1571/2564)
2025-11-14 15:34:45,715 [INFO] Skipping bill 1831669 - already processed (1572/2564)
2025-11-14 15:34:45,715 [INFO] Skipping bill 1832147 - already processed (1573/2564)
2025-11-14 15:34:45,715 [INFO] Skipping bill 1831971 - already processed (1574/2564)
2025-11-14 15:34:45,715 [INFO] Skipping bill 1832437 - already processed (1575/2564)
2025-11-14 15:34:45,715 [INFO] Skipping bill 1828244 - already processed (1576/2564)
2025-11-14 15:34:45,715 [INFO] Skipping bill 1833731 - already processed (1577/2564)
2025-11-14 15:34:45,715 [INFO] Skipping bill 1833264 - already processed (1578/2564)
2025-11-14 15:34:45,715 [INFO] Skipping bill 1833393 - already processed (1579/2564)
2025-11-14 15:34:45,716 [INFO] Skipping bill 1825869 - already processed (1580/2564)
2025-11-14 15:34:45,716 [INFO] Skipping bill 1825916 - already processed (1581/2564)
2025-11-14 15:34:45,716 [INFO] Skipping bill 1873399 - already processed (1582/2564)
2025-11-14 15:34:45,716 [INFO] Skipping bill 1826595 - already processed (1583/2564)
2025-11-14 15:34:45,716 [INFO] Skipping bill 1832185 - already processed (1584/2564)
2025-11-14 15:34:45,716 [INFO] Skipping bill 1832434 - already processed (1585/2564)
2025-11-14 15:34:45,716 [INFO] Skipping bill 1831535 - already processed (1586/2564)
2025-11-14 15:34:45,716 [INFO] Skipping bill 1834179 - already processed (1587/2564)
2025-11-14 15:34:45,716 [INFO] Skipping bill 1834106 - already processed (1588/2564)
2025-11-14 15:34:45,716 [INFO] Skipping bill 1946381 - already processed (1589/2564)
2025-11-14 15:34:45,716 [INFO] Skipping bill 1953992 - already processed (1590/2564)
2025-11-14 15:34:45,716 [INFO] Skipping bill 1948149 - already processed (1591/2564)
2025-11-14 15:34:45,716 [INFO] Skipping bill 1959470 - already processed (1592/2564)
2025-11-14 15:34:45,716 [INFO] Skipping bill 1946783 - already processed (1593/2564)
2025-11-14 15:34:45,716 [INFO] Skipping bill 1955110 - already processed (1594/2564)
2025-11-14 15:34:45,716 [INFO] Skipping bill 1959302 - already processed (1595/2564)
2025-11-14 15:34:45,716 [INFO] Skipping bill 1959458 - already processed (1596/2564)
2025-11-14 15:34:45,716 [INFO] Skipping bill 1960722 - already processed (1597/2564)
2025-11-14 15:34:45,716 [INFO] Skipping bill 1951003 - already processed (1598/2564)
2025-11-14 15:34:45,716 [INFO] Skipping bill 1954702 - already processed (1599/2564)
2025-11-14 15:34:45,716 [INFO] Skipping bill 1954311 - already processed (1600/2564)
2025-11-14 15:34:45,716 [INFO] Skipping bill 1959312 - already processed (1601/2564)
2025-11-14 15:34:45,717 [INFO] Skipping bill 1959377 - already processed (1602/2564)
2025-11-14 15:34:45,717 [INFO] Skipping bill 1954015 - already processed (1603/2564)
2025-11-14 15:34:45,717 [INFO] Skipping bill 1954357 - already processed (1604/2564)
2025-11-14 15:34:45,717 [INFO] Skipping bill 1944274 - already processed (1605/2564)
2025-11-14 15:34:45,717 [INFO] Skipping bill 1944487 - already processed (1606/2564)
2025-11-14 15:34:45,717 [INFO] Skipping bill 1959723 - already processed (1607/2564)
2025-11-14 15:34:45,717 [INFO] Skipping bill 1960832 - already processed (1608/2564)
2025-11-14 15:34:45,717 [INFO] Skipping bill 1971015 - already processed (1609/2564)
2025-11-14 15:34:45,717 [INFO] Skipping bill 1971366 - already processed (1610/2564)
2025-11-14 15:34:45,717 [INFO] Skipping bill 1733375 - already processed (1611/2564)
2025-11-14 15:34:45,717 [INFO] Skipping bill 1700527 - already processed (1612/2564)
2025-11-14 15:34:45,717 [INFO] Skipping bill 1719413 - already processed (1613/2564)
2025-11-14 15:34:45,717 [INFO] Skipping bill 1694457 - already processed (1614/2564)
2025-11-14 15:34:45,717 [INFO] Skipping bill 1744060 - already processed (1615/2564)
2025-11-14 15:34:45,717 [INFO] Skipping bill 1727826 - already processed (1616/2564)
2025-11-14 15:34:45,717 [INFO] Skipping bill 1743424 - already processed (1617/2564)
2025-11-14 15:34:45,717 [INFO] Skipping bill 1732248 - already processed (1618/2564)
2025-11-14 15:34:45,717 [INFO] Skipping bill 1731629 - already processed (1619/2564)
2025-11-14 15:34:45,717 [INFO] Skipping bill 1769317 - already processed (1620/2564)
2025-11-14 15:34:45,717 [INFO] Skipping bill 1747471 - already processed (1621/2564)
2025-11-14 15:34:45,717 [INFO] Skipping bill 1747557 - already processed (1622/2564)
2025-11-14 15:34:45,717 [INFO] Skipping bill 1710763 - already processed (1623/2564)
2025-11-14 15:34:45,718 [INFO] Skipping bill 1782999 - already processed (1624/2564)
2025-11-14 15:34:45,718 [INFO] Skipping bill 1781207 - already processed (1625/2564)
2025-11-14 15:34:45,718 [INFO] Skipping bill 1726065 - already processed (1626/2564)
2025-11-14 15:34:45,718 [INFO] Skipping bill 1898826 - already processed (1627/2564)
2025-11-14 15:34:45,718 [INFO] Skipping bill 1992725 - already processed (1628/2564)
2025-11-14 15:34:45,718 [INFO] Skipping bill 1988473 - already processed (1629/2564)
2025-11-14 15:34:45,718 [INFO] Skipping bill 1970030 - already processed (1630/2564)
2025-11-14 15:34:45,718 [INFO] Skipping bill 2007109 - already processed (1631/2564)
2025-11-14 15:34:45,718 [INFO] Skipping bill 1891805 - already processed (1632/2564)
2025-11-14 15:34:45,718 [INFO] Skipping bill 1949957 - already processed (1633/2564)
2025-11-14 15:34:45,718 [INFO] Skipping bill 1990181 - already processed (1634/2564)
2025-11-14 15:34:45,718 [INFO] Skipping bill 1991711 - already processed (1635/2564)
2025-11-14 15:34:45,718 [INFO] Skipping bill 1897779 - already processed (1636/2564)
2025-11-14 15:34:45,718 [INFO] Skipping bill 2006851 - already processed (1637/2564)
2025-11-14 15:34:45,718 [INFO] Skipping bill 1975361 - already processed (1638/2564)
2025-11-14 15:34:45,718 [INFO] Skipping bill 1987235 - already processed (1639/2564)
2025-11-14 15:34:45,718 [INFO] Skipping bill 2007736 - already processed (1640/2564)
2025-11-14 15:34:45,718 [INFO] Skipping bill 2000200 - already processed (1641/2564)
2025-11-14 15:34:45,718 [INFO] Skipping bill 1923991 - already processed (1642/2564)
2025-11-14 15:34:45,718 [INFO] Skipping bill 1892858 - already processed (1643/2564)
2025-11-14 15:34:45,718 [INFO] Skipping bill 2000248 - already processed (1644/2564)
2025-11-14 15:34:45,719 [INFO] Skipping bill 1971072 - already processed (1645/2564)
2025-11-14 15:34:45,719 [INFO] Skipping bill 2008077 - already processed (1646/2564)
2025-11-14 15:34:45,719 [INFO] Skipping bill 1907668 - already processed (1647/2564)
2025-11-14 15:34:45,719 [INFO] Skipping bill 1962916 - already processed (1648/2564)
2025-11-14 15:34:45,719 [INFO] Skipping bill 2005286 - already processed (1649/2564)
2025-11-14 15:34:45,719 [INFO] Skipping bill 2005181 - already processed (1650/2564)
2025-11-14 15:34:45,719 [INFO] Skipping bill 1891063 - already processed (1651/2564)
2025-11-14 15:34:45,719 [INFO] Skipping bill 1900186 - already processed (1652/2564)
2025-11-14 15:34:45,719 [INFO] Skipping bill 1994657 - already processed (1653/2564)
2025-11-14 15:34:45,719 [INFO] Skipping bill 2008307 - already processed (1654/2564)
2025-11-14 15:34:45,719 [INFO] Skipping bill 1991260 - already processed (1655/2564)
2025-11-14 15:34:45,719 [INFO] Skipping bill 2006384 - already processed (1656/2564)
2025-11-14 15:34:45,719 [INFO] Skipping bill 2002051 - already processed (1657/2564)
2025-11-14 15:34:45,719 [INFO] Skipping bill 1973236 - already processed (1658/2564)
2025-11-14 15:34:45,719 [INFO] Skipping bill 2007316 - already processed (1659/2564)
2025-11-14 15:34:45,719 [INFO] Skipping bill 1890894 - already processed (1660/2564)
2025-11-14 15:34:45,719 [INFO] Skipping bill 2000178 - already processed (1661/2564)
2025-11-14 15:34:45,719 [INFO] Skipping bill 1982970 - already processed (1662/2564)
2025-11-14 15:34:45,719 [INFO] Skipping bill 2006497 - already processed (1663/2564)
2025-11-14 15:34:45,719 [INFO] Skipping bill 1890775 - already processed (1664/2564)
2025-11-14 15:34:45,719 [INFO] Skipping bill 1892224 - already processed (1665/2564)
2025-11-14 15:34:45,720 [INFO] Skipping bill 1954141 - already processed (1666/2564)
2025-11-14 15:34:45,720 [INFO] Skipping bill 2006579 - already processed (1667/2564)
2025-11-14 15:34:45,720 [INFO] Skipping bill 2006128 - already processed (1668/2564)
2025-11-14 15:34:45,720 [INFO] Skipping bill 2024097 - already processed (1669/2564)
2025-11-14 15:34:45,720 [INFO] Skipping bill 2034878 - already processed (1670/2564)
2025-11-14 15:34:45,720 [INFO] Skipping bill 1891396 - already processed (1671/2564)
2025-11-14 15:34:45,720 [INFO] Skipping bill 2040103 - already processed (1672/2564)
2025-11-14 15:34:45,720 [INFO] Skipping bill 2041986 - already processed (1673/2564)
2025-11-14 15:34:45,720 [INFO] Skipping bill 1987712 - already processed (1674/2564)
2025-11-14 15:34:45,720 [INFO] Skipping bill 2005998 - already processed (1675/2564)
2025-11-14 15:34:45,720 [INFO] Skipping bill 2008318 - already processed (1676/2564)
2025-11-14 15:34:45,720 [INFO] Skipping bill 1892843 - already processed (1677/2564)
2025-11-14 15:34:45,720 [INFO] Skipping bill 1946392 - already processed (1678/2564)
2025-11-14 15:34:45,720 [INFO] Skipping bill 1971169 - already processed (1679/2564)
2025-11-14 15:34:45,720 [INFO] Skipping bill 1890786 - already processed (1680/2564)
2025-11-14 15:34:45,720 [INFO] Skipping bill 1891256 - already processed (1681/2564)
2025-11-14 15:34:45,720 [INFO] Skipping bill 1942882 - already processed (1682/2564)
2025-11-14 15:34:45,720 [INFO] Skipping bill 2031981 - already processed (1683/2564)
2025-11-14 15:34:45,720 [INFO] Skipping bill 2033602 - already processed (1684/2564)
2025-11-14 15:34:45,720 [INFO] Skipping bill 2034279 - already processed (1685/2564)
2025-11-14 15:34:45,720 [INFO] Skipping bill 1974704 - already processed (1686/2564)
2025-11-14 15:34:45,720 [INFO] Skipping bill 1950849 - already processed (1687/2564)
2025-11-14 15:34:45,720 [INFO] Skipping bill 1975022 - already processed (1688/2564)
2025-11-14 15:34:45,721 [INFO] Skipping bill 1981850 - already processed (1689/2564)
2025-11-14 15:34:45,721 [INFO] Skipping bill 1890492 - already processed (1690/2564)
2025-11-14 15:34:45,721 [INFO] Skipping bill 2020803 - already processed (1691/2564)
2025-11-14 15:34:45,721 [INFO] Skipping bill 2005343 - already processed (1692/2564)
2025-11-14 15:34:45,721 [INFO] Skipping bill 1890466 - already processed (1693/2564)
2025-11-14 15:34:45,721 [INFO] Skipping bill 1975612 - already processed (1694/2564)
2025-11-14 15:34:45,721 [INFO] Skipping bill 1994176 - already processed (1695/2564)
2025-11-14 15:34:45,721 [INFO] Skipping bill 1990550 - already processed (1696/2564)
2025-11-14 15:34:45,721 [INFO] Skipping bill 1891411 - already processed (1697/2564)
2025-11-14 15:34:45,721 [INFO] Skipping bill 1983542 - already processed (1698/2564)
2025-11-14 15:34:45,721 [INFO] Skipping bill 1999872 - already processed (1699/2564)
2025-11-14 15:34:45,721 [INFO] Skipping bill 2007449 - already processed (1700/2564)
2025-11-14 15:34:45,721 [INFO] Skipping bill 2039972 - already processed (1701/2564)
2025-11-14 15:34:45,721 [INFO] Skipping bill 1892428 - already processed (1702/2564)
2025-11-14 15:34:45,721 [INFO] Skipping bill 1891501 - already processed (1703/2564)
2025-11-14 15:34:45,721 [INFO] Skipping bill 2007840 - already processed (1704/2564)
2025-11-14 15:34:45,721 [INFO] Skipping bill 1976041 - already processed (1705/2564)
2025-11-14 15:34:45,721 [INFO] Skipping bill 1992763 - already processed (1706/2564)
2025-11-14 15:34:45,721 [INFO] Skipping bill 1993770 - already processed (1707/2564)
2025-11-14 15:34:45,721 [INFO] Skipping bill 2007872 - already processed (1708/2564)
2025-11-14 15:34:45,721 [INFO] Skipping bill 1936766 - already processed (1709/2564)
2025-11-14 15:34:45,721 [INFO] Skipping bill 1676049 - already processed (1710/2564)
2025-11-14 15:34:45,721 [INFO] Processing 1711/2564: Bill ID 1704512
2025-11-14 15:34:46,583 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:34:46,586 [ERROR] Failed to generate report for bill 1704512: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 178116 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
    ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
    ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
    ...<46 lines>...
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 178116 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:34:47,602 [INFO] Skipping bill 1828750 - already processed (1712/2564) 2025-11-14 15:34:47,604 [INFO] Skipping bill 1823594 - already processed (1713/2564) 2025-11-14 15:34:47,604 [INFO] Skipping bill 1820331 - already processed (1714/2564) 2025-11-14 15:34:47,604 [INFO] Skipping bill 1810219 - already processed (1715/2564) 2025-11-14 15:34:47,605 [INFO] Skipping bill 1813477 - already processed (1716/2564) 2025-11-14 15:34:47,605 [INFO] Skipping bill 1858814 - already processed (1717/2564) 2025-11-14 15:34:47,605 [INFO] Skipping bill 1882805 - already processed (1718/2564) 2025-11-14 15:34:47,605 [INFO] Skipping bill 1811586 - already processed (1719/2564) 2025-11-14 15:34:47,605 [INFO] Skipping bill 1794392 - already processed (1720/2564) 2025-11-14 15:34:47,606 [INFO] Processing 1721/2564: Bill ID 1844899 2025-11-14 15:34:48,141 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:34:48,144 [ERROR] Failed to generate report for bill 1844899: Error code: 400 - {'error': {'message': 
"This model's maximum context length is 128000 tokens. However, your messages resulted in 150202 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 150202 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:34:49,161 [INFO] Skipping bill 1954171 - already processed (1722/2564) 2025-11-14 15:34:49,161 [INFO] Skipping bill 1911041 - already processed (1723/2564) 2025-11-14 15:34:49,161 [INFO] Skipping bill 1963098 - already processed (1724/2564) 2025-11-14 15:34:49,162 [INFO] Skipping bill 1943827 - already processed (1725/2564) 2025-11-14 15:34:49,162 [INFO] Skipping bill 1968353 - already processed (1726/2564) 2025-11-14 15:34:49,162 [INFO] Skipping bill 1981617 - already processed (1727/2564) 2025-11-14 15:34:49,162 [INFO] Skipping bill 1995499 - already processed (1728/2564) 2025-11-14 15:34:49,162 [INFO] Skipping bill 1954569 - already processed (1729/2564) 2025-11-14 15:34:49,162 [INFO] Skipping bill 1950395 - already processed (1730/2564) 2025-11-14 15:34:49,162 [INFO] Skipping bill 1989323 - already processed (1731/2564) 2025-11-14 15:34:49,162 [INFO] Skipping bill 1904576 - already processed (1732/2564) 2025-11-14 15:34:49,162 [INFO] Skipping bill 1968434 - already processed (1733/2564) 2025-11-14 15:34:49,163 [INFO] Processing 
1734/2564: Bill ID 2046115 2025-11-14 15:34:50,582 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:34:50,586 [ERROR] Failed to generate report for bill 2046115: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 321718 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... 
**kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... **kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return 
self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 321718 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:34:51,601 [INFO] Skipping bill 1912099 - already processed (1735/2564) 2025-11-14 15:34:51,602 [INFO] Skipping bill 1946923 - already processed (1736/2564) 2025-11-14 15:34:51,602 [INFO] Processing 1737/2564: Bill ID 2046119 2025-11-14 15:34:52,377 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:34:52,379 [ERROR] Failed to generate report for bill 2046119: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 259421 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 259421 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:34:53,393 [INFO] Processing 1738/2564: Bill ID 1897901 2025-11-14 15:34:54,808 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:34:54,811 [ERROR] Failed to generate report for bill 1897901: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 499565 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 499565 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:34:55,829 [INFO] Processing 1739/2564: Bill ID 1948482 2025-11-14 15:34:56,726 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:34:56,728 [ERROR] Failed to generate report for bill 1948482: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 283315 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 283315 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:34:57,743 [INFO] Skipping bill 1800317 - already processed (1740/2564) 2025-11-14 15:34:57,744 [INFO] Skipping bill 1800156 - already processed (1741/2564) 2025-11-14 15:34:57,744 [INFO] Skipping bill 1854552 - already processed (1742/2564) 2025-11-14 15:34:57,744 [INFO] Skipping bill 1680053 - already processed (1743/2564) 2025-11-14 15:34:57,744 [INFO] Skipping bill 1682772 - already processed (1744/2564) 2025-11-14 15:34:57,744 [INFO] Skipping bill 1737434 - already processed (1745/2564) 2025-11-14 15:34:57,744 [INFO] Skipping bill 1981655 - already processed (1746/2564) 2025-11-14 15:34:57,744 [INFO] Skipping bill 1982851 - already processed (1747/2564) 2025-11-14 15:34:57,745 [INFO] Skipping bill 1934587 - already processed (1748/2564) 2025-11-14 15:34:57,745 [INFO] Skipping bill 1981303 - already processed (1749/2564) 2025-11-14 15:34:57,745 [INFO] Skipping bill 1983676 - already processed (1750/2564) 2025-11-14 15:34:57,745 [INFO] Skipping bill 1969845 - already processed (1751/2564) 2025-11-14 15:34:57,745 [INFO] Skipping bill 
1983355 - already processed (1752/2564) 2025-11-14 15:34:57,745 [INFO] Skipping bill 2009795 - already processed (1753/2564) 2025-11-14 15:34:57,745 [INFO] Skipping bill 1973485 - already processed (1754/2564) 2025-11-14 15:34:57,745 [INFO] Skipping bill 1967494 - already processed (1755/2564) 2025-11-14 15:34:57,746 [INFO] Skipping bill 1973283 - already processed (1756/2564) 2025-11-14 15:34:57,746 [INFO] Skipping bill 1639846 - already processed (1757/2564) 2025-11-14 15:34:57,746 [INFO] Skipping bill 1646426 - already processed (1758/2564) 2025-11-14 15:34:57,746 [INFO] Skipping bill 1673591 - already processed (1759/2564) 2025-11-14 15:34:57,746 [INFO] Skipping bill 1639749 - already processed (1760/2564) 2025-11-14 15:34:57,746 [INFO] Skipping bill 1655379 - already processed (1761/2564) 2025-11-14 15:34:57,746 [INFO] Skipping bill 1630766 - already processed (1762/2564) 2025-11-14 15:34:57,746 [INFO] Skipping bill 1630878 - already processed (1763/2564) 2025-11-14 15:34:57,746 [INFO] Skipping bill 1630898 - already processed (1764/2564) 2025-11-14 15:34:57,746 [INFO] Skipping bill 1645265 - already processed (1765/2564) 2025-11-14 15:34:57,746 [INFO] Skipping bill 1650459 - already processed (1766/2564) 2025-11-14 15:34:57,747 [INFO] Skipping bill 1645172 - already processed (1767/2564) 2025-11-14 15:34:57,747 [INFO] Skipping bill 1630804 - already processed (1768/2564) 2025-11-14 15:34:57,747 [INFO] Skipping bill 1630761 - already processed (1769/2564) 2025-11-14 15:34:57,747 [INFO] Skipping bill 1652712 - already processed (1770/2564) 2025-11-14 15:34:57,747 [INFO] Skipping bill 1633968 - already processed (1771/2564) 2025-11-14 15:34:57,747 [INFO] Skipping bill 1644865 - already processed (1772/2564) 2025-11-14 15:34:57,747 [INFO] Skipping bill 1645061 - already processed (1773/2564) 2025-11-14 15:34:57,747 [INFO] Skipping bill 1809843 - already processed (1774/2564) 2025-11-14 15:34:57,747 [INFO] Skipping bill 1811981 - already processed (1775/2564) 
2025-11-14 15:34:57,747 [INFO] Skipping bill 1812040 - already processed (1776/2564)
2025-11-14 15:34:57,747 [INFO] Skipping bill 1798563 - already processed (1777/2564)
2025-11-14 15:34:57,748 [INFO] Skipping bill 1807894 - already processed (1778/2564)
2025-11-14 15:34:57,748 [INFO] Skipping bill 1798580 - already processed (1779/2564)
2025-11-14 15:34:57,748 [INFO] Skipping bill 1800951 - already processed (1780/2564)
2025-11-14 15:34:57,748 [INFO] Skipping bill 1808295 - already processed (1781/2564)
2025-11-14 15:34:57,748 [INFO] Skipping bill 1799462 - already processed (1782/2564)
2025-11-14 15:34:57,748 [INFO] Skipping bill 1808024 - already processed (1783/2564)
2025-11-14 15:34:57,748 [INFO] Skipping bill 1807991 - already processed (1784/2564)
2025-11-14 15:34:57,748 [INFO] Skipping bill 1812376 - already processed (1785/2564)
2025-11-14 15:34:57,748 [INFO] Skipping bill 1822475 - already processed (1786/2564)
2025-11-14 15:34:57,748 [INFO] Skipping bill 1811644 - already processed (1787/2564)
2025-11-14 15:34:57,749 [INFO] Skipping bill 1794980 - already processed (1788/2564)
2025-11-14 15:34:57,749 [INFO] Skipping bill 1808264 - already processed (1789/2564)
2025-11-14 15:34:57,749 [INFO] Skipping bill 1801793 - already processed (1790/2564)
2025-11-14 15:34:57,749 [INFO] Skipping bill 1799221 - already processed (1791/2564)
2025-11-14 15:34:57,749 [INFO] Skipping bill 1822208 - already processed (1792/2564)
2025-11-14 15:34:57,749 [INFO] Skipping bill 1800673 - already processed (1793/2564)
2025-11-14 15:34:57,749 [INFO] Skipping bill 1809026 - already processed (1794/2564)
2025-11-14 15:34:57,749 [INFO] Skipping bill 1812182 - already processed (1795/2564)
2025-11-14 15:34:57,749 [INFO] Skipping bill 1886330 - already processed (1796/2564)
2025-11-14 15:34:57,749 [INFO] Skipping bill 1904645 - already processed (1797/2564)
2025-11-14 15:34:57,750 [INFO] Skipping bill 1911036 - already processed (1798/2564)
2025-11-14 15:34:57,750 [INFO] Skipping bill 1904674 - already processed (1799/2564)
2025-11-14 15:34:57,750 [INFO] Skipping bill 1901323 - already processed (1800/2564)
2025-11-14 15:34:57,750 [INFO] Skipping bill 1904347 - already processed (1801/2564)
2025-11-14 15:34:57,750 [INFO] Skipping bill 1925485 - already processed (1802/2564)
2025-11-14 15:34:57,750 [INFO] Skipping bill 1886222 - already processed (1803/2564)
2025-11-14 15:34:57,750 [INFO] Skipping bill 1905613 - already processed (1804/2564)
2025-11-14 15:34:57,750 [INFO] Skipping bill 1912330 - already processed (1805/2564)
2025-11-14 15:34:57,750 [INFO] Skipping bill 1914968 - already processed (1806/2564)
2025-11-14 15:34:57,750 [INFO] Skipping bill 1925408 - already processed (1807/2564)
2025-11-14 15:34:57,750 [INFO] Skipping bill 1886065 - already processed (1808/2564)
2025-11-14 15:34:57,750 [INFO] Skipping bill 1905445 - already processed (1809/2564)
2025-11-14 15:34:57,750 [INFO] Skipping bill 1905965 - already processed (1810/2564)
2025-11-14 15:34:57,750 [INFO] Skipping bill 1886188 - already processed (1811/2564)
2025-11-14 15:34:57,750 [INFO] Skipping bill 1905894 - already processed (1812/2564)
2025-11-14 15:34:57,750 [INFO] Skipping bill 1912145 - already processed (1813/2564)
2025-11-14 15:34:57,750 [INFO] Skipping bill 1927784 - already processed (1814/2564)
2025-11-14 15:34:57,751 [INFO] Skipping bill 1941702 - already processed (1815/2564)
2025-11-14 15:34:57,751 [INFO] Skipping bill 1929947 - already processed (1816/2564)
2025-11-14 15:34:57,751 [INFO] Skipping bill 1905942 - already processed (1817/2564)
2025-11-14 15:34:57,751 [INFO] Skipping bill 1912012 - already processed (1818/2564)
2025-11-14 15:34:57,751 [INFO] Skipping bill 1905698 - already processed (1819/2564)
2025-11-14 15:34:57,751 [INFO] Skipping bill 1886051 - already processed (1820/2564)
2025-11-14 15:34:57,751 [INFO] Skipping bill 1932239 - already processed (1821/2564)
2025-11-14 15:34:57,751 [INFO] Skipping bill 1932502 - already processed (1822/2564)
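The long runs of "Skipping bill" entries reflect the script's resume behavior: any bill whose report already exists in data/bill_reports.json is skipped, so re-running the job is idempotent. A minimal sketch of that pattern follows; the function name `run_with_resume` and its arguments are illustrative, not the actual code in generate_reports.py:

```python
def run_with_resume(bills, existing_reports, process):
    """Process each bill unless a report already exists; return per-bill status.

    bills: iterable of bill IDs to handle this run.
    existing_reports: dict of bill_id -> report, loaded from (and later saved
    back to) the JSON file; skipping anything already present is what makes
    reruns resumable.
    process: callable that generates a report for one bill ID.
    """
    statuses = {}
    total = len(bills)
    for i, bill_id in enumerate(bills, start=1):
        if bill_id in existing_reports:
            # Mirrors the "Skipping bill ... - already processed (i/total)" log lines.
            print(f"Skipping bill {bill_id} - already processed ({i}/{total})")
            statuses[bill_id] = "skipped"
            continue
        try:
            existing_reports[bill_id] = process(bill_id)
            statuses[bill_id] = "ok"
        except Exception as exc:
            # Mirrors the "[ERROR] Failed to generate report for bill ..." lines;
            # one bad bill does not abort the whole run.
            print(f"Failed to generate report for bill {bill_id}: {exc}")
            statuses[bill_id] = "error"
    return statuses
```

Catching exceptions per bill (rather than letting them propagate) is what lets the run above continue past the repeated 400 errors instead of stopping at bill 1851.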
2025-11-14 15:34:57,751 [INFO] Skipping bill 1885937 - already processed (1823/2564)
2025-11-14 15:34:57,751 [INFO] Skipping bill 1900803 - already processed (1824/2564)
2025-11-14 15:34:57,751 [INFO] Skipping bill 1905712 - already processed (1825/2564)
2025-11-14 15:34:57,751 [INFO] Skipping bill 1905995 - already processed (1826/2564)
2025-11-14 15:34:57,751 [INFO] Skipping bill 1902641 - already processed (1827/2564)
2025-11-14 15:34:57,751 [INFO] Skipping bill 1905891 - already processed (1828/2564)
2025-11-14 15:34:57,751 [INFO] Skipping bill 1905860 - already processed (1829/2564)
2025-11-14 15:34:57,751 [INFO] Skipping bill 1908254 - already processed (1830/2564)
2025-11-14 15:34:57,751 [INFO] Skipping bill 1905920 - already processed (1831/2564)
2025-11-14 15:34:57,752 [INFO] Skipping bill 1886241 - already processed (1832/2564)
2025-11-14 15:34:57,752 [INFO] Skipping bill 1886007 - already processed (1833/2564)
2025-11-14 15:34:57,752 [INFO] Skipping bill 1896347 - already processed (1834/2564)
2025-11-14 15:34:57,752 [INFO] Skipping bill 1905982 - already processed (1835/2564)
2025-11-14 15:34:57,752 [INFO] Skipping bill 1898426 - already processed (1836/2564)
2025-11-14 15:34:57,752 [INFO] Skipping bill 1791614 - already processed (1837/2564)
2025-11-14 15:34:57,752 [INFO] Skipping bill 1792210 - already processed (1838/2564)
2025-11-14 15:34:57,752 [INFO] Skipping bill 1825997 - already processed (1839/2564)
2025-11-14 15:34:57,752 [INFO] Skipping bill 1792205 - already processed (1840/2564)
2025-11-14 15:34:57,752 [INFO] Skipping bill 1801141 - already processed (1841/2564)
2025-11-14 15:34:57,752 [INFO] Skipping bill 1796759 - already processed (1842/2564)
2025-11-14 15:34:57,752 [INFO] Skipping bill 1794124 - already processed (1843/2564)
2025-11-14 15:34:57,753 [INFO] Skipping bill 1680711 - already processed (1844/2564)
2025-11-14 15:34:57,753 [INFO] Skipping bill 1686234 - already processed (1845/2564)
2025-11-14 15:34:57,753 [INFO] Skipping bill 1813390 - already processed (1846/2564)
2025-11-14 15:34:57,753 [INFO] Skipping bill 1797745 - already processed (1847/2564)
2025-11-14 15:34:57,753 [INFO] Skipping bill 1810331 - already processed (1848/2564)
2025-11-14 15:34:57,753 [INFO] Skipping bill 1813358 - already processed (1849/2564)
2025-11-14 15:34:57,753 [INFO] Skipping bill 1657734 - already processed (1850/2564)
2025-11-14 15:34:57,753 [INFO] Processing 1851/2564: Bill ID 1644054
2025-11-14 15:34:59,277 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:34:59,280 [ERROR] Failed to generate report for bill 1644054: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 410788 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt([self._convert_input(input)], ...<6 lines>... **kwargs).generations[0][0]
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(m, ...<2 lines>... **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(messages, stop=stop, run_manager=run_manager, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post("/chat/completions", ...<46 lines>... stream_cls=Stream[ChatCompletionChunk])
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 410788 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:35:00,295 [INFO] Processing 1852/2564: Bill ID 1645282
2025-11-14 15:35:01,732 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:35:01,734 [ERROR] Failed to generate report for bill 1645282: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 410770 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 410770 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:35:02,751 [INFO] Processing 1853/2564: Bill ID 1644063 2025-11-14 15:35:03,610 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:35:03,612 [ERROR] Failed to generate report for bill 1644063: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 224071 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 224071 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:35:04,630 [INFO] Processing 1854/2564: Bill ID 1645384 2025-11-14 15:35:05,237 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:35:05,240 [ERROR] Failed to generate report for bill 1645384: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 224065 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 224065 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:35:06,254 [INFO] Processing 1855/2564: Bill ID 1645468 2025-11-14 15:35:07,078 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:35:07,079 [ERROR] Failed to generate report for bill 1645468: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 242533 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 242533 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:35:08,094 [INFO] Processing 1856/2564: Bill ID 1796787 2025-11-14 15:35:09,775 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:35:09,778 [ERROR] Failed to generate report for bill 1796787: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 436514 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 436514 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:35:10,794 [INFO] Processing 1857/2564: Bill ID 1643905 2025-11-14 15:35:11,677 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:35:11,679 [ERROR] Failed to generate report for bill 1643905: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 242552 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 242552 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:35:12,696 [INFO] Processing 1858/2564: Bill ID 1796722
2025-11-14 15:35:14,387 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:35:14,389 [ERROR] Failed to generate report for bill 1796722: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 436532 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:35:15,406 [INFO] Skipping bill 1952329 - already processed (1859/2564)
2025-11-14 15:35:15,407 [INFO] Skipping bill 1964254 - already processed (1860/2564)
2025-11-14 15:35:15,408 [INFO] Skipping bill 1904212 - already processed (1861/2564)
2025-11-14 15:35:15,408 [INFO] Skipping bill 1903879 - already processed (1862/2564)
2025-11-14 15:35:15,408 [INFO] Skipping bill 1930459 - already processed (1863/2564)
2025-11-14 15:35:15,408 [INFO] Skipping bill 1938736 - already processed (1864/2564)
2025-11-14 15:35:15,408 [INFO] Skipping bill 1941657 - already processed (1865/2564)
2025-11-14 15:35:15,408 [INFO] Skipping bill 1932498 - already processed (1866/2564)
2025-11-14 15:35:15,409 [INFO] Skipping bill 1898840 - already processed (1867/2564)
2025-11-14 15:35:15,409 [INFO] Skipping bill 1903962 - already processed (1868/2564)
2025-11-14 15:35:15,409 [INFO] Skipping bill 1943677 - already processed (1869/2564)
2025-11-14 15:35:15,409 [INFO] Skipping bill 1911202 - already processed (1870/2564)
2025-11-14 15:35:15,409 [INFO] Skipping bill
1898343 - already processed (1871/2564)
2025-11-14 15:35:15,409 [INFO] Skipping bill 1930701 - already processed (1872/2564)
2025-11-14 15:35:15,409 [INFO] Skipping bill 1911699 - already processed (1873/2564)
2025-11-14 15:35:15,409 [INFO] Skipping bill 1985707 - already processed (1874/2564)
2025-11-14 15:35:15,410 [INFO] Skipping bill 2025140 - already processed (1875/2564)
2025-11-14 15:35:15,410 [INFO] Processing 1876/2564: Bill ID 1916784
2025-11-14 15:35:16,414 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:35:16,415 [ERROR] Failed to generate report for bill 1916784: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 217357 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:35:17,428 [INFO] Processing 1877/2564: Bill ID 1908012
2025-11-14 15:35:18,855 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:35:18,857 [ERROR] Failed to generate report for bill 1908012: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 458968 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:35:19,874 [INFO] Processing 1878/2564: Bill ID 1907961
2025-11-14 15:35:21,259 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:35:21,262 [ERROR] Failed to generate report for bill 1907961: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 458948 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:35:22,279 [INFO] Processing 1879/2564: Bill ID 1907826
2025-11-14 15:35:23,082 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:35:23,084 [ERROR] Failed to generate report for bill 1907826: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 284007 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:35:24,099 [INFO] Processing 1880/2564: Bill ID 2023840
2025-11-14 15:35:27,248 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:35:27,251 [ERROR] Failed to generate report for bill 2023840: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 709732 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:35:27,309 [INFO] Saved 2564 reports to data/bill_reports.json
2025-11-14 15:35:27,310 [INFO] Progress: 1880/2564 - Processed: 0, Skipped: 1790, Errors: 90
2025-11-14 15:35:28,320 [INFO] Processing 1881/2564: Bill ID 1907778
2025-11-14 15:35:29,480 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:35:29,482 [ERROR] Failed to generate report for bill 1907778: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 284021 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:35:30,500 [INFO] Skipping bill 1691917 - already processed (1882/2564)
2025-11-14 15:35:30,501 [INFO] Skipping bill 1695960 - already processed (1883/2564)
2025-11-14 15:35:30,501 [INFO] Skipping bill 1850601 - already processed (1884/2564)
2025-11-14 15:35:30,502 [INFO] Skipping bill 1838098 - already processed (1885/2564)
2025-11-14 15:35:30,502 [INFO] Skipping bill 1842521 - already processed (1886/2564)
2025-11-14 15:35:30,502 [INFO] Skipping bill 1809518 - already processed (1887/2564)
2025-11-14 15:35:30,502 [INFO] Skipping bill 1839623 - already processed (1888/2564)
2025-11-14 15:35:30,502 [INFO] Skipping bill 1836854 - already processed (1889/2564)
2025-11-14 15:35:30,502 [INFO] Skipping bill 1828203 - already processed (1890/2564)
2025-11-14 15:35:30,502 [INFO] Skipping bill 1823415 - already processed (1891/2564)
2025-11-14 15:35:30,502 [INFO] Processing 1892/2564: Bill ID 1809702
2025-11-14 15:35:31,620 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:35:31,621 [ERROR]
Failed to generate report for bill 1809702: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 287475 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 287475 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:35:32,639 [INFO] Processing 1893/2564: Bill ID 1812739 2025-11-14 15:35:35,065 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:35:35,068 [ERROR] Failed to generate report for bill 1812739: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 287482 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 287482 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:35:36,076 [INFO] Skipping bill 1993190 - already processed (1894/2564) 2025-11-14 15:35:36,080 [INFO] Skipping bill 2009723 - already processed (1895/2564) 2025-11-14 15:35:36,080 [INFO] Skipping bill 1970932 - already processed (1896/2564) 2025-11-14 15:35:36,080 [INFO] Skipping bill 1990795 - already processed (1897/2564) 2025-11-14 15:35:36,080 [INFO] Skipping bill 1966877 - already processed (1898/2564) 2025-11-14 15:35:36,080 [INFO] Skipping bill 1972008 - already processed (1899/2564) 2025-11-14 15:35:36,080 [INFO] Skipping bill 1994548 - already processed (1900/2564) 2025-11-14 15:35:36,080 [INFO] Skipping bill 1991745 - already processed (1901/2564) 2025-11-14 15:35:36,080 [INFO] Skipping bill 2010818 - already processed (1902/2564) 2025-11-14 15:35:36,081 [INFO] Skipping bill 2003316 - already processed (1903/2564) 2025-11-14 15:35:36,081 [INFO] Skipping bill 2021830 - already processed (1904/2564) 2025-11-14 15:35:36,081 [INFO] Skipping bill 2009667 - already processed (1905/2564) 2025-11-14 15:35:36,082 [INFO] Skipping bill 
2011559 - already processed (1906/2564) 2025-11-14 15:35:36,082 [INFO] Skipping bill 1981081 - already processed (1907/2564) 2025-11-14 15:35:36,083 [INFO] Skipping bill 1990559 - already processed (1908/2564) 2025-11-14 15:35:36,083 [INFO] Skipping bill 1968858 - already processed (1909/2564) 2025-11-14 15:35:36,083 [INFO] Skipping bill 1841344 - already processed (1910/2564) 2025-11-14 15:35:36,083 [INFO] Skipping bill 1837111 - already processed (1911/2564) 2025-11-14 15:35:36,083 [INFO] Skipping bill 1783445 - already processed (1912/2564) 2025-11-14 15:35:36,083 [INFO] Skipping bill 1854251 - already processed (1913/2564) 2025-11-14 15:35:36,083 [INFO] Skipping bill 1867071 - already processed (1914/2564) 2025-11-14 15:35:36,083 [INFO] Skipping bill 1782940 - already processed (1915/2564) 2025-11-14 15:35:36,083 [INFO] Skipping bill 1780646 - already processed (1916/2564) 2025-11-14 15:35:36,083 [INFO] Skipping bill 1781005 - already processed (1917/2564) 2025-11-14 15:35:36,083 [INFO] Processing 1918/2564: Bill ID 1709614 2025-11-14 15:35:39,368 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:35:39,369 [ERROR] Failed to generate report for bill 1709614: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 980737 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 980737 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:35:40,387 [INFO] Processing 1919/2564: Bill ID 1709655 2025-11-14 15:35:43,345 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:35:43,347 [ERROR] Failed to generate report for bill 1709655: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 982574 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 982574 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:35:44,363 [INFO] Skipping bill 2034598 - already processed (1920/2564) 2025-11-14 15:35:44,364 [INFO] Skipping bill 2034722 - already processed (1921/2564) 2025-11-14 15:35:44,364 [INFO] Skipping bill 2038518 - already processed (1922/2564) 2025-11-14 15:35:44,364 [INFO] Skipping bill 2039752 - already processed (1923/2564) 2025-11-14 15:35:44,364 [INFO] Skipping bill 2042614 - already processed (1924/2564) 2025-11-14 15:35:44,365 [INFO] Skipping bill 2044087 - already processed (1925/2564) 2025-11-14 15:35:44,365 [INFO] Skipping bill 2045155 - already processed (1926/2564) 2025-11-14 15:35:44,365 [INFO] Skipping bill 2045662 - already processed (1927/2564) 2025-11-14 15:35:44,365 [INFO] Processing 1928/2564: Bill ID 1974122 2025-11-14 15:35:47,456 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:35:47,458 [ERROR] Failed to generate report for bill 1974122: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. 
However, your messages resulted in 1009931 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1009931 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:35:48,476 [INFO] Processing 1929/2564: Bill ID 1974279 2025-11-14 15:35:52,317 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:35:52,320 [ERROR] Failed to generate report for bill 1974279: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1009921 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1009921 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:35:53,333 [INFO] Skipping bill 1842729 - already processed (1930/2564) 2025-11-14 15:35:53,334 [INFO] Skipping bill 1842887 - already processed (1931/2564) 2025-11-14 15:35:53,334 [INFO] Skipping bill 1939111 - already processed (1932/2564) 2025-11-14 15:35:53,334 [INFO] Skipping bill 1895001 - already processed (1933/2564) 2025-11-14 15:35:53,334 [INFO] Skipping bill 1945993 - already processed (1934/2564) 2025-11-14 15:35:53,334 [INFO] Skipping bill 1945813 - already processed (1935/2564) 2025-11-14 15:35:53,334 [INFO] Skipping bill 1774433 - already processed (1936/2564) 2025-11-14 15:35:53,335 [INFO] Skipping bill 1884990 - already processed (1937/2564) 2025-11-14 15:35:53,335 [INFO] Skipping bill 1882572 - already processed (1938/2564) 2025-11-14 15:35:53,335 [INFO] Skipping bill 1784131 - already processed (1939/2564) 2025-11-14 15:35:53,335 [INFO] Skipping bill 1873726 - already processed (1940/2564) 2025-11-14 15:35:53,335 [INFO] Skipping bill 1882205 - already processed (1941/2564) 2025-11-14 15:35:53,335 [INFO] Skipping bill
1860116 - already processed (1942/2564) 2025-11-14 15:35:53,335 [INFO] Skipping bill 1835790 - already processed (1943/2564) 2025-11-14 15:35:53,335 [INFO] Skipping bill 1835624 - already processed (1944/2564) 2025-11-14 15:35:53,335 [INFO] Skipping bill 1876647 - already processed (1945/2564) 2025-11-14 15:35:53,335 [INFO] Skipping bill 1887447 - already processed (1946/2564) 2025-11-14 15:35:53,335 [INFO] Skipping bill 1898165 - already processed (1947/2564) 2025-11-14 15:35:53,336 [INFO] Skipping bill 1780760 - already processed (1948/2564) 2025-11-14 15:35:53,336 [INFO] Skipping bill 1887744 - already processed (1949/2564) 2025-11-14 15:35:53,336 [INFO] Skipping bill 1782128 - already processed (1950/2564) 2025-11-14 15:35:53,336 [INFO] Skipping bill 1887739 - already processed (1951/2564) 2025-11-14 15:35:53,336 [INFO] Skipping bill 1885322 - already processed (1952/2564) 2025-11-14 15:35:53,336 [INFO] Skipping bill 1887646 - already processed (1953/2564) 2025-11-14 15:35:53,336 [INFO] Skipping bill 1897119 - already processed (1954/2564) 2025-11-14 15:35:53,336 [INFO] Skipping bill 1782539 - already processed (1955/2564) 2025-11-14 15:35:53,336 [INFO] Skipping bill 1880117 - already processed (1956/2564) 2025-11-14 15:35:53,336 [INFO] Skipping bill 1810734 - already processed (1957/2564) 2025-11-14 15:35:53,336 [INFO] Skipping bill 1887671 - already processed (1958/2564) 2025-11-14 15:35:53,336 [INFO] Skipping bill 1883053 - already processed (1959/2564) 2025-11-14 15:35:53,336 [INFO] Skipping bill 1861062 - already processed (1960/2564) 2025-11-14 15:35:53,337 [INFO] Skipping bill 1775461 - already processed (1961/2564) 2025-11-14 15:35:53,337 [INFO] Skipping bill 1792331 - already processed (1962/2564) 2025-11-14 15:35:53,337 [INFO] Skipping bill 1765384 - already processed (1963/2564) 2025-11-14 15:35:53,337 [INFO] Skipping bill 1863023 - already processed (1964/2564) 2025-11-14 15:35:53,337 [INFO] Skipping bill 1883034 - already processed (1965/2564) 
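Every 400 in the tracebacks above is the same failure: the serialized bill passed to the chain as `bill_json` exceeds the model's 128,000-token context window (over a million tokens in the worst case). A minimal pre-flight guard could cap the payload before `chain.invoke` is called. This is only a sketch: it assumes the `tiktoken` library approximates the target model's tokenizer, and the name `truncate_to_budget` and the 110,000-token budget are hypothetical choices, not code from `generate_reports.py`.

```python
# Sketch of a pre-flight guard for oversized bill payloads (hypothetical).
# Assumption: tiktoken's cl100k_base encoding approximates the target model's
# tokenizer; the 110_000 budget leaves headroom for the prompt template and
# the model's reply inside a 128k context window.
try:
    import tiktoken

    _enc = tiktoken.get_encoding("cl100k_base")

    def count_tokens(text: str) -> int:
        return len(_enc.encode(text))
except ImportError:
    def count_tokens(text: str) -> int:
        # Rough fallback: ~4 characters per token for English text.
        return len(text) // 4


def truncate_to_budget(bill_json: str, budget: int = 110_000) -> str:
    """Return the longest prefix of bill_json that fits the token budget."""
    if count_tokens(bill_json) <= budget:
        return bill_json
    lo, hi = 0, len(bill_json)
    while lo < hi:  # binary-search the cut point; lo always fits the budget
        mid = (lo + hi + 1) // 2
        if count_tokens(bill_json[:mid]) <= budget:
            lo = mid
        else:
            hi = mid - 1
    return bill_json[:lo]
```

With a guard like this, `chain.invoke({"bill_json": truncate_to_budget(bill_json)})` would degrade to a partial report instead of a hard 400; logging the truncation, or skipping oversized bills outright, are alternative policies.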
2025-11-14 15:35:53,337 [INFO] Skipping bill 1886748 - already processed (1966/2564) 2025-11-14 15:35:53,337 [INFO] Skipping bill 1886756 - already processed (1967/2564) 2025-11-14 15:35:53,337 [INFO] Skipping bill 1885278 - already processed (1968/2564) 2025-11-14 15:35:53,337 [INFO] Skipping bill 1784087 - already processed (1969/2564) 2025-11-14 15:35:53,337 [INFO] Skipping bill 1886439 - already processed (1970/2564) 2025-11-14 15:35:53,337 [INFO] Skipping bill 1877586 - already processed (1971/2564) 2025-11-14 15:35:53,337 [INFO] Skipping bill 1888775 - already processed (1972/2564) 2025-11-14 15:35:53,337 [INFO] Skipping bill 1773844 - already processed (1973/2564) 2025-11-14 15:35:53,337 [INFO] Skipping bill 1857956 - already processed (1974/2564) 2025-11-14 15:35:53,338 [INFO] Skipping bill 1775721 - already processed (1975/2564) 2025-11-14 15:35:53,338 [INFO] Skipping bill 1861016 - already processed (1976/2564) 2025-11-14 15:35:53,338 [INFO] Skipping bill 1884504 - already processed (1977/2564) 2025-11-14 15:35:53,338 [INFO] Skipping bill 1892975 - already processed (1978/2564) 2025-11-14 15:35:53,338 [INFO] Skipping bill 1886714 - already processed (1979/2564) 2025-11-14 15:35:53,338 [INFO] Skipping bill 1877214 - already processed (1980/2564) 2025-11-14 15:35:53,338 [INFO] Skipping bill 1779520 - already processed (1981/2564) 2025-11-14 15:35:53,338 [INFO] Skipping bill 1882161 - already processed (1982/2564) 2025-11-14 15:35:53,338 [INFO] Skipping bill 1793734 - already processed (1983/2564) 2025-11-14 15:35:53,338 [INFO] Skipping bill 1885501 - already processed (1984/2564) 2025-11-14 15:35:53,338 [INFO] Skipping bill 1887169 - already processed (1985/2564) 2025-11-14 15:35:53,338 [INFO] Skipping bill 1877680 - already processed (1986/2564) 2025-11-14 15:35:53,338 [INFO] Skipping bill 1887282 - already processed (1987/2564) 2025-11-14 15:35:53,338 [INFO] Skipping bill 1774766 - already processed (1988/2564) 2025-11-14 15:35:53,338 [INFO] Skipping bill 
1774961 - already processed (1989/2564) 2025-11-14 15:35:53,338 [INFO] Skipping bill 1866654 - already processed (1990/2564) 2025-11-14 15:35:53,338 [INFO] Skipping bill 1779127 - already processed (1991/2564) 2025-11-14 15:35:53,338 [INFO] Skipping bill 1882224 - already processed (1992/2564) 2025-11-14 15:35:53,338 [INFO] Skipping bill 1892198 - already processed (1993/2564) 2025-11-14 15:35:53,338 [INFO] Skipping bill 1759862 - already processed (1994/2564) 2025-11-14 15:35:53,338 [INFO] Skipping bill 1888377 - already processed (1995/2564) 2025-11-14 15:35:53,338 [INFO] Skipping bill 1894701 - already processed (1996/2564) 2025-11-14 15:35:53,338 [INFO] Skipping bill 1864751 - already processed (1997/2564) 2025-11-14 15:35:53,338 [INFO] Skipping bill 1772453 - already processed (1998/2564) 2025-11-14 15:35:53,338 [INFO] Skipping bill 1885309 - already processed (1999/2564) 2025-11-14 15:35:53,338 [INFO] Skipping bill 1886447 - already processed (2000/2564) 2025-11-14 15:35:53,338 [INFO] Skipping bill 1848736 - already processed (2001/2564) 2025-11-14 15:35:53,338 [INFO] Skipping bill 1884301 - already processed (2002/2564) 2025-11-14 15:35:53,338 [INFO] Skipping bill 1881976 - already processed (2003/2564) 2025-11-14 15:35:53,338 [INFO] Skipping bill 1885426 - already processed (2004/2564) 2025-11-14 15:35:53,338 [INFO] Skipping bill 1775334 - already processed (2005/2564) 2025-11-14 15:35:53,338 [INFO] Skipping bill 1884442 - already processed (2006/2564) 2025-11-14 15:35:53,339 [INFO] Skipping bill 1881980 - already processed (2007/2564) 2025-11-14 15:35:53,339 [INFO] Skipping bill 1893238 - already processed (2008/2564) 2025-11-14 15:35:53,339 [INFO] Skipping bill 1865594 - already processed (2009/2564) 2025-11-14 15:35:53,339 [INFO] Skipping bill 1872732 - already processed (2010/2564) 2025-11-14 15:35:53,339 [INFO] Skipping bill 1885341 - already processed (2011/2564) 2025-11-14 15:35:53,339 [INFO] Skipping bill 1764018 - already processed (2012/2564) 
2025-11-14 15:35:53,339 [INFO] Skipping bill 1887315 - already processed (2013/2564) 2025-11-14 15:35:53,339 [INFO] Skipping bill 1751404 - already processed (2014/2564) 2025-11-14 15:35:53,339 [INFO] Skipping bill 1888249 - already processed (2015/2564) 2025-11-14 15:35:53,339 [INFO] Skipping bill 1885249 - already processed (2016/2564) 2025-11-14 15:35:53,339 [INFO] Skipping bill 1881398 - already processed (2017/2564) 2025-11-14 15:35:53,339 [INFO] Skipping bill 1866637 - already processed (2018/2564) 2025-11-14 15:35:53,339 [INFO] Skipping bill 1770194 - already processed (2019/2564) 2025-11-14 15:35:53,339 [INFO] Skipping bill 1775580 - already processed (2020/2564) 2025-11-14 15:35:53,339 [INFO] Skipping bill 1784705 - already processed (2021/2564) 2025-11-14 15:35:53,339 [INFO] Skipping bill 1831382 - already processed (2022/2564) 2025-11-14 15:35:53,339 [INFO] Skipping bill 1885274 - already processed (2023/2564) 2025-11-14 15:35:53,339 [INFO] Skipping bill 1892393 - already processed (2024/2564) 2025-11-14 15:35:53,339 [INFO] Skipping bill 1877691 - already processed (2025/2564) 2025-11-14 15:35:53,339 [INFO] Skipping bill 1776083 - already processed (2026/2564) 2025-11-14 15:35:53,339 [INFO] Skipping bill 1760978 - already processed (2027/2564) 2025-11-14 15:35:53,339 [INFO] Skipping bill 1764682 - already processed (2028/2564) 2025-11-14 15:35:53,339 [INFO] Skipping bill 1880344 - already processed (2029/2564) 2025-11-14 15:35:53,339 [INFO] Skipping bill 1886698 - already processed (2030/2564) 2025-11-14 15:35:53,339 [INFO] Skipping bill 1876488 - already processed (2031/2564) 2025-11-14 15:35:53,339 [INFO] Skipping bill 1765330 - already processed (2032/2564) 2025-11-14 15:35:53,339 [INFO] Skipping bill 1887359 - already processed (2033/2564) 2025-11-14 15:35:53,339 [INFO] Skipping bill 1771744 - already processed (2034/2564) 2025-11-14 15:35:53,339 [INFO] Skipping bill 1831359 - already processed (2035/2564) 2025-11-14 15:35:53,339 [INFO] Skipping bill 
1774102 - already processed (2036/2564) 2025-11-14 15:35:53,339 [INFO] Skipping bill 1774479 - already processed (2037/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1794846 - already processed (2038/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1894867 - already processed (2039/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1774859 - already processed (2040/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1884522 - already processed (2041/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1866979 - already processed (2042/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1886705 - already processed (2043/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1898170 - already processed (2044/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1885330 - already processed (2045/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1792286 - already processed (2046/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1892877 - already processed (2047/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1884177 - already processed (2048/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1774713 - already processed (2049/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1774626 - already processed (2050/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1884513 - already processed (2051/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1887362 - already processed (2052/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1893236 - already processed (2053/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1883668 - already processed (2054/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1831371 - already processed (2055/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1885671 - already processed (2056/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1885535 - already processed (2057/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1888766 - already processed (2058/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1892506 - already processed (2059/2564) 
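The skip/save/progress lines trace a standard resume pattern: load the existing report map from `data/bill_reports.json`, skip any bill whose ID is already present, and checkpoint periodically so completed work survives a crash or a 400 error like the one above. A sketch of that loop, reconstructed from the log output (the signature, field names, and `save_every` parameter are assumptions; the real `create_reports_with_resume` in `generate_reports.py` may differ):

```python
# Hypothetical reconstruction of the resume loop evidenced by this log.
import json
import logging
from pathlib import Path

log = logging.getLogger(__name__)


def create_reports_with_resume(bills, make_report,
                               path="data/bill_reports.json", save_every=100):
    # Load prior results so already-processed bills can be skipped.
    reports = json.loads(Path(path).read_text()) if Path(path).exists() else {}
    log.info("Loaded %d existing reports from %s", len(reports), path)
    processed = skipped = errors = 0
    for i, bill in enumerate(bills, 1):
        bill_id = str(bill["bill_id"])
        if bill_id in reports:
            skipped += 1
            log.info("Skipping bill %s - already processed (%d/%d)",
                     bill_id, i, len(bills))
            continue
        try:
            log.info("Processing %d/%d: Bill ID %s", i, len(bills), bill_id)
            reports[bill_id] = make_report(bill)
            processed += 1
        except Exception as exc:  # keep going; record the failure
            errors += 1
            log.error("Failed to generate report for bill %s: %s", bill_id, exc)
        if i % save_every == 0:  # periodic checkpoint
            Path(path).write_text(json.dumps(reports))
            log.info("Progress: %d/%d - Processed: %d, Skipped: %d, Errors: %d",
                     i, len(bills), processed, skipped, errors)
    Path(path).write_text(json.dumps(reports))
    log.info("Saved %d reports to %s", len(reports), path)
    return reports
```

Note that under this pattern a bill that failed with `context_length_exceeded` is never written to the report map, so each rerun retries it and fails again, which matches the repeated errors in this log.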
2025-11-14 15:35:53,340 [INFO] Skipping bill 1892532 - already processed (2060/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1878820 - already processed (2061/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1884926 - already processed (2062/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1895881 - already processed (2063/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1778284 - already processed (2064/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1770920 - already processed (2065/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1650801 - already processed (2066/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1883378 - already processed (2067/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1683970 - already processed (2068/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1772792 - already processed (2069/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1759623 - already processed (2070/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1760525 - already processed (2071/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1862531 - already processed (2072/2564) 2025-11-14 15:35:53,340 [INFO] Skipping bill 1767461 - already processed (2073/2564) 2025-11-14 15:35:53,341 [INFO] Skipping bill 1776485 - already processed (2074/2564) 2025-11-14 15:35:53,341 [INFO] Skipping bill 1871231 - already processed (2075/2564) 2025-11-14 15:35:53,341 [INFO] Skipping bill 1887711 - already processed (2076/2564) 2025-11-14 15:35:53,341 [INFO] Skipping bill 1893243 - already processed (2077/2564) 2025-11-14 15:35:53,341 [INFO] Skipping bill 1701254 - already processed (2078/2564) 2025-11-14 15:35:53,341 [INFO] Skipping bill 1897456 - already processed (2079/2564) 2025-11-14 15:35:53,341 [INFO] Skipping bill 1775615 - already processed (2080/2564) 2025-11-14 15:35:53,341 [INFO] Skipping bill 1794843 - already processed (2081/2564) 2025-11-14 15:35:53,341 [INFO] Skipping bill 1810720 - already processed (2082/2564) 2025-11-14 15:35:53,341 [INFO] Skipping bill 
1894308 - already processed (2083/2564) 2025-11-14 15:35:53,341 [INFO] Skipping bill 1894683 - already processed (2084/2564) 2025-11-14 15:35:53,341 [INFO] Skipping bill 1842456 - already processed (2085/2564) 2025-11-14 15:35:53,341 [INFO] Skipping bill 1885281 - already processed (2086/2564) 2025-11-14 15:35:53,341 [INFO] Skipping bill 1759897 - already processed (2087/2564) 2025-11-14 15:35:53,341 [INFO] Skipping bill 1860079 - already processed (2088/2564) 2025-11-14 15:35:53,341 [INFO] Skipping bill 1746098 - already processed (2089/2564) 2025-11-14 15:35:53,341 [INFO] Skipping bill 1897489 - already processed (2090/2564) 2025-11-14 15:35:53,341 [INFO] Skipping bill 1887287 - already processed (2091/2564) 2025-11-14 15:35:53,341 [INFO] Skipping bill 1885252 - already processed (2092/2564) 2025-11-14 15:35:53,341 [INFO] Skipping bill 1892936 - already processed (2093/2564) 2025-11-14 15:35:53,341 [INFO] Skipping bill 1732925 - already processed (2094/2564) 2025-11-14 15:35:53,341 [INFO] Skipping bill 1746069 - already processed (2095/2564) 2025-11-14 15:35:53,341 [INFO] Skipping bill 1774408 - already processed (2096/2564) 2025-11-14 15:35:53,341 [INFO] Skipping bill 1772182 - already processed (2097/2564) 2025-11-14 15:35:53,341 [INFO] Skipping bill 1884422 - already processed (2098/2564) 2025-11-14 15:35:53,341 [INFO] Skipping bill 1687118 - already processed (2099/2564) 2025-11-14 15:35:53,341 [INFO] Skipping bill 1784726 - already processed (2100/2564) 2025-11-14 15:35:53,341 [INFO] Skipping bill 1762912 - already processed (2101/2564) 2025-11-14 15:35:53,341 [INFO] Skipping bill 1898405 - already processed (2102/2564)
2025-11-14 15:35:53,341 [INFO] Processing 2103/2564: Bill ID 1884189
2025-11-14 15:35:55,588 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:35:55,590 [ERROR] Failed to generate report for bill 1884189: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 553725 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 553725 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:35:56,607 [INFO] Skipping bill 1899847 - already processed (2104/2564) 2025-11-14 15:35:56,608 [INFO] Skipping bill 1732984 - already processed (2105/2564) 2025-11-14 15:35:56,608 [INFO] Skipping bill 1746089 - already processed (2106/2564) 2025-11-14 15:35:56,608 [INFO] Skipping bill 1766726 - already processed (2107/2564) 2025-11-14 15:35:56,608 [INFO] Skipping bill 1769804 - already processed (2108/2564) 2025-11-14 15:35:56,608 [INFO] Skipping bill 1897097 - already processed (2109/2564)
2025-11-14 15:35:56,608 [INFO] Processing 2110/2564: Bill ID 1774177
2025-11-14 15:35:58,254 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:35:58,256 [ERROR] Failed to generate report for bill 1774177: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 563143 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 563143 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:35:58,317 [INFO] Saved 2564 reports to data/bill_reports.json
2025-11-14 15:35:58,318 [INFO] Progress: 2110/2564 - Processed: 0, Skipped: 2011, Errors: 99
2025-11-14 15:35:59,328 [INFO] Skipping bill 1757049 - already processed (2111/2564) 2025-11-14 15:35:59,329 [INFO] Skipping bill 1784298 - already processed (2112/2564) 2025-11-14 15:35:59,329 [INFO] Skipping bill 1785108 - already processed (2113/2564) 2025-11-14 15:35:59,329 [INFO] Skipping bill 1772128 - already processed (2114/2564) 2025-11-14 15:35:59,329 [INFO] Skipping bill 1879910 - already processed (2115/2564) 2025-11-14 15:35:59,329 [INFO] Skipping bill 1777717 - already processed (2116/2564) 2025-11-14 15:35:59,329 [INFO] Skipping bill 1843401 - already processed (2117/2564) 2025-11-14 15:35:59,329 [INFO] Skipping bill 1774203 - already processed (2118/2564) 2025-11-14 15:35:59,330 [INFO] Skipping bill 1892268 - already processed (2119/2564) 2025-11-14 15:35:59,330 [INFO] Skipping bill 1774216 - already processed (2120/2564) 2025-11-14 15:35:59,330 [INFO] Skipping bill 1868870
- already processed (2121/2564) 2025-11-14 15:35:59,330 [INFO] Skipping bill 1770792 - already processed (2122/2564) 2025-11-14 15:35:59,330 [INFO] Skipping bill 1894823 - already processed (2123/2564) 2025-11-14 15:35:59,330 [INFO] Skipping bill 1885629 - already processed (2124/2564) 2025-11-14 15:35:59,330 [INFO] Skipping bill 1866980 - already processed (2125/2564) 2025-11-14 15:35:59,330 [INFO] Skipping bill 1826236 - already processed (2126/2564) 2025-11-14 15:35:59,331 [INFO] Skipping bill 1860115 - already processed (2127/2564) 2025-11-14 15:35:59,331 [INFO] Skipping bill 1767424 - already processed (2128/2564) 2025-11-14 15:35:59,331 [INFO] Skipping bill 1877069 - already processed (2129/2564) 2025-11-14 15:35:59,331 [INFO] Skipping bill 1865576 - already processed (2130/2564) 2025-11-14 15:35:59,331 [INFO] Skipping bill 1771076 - already processed (2131/2564) 2025-11-14 15:35:59,331 [INFO] Skipping bill 1755580 - already processed (2132/2564) 2025-11-14 15:35:59,331 [INFO] Skipping bill 1885029 - already processed (2133/2564) 2025-11-14 15:35:59,331 [INFO] Skipping bill 1770955 - already processed (2134/2564) 2025-11-14 15:35:59,331 [INFO] Skipping bill 1772617 - already processed (2135/2564) 2025-11-14 15:35:59,331 [INFO] Skipping bill 1760193 - already processed (2136/2564) 2025-11-14 15:35:59,331 [INFO] Skipping bill 1871212 - already processed (2137/2564) 2025-11-14 15:35:59,332 [INFO] Skipping bill 1887934 - already processed (2138/2564) 2025-11-14 15:35:59,332 [INFO] Skipping bill 1879177 - already processed (2139/2564) 2025-11-14 15:35:59,332 [INFO] Skipping bill 1897536 - already processed (2140/2564) 2025-11-14 15:35:59,332 [INFO] Skipping bill 1854133 - already processed (2141/2564) 2025-11-14 15:35:59,332 [INFO] Skipping bill 1761508 - already processed (2142/2564) 2025-11-14 15:35:59,332 [INFO] Skipping bill 1777284 - already processed (2143/2564) 2025-11-14 15:35:59,332 [INFO] Skipping bill 1774079 - already processed (2144/2564) 2025-11-14 
15:35:59,332 [INFO] Skipping bill 1896271 - already processed (2145/2564)
2025-11-14 15:35:59,332 [INFO] Skipping bill 1897312 - already processed (2146/2564)
2025-11-14 15:35:59,332 [INFO] Skipping bill 1774750 - already processed (2147/2564)
2025-11-14 15:35:59,333 [INFO] Skipping bill 1873661 - already processed (2148/2564)
2025-11-14 15:35:59,333 [INFO] Skipping bill 1782516 - already processed (2149/2564)
2025-11-14 15:35:59,334 [INFO] Skipping bill 1782446 - already processed (2150/2564)
2025-11-14 15:35:59,334 [INFO] Skipping bill 1866649 - already processed (2151/2564)
2025-11-14 15:35:59,334 [INFO] Skipping bill 1866664 - already processed (2152/2564)
2025-11-14 15:35:59,334 [INFO] Skipping bill 1707867 - already processed (2153/2564)
2025-11-14 15:35:59,334 [INFO] Skipping bill 1872167 - already processed (2154/2564)
2025-11-14 15:35:59,334 [INFO] Skipping bill 1759875 - already processed (2155/2564)
2025-11-14 15:35:59,334 [INFO] Skipping bill 1789214 - already processed (2156/2564)
2025-11-14 15:35:59,334 [INFO] Skipping bill 1872153 - already processed (2157/2564)
2025-11-14 15:35:59,335 [INFO] Skipping bill 1760229 - already processed (2158/2564)
2025-11-14 15:35:59,335 [INFO] Skipping bill 1774942 - already processed (2159/2564)
2025-11-14 15:35:59,335 [INFO] Skipping bill 1694059 - already processed (2160/2564)
2025-11-14 15:35:59,335 [INFO] Skipping bill 1829219 - already processed (2161/2564)
2025-11-14 15:35:59,335 [INFO] Skipping bill 1679271 - already processed (2162/2564)
2025-11-14 15:35:59,336 [INFO] Skipping bill 1883365 - already processed (2163/2564)
2025-11-14 15:35:59,336 [INFO] Skipping bill 1780777 - already processed (2164/2564)
2025-11-14 15:35:59,336 [INFO] Skipping bill 1707919 - already processed (2165/2564)
2025-11-14 15:35:59,336 [INFO] Skipping bill 1860113 - already processed (2166/2564)
2025-11-14 15:35:59,336 [INFO] Skipping bill 1781933 - already processed (2167/2564)
2025-11-14 15:35:59,336 [INFO] Skipping bill 1751388 - already processed (2168/2564)
2025-11-14 15:35:59,336 [INFO] Skipping bill 1754500 - already processed (2169/2564)
2025-11-14 15:35:59,336 [INFO] Skipping bill 1772123 - already processed (2170/2564)
2025-11-14 15:35:59,336 [INFO] Skipping bill 1892924 - already processed (2171/2564)
2025-11-14 15:35:59,337 [INFO] Skipping bill 1778422 - already processed (2172/2564)
2025-11-14 15:35:59,337 [INFO] Skipping bill 1897294 - already processed (2173/2564)
2025-11-14 15:35:59,337 [INFO] Skipping bill 1769557 - already processed (2174/2564)
2025-11-14 15:35:59,337 [INFO] Skipping bill 1747003 - already processed (2175/2564)
2025-11-14 15:35:59,337 [INFO] Skipping bill 1775420 - already processed (2176/2564)
2025-11-14 15:35:59,337 [INFO] Skipping bill 1885460 - already processed (2177/2564)
2025-11-14 15:35:59,337 [INFO] Skipping bill 1778494 - already processed (2178/2564)
2025-11-14 15:35:59,338 [INFO] Skipping bill 1778507 - already processed (2179/2564)
2025-11-14 15:35:59,338 [INFO] Skipping bill 1746072 - already processed (2180/2564)
2025-11-14 15:35:59,338 [INFO] Skipping bill 1747808 - already processed (2181/2564)
2025-11-14 15:35:59,338 [INFO] Skipping bill 1764055 - already processed (2182/2564)
2025-11-14 15:35:59,338 [INFO] Skipping bill 1765960 - already processed (2183/2564)
2025-11-14 15:35:59,338 [INFO] Skipping bill 1766587 - already processed (2184/2564)
2025-11-14 15:35:59,338 [INFO] Skipping bill 1766736 - already processed (2185/2564)
2025-11-14 15:35:59,338 [INFO] Skipping bill 1771518 - already processed (2186/2564)
2025-11-14 15:35:59,338 [INFO] Skipping bill 1772577 - already processed (2187/2564)
2025-11-14 15:35:59,338 [INFO] Skipping bill 1772933 - already processed (2188/2564)
2025-11-14 15:35:59,338 [INFO] Skipping bill 1773303 - already processed (2189/2564)
2025-11-14 15:35:59,339 [INFO] Skipping bill 1775354 - already processed (2190/2564)
2025-11-14 15:35:59,339 [INFO] Skipping bill 1777649 - already processed (2191/2564)
2025-11-14 15:35:59,339 [INFO] Skipping bill 1783786 - already processed (2192/2564)
2025-11-14 15:35:59,339 [INFO] Skipping bill 1783927 - already processed (2193/2564)
2025-11-14 15:35:59,339 [INFO] Skipping bill 1791735 - already processed (2194/2564)
2025-11-14 15:35:59,339 [INFO] Skipping bill 1791984 - already processed (2195/2564)
2025-11-14 15:35:59,340 [INFO] Skipping bill 1860914 - already processed (2196/2564)
2025-11-14 15:35:59,340 [INFO] Skipping bill 1874964 - already processed (2197/2564)
2025-11-14 15:35:59,340 [INFO] Skipping bill 1876702 - already processed (2198/2564)
2025-11-14 15:35:59,340 [INFO] Skipping bill 1878298 - already processed (2199/2564)
2025-11-14 15:35:59,340 [INFO] Skipping bill 1878970 - already processed (2200/2564)
2025-11-14 15:35:59,340 [INFO] Skipping bill 1878883 - already processed (2201/2564)
2025-11-14 15:35:59,340 [INFO] Skipping bill 1880262 - already processed (2202/2564)
2025-11-14 15:35:59,340 [INFO] Skipping bill 1880301 - already processed (2203/2564)
2025-11-14 15:35:59,340 [INFO] Skipping bill 1880312 - already processed (2204/2564)
2025-11-14 15:35:59,340 [INFO] Skipping bill 1882770 - already processed (2205/2564)
2025-11-14 15:35:59,340 [INFO] Skipping bill 1889897 - already processed (2206/2564)
2025-11-14 15:35:59,340 [INFO] Skipping bill 1892711 - already processed (2207/2564)
2025-11-14 15:35:59,340 [INFO] Skipping bill 1897258 - already processed (2208/2564)
2025-11-14 15:35:59,340 [INFO] Skipping bill 1881528 - already processed (2209/2564)
2025-11-14 15:35:59,340 [INFO] Skipping bill 1782893 - already processed (2210/2564)
2025-11-14 15:35:59,340 [INFO] Skipping bill 1834554 - already processed (2211/2564)
2025-11-14 15:35:59,340 [INFO] Skipping bill 1774082 - already processed (2212/2564)
2025-11-14 15:35:59,340 [INFO] Skipping bill 1783631 - already processed (2213/2564)
2025-11-14 15:35:59,340 [INFO] Skipping bill 1879351 - already processed (2214/2564)
2025-11-14 15:35:59,340 [INFO] Skipping bill 1707921 - already processed (2215/2564)
2025-11-14 15:35:59,340 [INFO] Skipping bill 1872751 - already processed (2216/2564)
2025-11-14 15:35:59,340 [INFO] Skipping bill 1848738 - already processed (2217/2564)
2025-11-14 15:35:59,341 [INFO] Skipping bill 1882577 - already processed (2218/2564)
2025-11-14 15:35:59,341 [INFO] Skipping bill 1880072 - already processed (2219/2564)
2025-11-14 15:35:59,341 [INFO] Skipping bill 1880345 - already processed (2220/2564)
2025-11-14 15:35:59,341 [INFO] Skipping bill 1892804 - already processed (2221/2564)
2025-11-14 15:35:59,341 [INFO] Skipping bill 1860940 - already processed (2222/2564)
2025-11-14 15:35:59,341 [INFO] Skipping bill 1766003 - already processed (2223/2564)
2025-11-14 15:35:59,341 [INFO] Skipping bill 1775441 - already processed (2224/2564)
2025-11-14 15:35:59,341 [INFO] Skipping bill 1758619 - already processed (2225/2564)
2025-11-14 15:35:59,341 [INFO] Skipping bill 1894461 - already processed (2226/2564)
2025-11-14 15:35:59,341 [INFO] Skipping bill 1778171 - already processed (2227/2564)
2025-11-14 15:35:59,341 [INFO] Skipping bill 1778004 - already processed (2228/2564)
2025-11-14 15:35:59,341 [INFO] Skipping bill 1832839 - already processed (2229/2564)
2025-11-14 15:35:59,341 [INFO] Skipping bill 1774844 - already processed (2230/2564)
2025-11-14 15:35:59,341 [INFO] Skipping bill 1751449 - already processed (2231/2564)
2025-11-14 15:35:59,341 [INFO] Skipping bill 1751346 - already processed (2232/2564)
2025-11-14 15:35:59,341 [INFO] Skipping bill 1759080 - already processed (2233/2564)
2025-11-14 15:35:59,341 [INFO] Skipping bill 1882756 - already processed (2234/2564)
2025-11-14 15:35:59,341 [INFO] Skipping bill 1882766 - already processed (2235/2564)
2025-11-14 15:35:59,341 [INFO] Skipping bill 1887196 - already processed (2236/2564)
2025-11-14 15:35:59,341 [INFO] Skipping bill 1889949 - already processed (2237/2564)
2025-11-14 15:35:59,341 [INFO] Skipping bill 1887718 - already processed (2238/2564)
2025-11-14 15:35:59,341 [INFO] Skipping bill 1896232 - already processed (2239/2564)
2025-11-14 15:35:59,342 [INFO] Skipping bill 1783562 - already processed (2240/2564)
2025-11-14 15:35:59,342 [INFO] Skipping bill 1681772 - already processed (2241/2564)
2025-11-14 15:35:59,342 [INFO] Skipping bill 1871711 - already processed (2242/2564)
2025-11-14 15:35:59,342 [INFO] Skipping bill 1874986 - already processed (2243/2564)
2025-11-14 15:35:59,342 [INFO] Skipping bill 1772204 - already processed (2244/2564)
2025-11-14 15:35:59,342 [INFO] Skipping bill 1884912 - already processed (2245/2564)
2025-11-14 15:35:59,342 [INFO] Skipping bill 1888175 - already processed (2246/2564)
2025-11-14 15:35:59,342 [INFO] Skipping bill 1832721 - already processed (2247/2564)
2025-11-14 15:35:59,342 [INFO] Skipping bill 1887649 - already processed (2248/2564)
2025-11-14 15:35:59,342 [INFO] Skipping bill 1887704 - already processed (2249/2564)
2025-11-14 15:35:59,342 [INFO] Skipping bill 1881672 - already processed (2250/2564)
2025-11-14 15:35:59,342 [INFO] Skipping bill 1777454 - already processed (2251/2564)
2025-11-14 15:35:59,342 [INFO] Skipping bill 1882397 - already processed (2252/2564)
2025-11-14 15:35:59,342 [INFO] Skipping bill 1766671 - already processed (2253/2564)
2025-11-14 15:35:59,342 [INFO] Skipping bill 1775036 - already processed (2254/2564)
2025-11-14 15:35:59,342 [INFO] Skipping bill 1694305 - already processed (2255/2564)
2025-11-14 15:35:59,342 [INFO] Skipping bill 1863407 - already processed (2256/2564)
2025-11-14 15:35:59,342 [INFO] Skipping bill 1746051 - already processed (2257/2564)
2025-11-14 15:35:59,342 [INFO] Skipping bill 1882537 - already processed (2258/2564)
2025-11-14 15:35:59,342 [INFO] Skipping bill 1873551 - already processed (2259/2564)
2025-11-14 15:35:59,342 [INFO] Skipping bill 1762960 - already processed (2260/2564)
2025-11-14 15:35:59,342 [INFO] Skipping bill 1887303 - already processed (2261/2564)
2025-11-14 15:35:59,342 [INFO] Skipping bill 1887118 - already processed (2262/2564)
2025-11-14 15:35:59,343 [INFO] Skipping bill 1775679 - already processed (2263/2564)
2025-11-14 15:35:59,343 [INFO] Skipping bill 1882373 - already processed (2264/2564)
2025-11-14 15:35:59,343 [INFO] Skipping bill 1862520 - already processed (2265/2564)
2025-11-14 15:35:59,343 [INFO] Skipping bill 1886817 - already processed (2266/2564)
2025-11-14 15:35:59,343 [INFO] Skipping bill 1750558 - already processed (2267/2564)
2025-11-14 15:35:59,343 [INFO] Skipping bill 1750336 - already processed (2268/2564)
2025-11-14 15:35:59,343 [INFO] Skipping bill 1694173 - already processed (2269/2564)
2025-11-14 15:35:59,343 [INFO] Skipping bill 1864746 - already processed (2270/2564)
2025-11-14 15:35:59,343 [INFO] Skipping bill 1887915 - already processed (2271/2564)
2025-11-14 15:35:59,343 [INFO] Skipping bill 1774093 - already processed (2272/2564)
2025-11-14 15:35:59,343 [INFO] Skipping bill 1650659 - already processed (2273/2564)
2025-11-14 15:35:59,343 [INFO] Skipping bill 1694050 - already processed (2274/2564)
2025-11-14 15:35:59,343 [INFO] Skipping bill 1771092 - already processed (2275/2564)
2025-11-14 15:35:59,343 [INFO] Skipping bill 1876599 - already processed (2276/2564)
2025-11-14 15:35:59,343 [INFO] Skipping bill 1835788 - already processed (2277/2564)
2025-11-14 15:35:59,343 [INFO] Skipping bill 1782691 - already processed (2278/2564)
2025-11-14 15:35:59,343 [INFO] Skipping bill 1876668 - already processed (2279/2564)
2025-11-14 15:35:59,343 [INFO] Skipping bill 1729737 - already processed (2280/2564)
2025-11-14 15:35:59,343 [INFO] Skipping bill 1766627 - already processed (2281/2564)
2025-11-14 15:35:59,343 [INFO] Skipping bill 1885388 - already processed (2282/2564)
2025-11-14 15:35:59,343 [INFO] Skipping bill 1887130 - already processed (2283/2564)
2025-11-14 15:35:59,343 [INFO] Skipping bill 1775597 - already processed (2284/2564)
2025-11-14 15:35:59,344 [INFO] Skipping bill 1793999 - already processed (2285/2564)
2025-11-14 15:35:59,344 [INFO] Skipping bill 1789198 - already processed (2286/2564)
2025-11-14 15:35:59,344 [INFO] Skipping bill 1888330 - already processed (2287/2564)
2025-11-14 15:35:59,344 [INFO] Skipping bill 1882746 - already processed (2288/2564)
2025-11-14 15:35:59,344 [INFO] Skipping bill 1694182 - already processed (2289/2564)
2025-11-14 15:35:59,344 [INFO] Skipping bill 1860920 - already processed (2290/2564)
2025-11-14 15:35:59,344 [INFO] Skipping bill 1774448 - already processed (2291/2564)
2025-11-14 15:35:59,344 [INFO] Skipping bill 1774405 - already processed (2292/2564)
2025-11-14 15:35:59,344 [INFO] Skipping bill 1876990 - already processed (2293/2564)
2025-11-14 15:35:59,344 [INFO] Skipping bill 1876679 - already processed (2294/2564)
2025-11-14 15:35:59,344 [INFO] Skipping bill 1881973 - already processed (2295/2564)
2025-11-14 15:35:59,344 [INFO] Skipping bill 1717622 - already processed (2296/2564)
2025-11-14 15:35:59,344 [INFO] Skipping bill 1885510 - already processed (2297/2564)
2025-11-14 15:35:59,344 [INFO] Skipping bill 1871269 - already processed (2298/2564)
2025-11-14 15:35:59,344 [INFO] Skipping bill 1774266 - already processed (2299/2564)
2025-11-14 15:35:59,344 [INFO] Skipping bill 1785924 - already processed (2300/2564)
2025-11-14 15:35:59,344 [INFO] Skipping bill 1779428 - already processed (2301/2564)
2025-11-14 15:35:59,344 [INFO] Skipping bill 1775195 - already processed (2302/2564)
2025-11-14 15:35:59,344 [INFO] Skipping bill 1775134 - already processed (2303/2564)
2025-11-14 15:35:59,344 [INFO] Skipping bill 1743524 - already processed (2304/2564)
2025-11-14 15:35:59,344 [INFO] Skipping bill 1757473 - already processed (2305/2564)
2025-11-14 15:35:59,345 [INFO] Processing 2306/2564: Bill ID 1857970
2025-11-14 15:36:00,254 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:36:00,258 [ERROR] Failed to generate report for bill 1857970: Error code: 400 - {'error': {'message': "This
model's maximum context length is 128000 tokens. However, your messages resulted in 267230 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(messages, stop=stop, run_manager=run_manager, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 267230 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:36:01,273 [INFO] Skipping bill 1883678 - already processed (2307/2564)
2025-11-14 15:36:01,274 [INFO] Processing 2308/2564: Bill ID 1897245
2025-11-14 15:36:02,953 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:36:02,956 [ERROR] Failed to generate report for bill 1897245: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 614802 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:36:03,975 [INFO] Skipping bill 1894517 - already processed (2309/2564)
2025-11-14 15:36:03,975 [INFO] Processing 2310/2564: Bill ID 1898241
2025-11-14 15:36:05,037 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:36:05,039 [ERROR] Failed to generate report for bill 1898241: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 355244 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:36:05,101 [INFO] Saved 2564 reports to data/bill_reports.json
2025-11-14 15:36:05,101 [INFO] Progress: 2310/2564 - Processed: 0, Skipped: 2208, Errors: 102
2025-11-14 15:36:06,112 [INFO] Processing 2311/2564: Bill ID 1879854
2025-11-14 15:36:08,031 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:36:08,033 [ERROR] Failed to generate report for bill 1879854: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 380288 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:36:09,048 [INFO] Skipping bill 1888278 - already processed (2312/2564)
2025-11-14 15:36:09,049 [INFO] Skipping bill 1879169 - already processed (2313/2564)
2025-11-14 15:36:09,049 [INFO] Skipping bill 1860989 - already processed (2314/2564)
2025-11-14 15:36:09,049 [INFO] Skipping bill 1758024 - already processed (2315/2564)
2025-11-14 15:36:09,050 [INFO] Skipping bill 1863932 - already processed (2316/2564)
2025-11-14 15:36:09,050 [INFO] Processing 2317/2564: Bill ID 1771174
2025-11-14 15:36:10,293 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:36:10,294 [ERROR] Failed to generate report for bill 1771174: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 305590 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:36:11,311 [INFO] Skipping bill 1772600 - already processed (2318/2564)
2025-11-14 15:36:11,312 [INFO] Skipping bill 1760911 - already processed (2319/2564)
2025-11-14 15:36:11,312 [INFO] Skipping bill 1789291 - already processed (2320/2564)
2025-11-14 15:36:11,312 [INFO] Skipping bill 1764694 - already processed (2321/2564)
2025-11-14 15:36:11,312 [INFO] Skipping bill 1764770 - already processed (2322/2564)
2025-11-14 15:36:11,312 [INFO] Skipping bill 1884949 - already processed (2323/2564)
2025-11-14 15:36:11,313 [INFO] Processing 2324/2564: Bill ID 1897528
2025-11-14 15:36:12,246 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:36:12,247 [ERROR] Failed to generate report for bill 1897528: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 136190 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:36:13,263 [INFO] Processing 2325/2564: Bill ID 1898192
2025-11-14 15:36:13,726 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:36:13,729 [ERROR] Failed to generate report for bill 1898192: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 134736 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:36:14,745 [INFO] Skipping bill 1774988 - already processed (2326/2564)
2025-11-14 15:36:14,746 [INFO] Processing 2327/2564: Bill ID 1892419
2025-11-14 15:36:16,822 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:36:16,824 [ERROR] Failed to generate report for bill 1892419: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 553296 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:36:17,841 [INFO] Processing 2328/2564: Bill ID 1884946
2025-11-14 15:36:20,141 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:36:20,144 [ERROR] Failed to generate report for bill 1884946: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 691025 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:36:21,161 [INFO] Processing 2329/2564: Bill ID 1885067
2025-11-14 15:36:23,161 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:36:23,164 [ERROR] Failed to generate report for bill 1885067: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 693396 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:36:24,177 [INFO] Skipping bill 1879669 - already processed (2330/2564)
2025-11-14 15:36:24,178 [INFO] Processing 2331/2564: Bill ID 1897089
2025-11-14 15:36:24,841 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:36:24,842 [ERROR] Failed to generate report for bill 1897089: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 228560 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 228560 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:36:25,854 [INFO] Skipping bill 2041135 - already processed (2332/2564) 2025-11-14 15:36:25,855 [INFO] Skipping bill 2037217 - already processed (2333/2564) 2025-11-14 15:36:25,855 [INFO] Skipping bill 2022578 - already processed (2334/2564) 2025-11-14 15:36:25,855 [INFO] Skipping bill 2045360 - already processed (2335/2564) 2025-11-14 15:36:25,856 [INFO] Skipping bill 2044380 - already processed (2336/2564) 2025-11-14 15:36:25,856 [INFO] Skipping bill 2040591 - already processed (2337/2564) 2025-11-14 15:36:25,856 [INFO] Skipping bill 2044133 - already processed (2338/2564) 2025-11-14 15:36:25,856 [INFO] Skipping bill 2040128 - already processed (2339/2564) 2025-11-14 15:36:25,856 [INFO] Skipping bill 2022459 - already processed (2340/2564) 2025-11-14 15:36:25,856 [INFO] Skipping bill 2046890 - already processed (2341/2564) 2025-11-14 15:36:25,856 [INFO] Skipping bill 1948171 - already processed (2342/2564) 2025-11-14 15:36:25,856 [INFO] Skipping bill 2029224 - already processed (2343/2564) 2025-11-14 15:36:25,856 [INFO] Skipping bill 
2044676 - already processed (2344/2564) 2025-11-14 15:36:25,857 [INFO] Skipping bill 2041169 - already processed (2345/2564) 2025-11-14 15:36:25,857 [INFO] Skipping bill 2015628 - already processed (2346/2564) 2025-11-14 15:36:25,857 [INFO] Skipping bill 2029917 - already processed (2347/2564) 2025-11-14 15:36:25,857 [INFO] Skipping bill 2029601 - already processed (2348/2564) 2025-11-14 15:36:25,857 [INFO] Skipping bill 1988067 - already processed (2349/2564) 2025-11-14 15:36:25,857 [INFO] Skipping bill 1964814 - already processed (2350/2564) 2025-11-14 15:36:25,857 [INFO] Skipping bill 2043727 - already processed (2351/2564) 2025-11-14 15:36:25,857 [INFO] Skipping bill 1988016 - already processed (2352/2564) 2025-11-14 15:36:25,857 [INFO] Skipping bill 2037684 - already processed (2353/2564) 2025-11-14 15:36:25,857 [INFO] Skipping bill 2029576 - already processed (2354/2564) 2025-11-14 15:36:25,857 [INFO] Skipping bill 2043072 - already processed (2355/2564) 2025-11-14 15:36:25,857 [INFO] Skipping bill 2008640 - already processed (2356/2564) 2025-11-14 15:36:25,857 [INFO] Skipping bill 2042761 - already processed (2357/2564) 2025-11-14 15:36:25,857 [INFO] Skipping bill 2043628 - already processed (2358/2564) 2025-11-14 15:36:25,858 [INFO] Skipping bill 1987991 - already processed (2359/2564) 2025-11-14 15:36:25,858 [INFO] Skipping bill 2039925 - already processed (2360/2564) 2025-11-14 15:36:25,858 [INFO] Skipping bill 1990438 - already processed (2361/2564) 2025-11-14 15:36:25,858 [INFO] Skipping bill 2014950 - already processed (2362/2564) 2025-11-14 15:36:25,858 [INFO] Skipping bill 2046871 - already processed (2363/2564) 2025-11-14 15:36:25,858 [INFO] Skipping bill 2008541 - already processed (2364/2564) 2025-11-14 15:36:25,858 [INFO] Skipping bill 2019807 - already processed (2365/2564) 2025-11-14 15:36:25,858 [INFO] Skipping bill 2032195 - already processed (2366/2564) 2025-11-14 15:36:25,858 [INFO] Skipping bill 2032174 - already processed (2367/2564) 
2025-11-14 15:36:25,858 [INFO] Skipping bill 2045181 - already processed (2368/2564)
2025-11-14 15:36:25,858 [INFO] Skipping bill 2035367 - already processed (2369/2564)
2025-11-14 15:36:25,858 [INFO] Skipping bill 2022504 - already processed (2370/2564)
2025-11-14 15:36:25,859 [INFO] Skipping bill 2040216 - already processed (2371/2564)
2025-11-14 15:36:25,859 [INFO] Skipping bill 2038243 - already processed (2372/2564)
2025-11-14 15:36:25,859 [INFO] Skipping bill 2038240 - already processed (2373/2564)
2025-11-14 15:36:25,859 [INFO] Skipping bill 1958579 - already processed (2374/2564)
2025-11-14 15:36:25,859 [INFO] Skipping bill 2041151 - already processed (2375/2564)
2025-11-14 15:36:25,859 [INFO] Skipping bill 2040068 - already processed (2376/2564)
2025-11-14 15:36:25,859 [INFO] Skipping bill 2035878 - already processed (2377/2564)
2025-11-14 15:36:25,859 [INFO] Skipping bill 2043698 - already processed (2378/2564)
2025-11-14 15:36:25,859 [INFO] Skipping bill 2043764 - already processed (2379/2564)
2025-11-14 15:36:25,859 [INFO] Skipping bill 2034541 - already processed (2380/2564)
2025-11-14 15:36:25,859 [INFO] Skipping bill 2036108 - already processed (2381/2564)
2025-11-14 15:36:25,859 [INFO] Skipping bill 2036914 - already processed (2382/2564)
2025-11-14 15:36:25,859 [INFO] Skipping bill 2032053 - already processed (2383/2564)
2025-11-14 15:36:25,859 [INFO] Skipping bill 2032068 - already processed (2384/2564)
2025-11-14 15:36:25,859 [INFO] Skipping bill 2045357 - already processed (2385/2564)
2025-11-14 15:36:25,859 [INFO] Skipping bill 2043047 - already processed (2386/2564)
2025-11-14 15:36:25,859 [INFO] Skipping bill 2040306 - already processed (2387/2564)
2025-11-14 15:36:25,860 [INFO] Skipping bill 1916986 - already processed (2388/2564)
2025-11-14 15:36:25,860 [INFO] Skipping bill 2039821 - already processed (2389/2564)
2025-11-14 15:36:25,860 [INFO] Skipping bill 2046891 - already processed (2390/2564)
2025-11-14 15:36:25,860 [INFO] Skipping bill 2040880 - already processed (2391/2564)
2025-11-14 15:36:25,860 [INFO] Skipping bill 2040851 - already processed (2392/2564)
2025-11-14 15:36:25,860 [INFO] Skipping bill 2043722 - already processed (2393/2564)
2025-11-14 15:36:25,860 [INFO] Skipping bill 1987950 - already processed (2394/2564)
2025-11-14 15:36:25,860 [INFO] Skipping bill 2040439 - already processed (2395/2564)
2025-11-14 15:36:25,860 [INFO] Skipping bill 1901865 - already processed (2396/2564)
2025-11-14 15:36:25,860 [INFO] Skipping bill 1905283 - already processed (2397/2564)
2025-11-14 15:36:25,860 [INFO] Skipping bill 2042107 - already processed (2398/2564)
2025-11-14 15:36:25,860 [INFO] Skipping bill 2044713 - already processed (2399/2564)
2025-11-14 15:36:25,860 [INFO] Skipping bill 2041468 - already processed (2400/2564)
2025-11-14 15:36:25,860 [INFO] Skipping bill 1983900 - already processed (2401/2564)
2025-11-14 15:36:25,860 [INFO] Skipping bill 2020217 - already processed (2402/2564)
2025-11-14 15:36:25,860 [INFO] Skipping bill 2038216 - already processed (2403/2564)
2025-11-14 15:36:25,860 [INFO] Skipping bill 2043604 - already processed (2404/2564)
2025-11-14 15:36:25,860 [INFO] Skipping bill 2045365 - already processed (2405/2564)
2025-11-14 15:36:25,860 [INFO] Skipping bill 1986270 - already processed (2406/2564)
2025-11-14 15:36:25,860 [INFO] Skipping bill 2043961 - already processed (2407/2564)
2025-11-14 15:36:25,860 [INFO] Skipping bill 2044138 - already processed (2408/2564)
2025-11-14 15:36:25,860 [INFO] Skipping bill 2040354 - already processed (2409/2564)
2025-11-14 15:36:25,860 [INFO] Skipping bill 1984221 - already processed (2410/2564)
2025-11-14 15:36:25,860 [INFO] Skipping bill 2033224 - already processed (2411/2564)
2025-11-14 15:36:25,860 [INFO] Skipping bill 2033186 - already processed (2412/2564)
2025-11-14 15:36:25,860 [INFO] Skipping bill 1970505 - already processed (2413/2564)
2025-11-14 15:36:25,860 [INFO] Skipping bill 2036132 - already processed (2414/2564)
2025-11-14 15:36:25,860 [INFO] Skipping bill 2033542 - already processed (2415/2564)
2025-11-14 15:36:25,860 [INFO] Skipping bill 2027361 - already processed (2416/2564)
2025-11-14 15:36:25,860 [INFO] Skipping bill 2040866 - already processed (2417/2564)
2025-11-14 15:36:25,860 [INFO] Skipping bill 2041757 - already processed (2418/2564)
2025-11-14 15:36:25,860 [INFO] Skipping bill 2043357 - already processed (2419/2564)
2025-11-14 15:36:25,861 [INFO] Skipping bill 2042653 - already processed (2420/2564)
2025-11-14 15:36:25,861 [INFO] Skipping bill 2043161 - already processed (2421/2564)
2025-11-14 15:36:25,861 [INFO] Skipping bill 1965963 - already processed (2422/2564)
2025-11-14 15:36:25,861 [INFO] Skipping bill 2045735 - already processed (2423/2564)
2025-11-14 15:36:25,861 [INFO] Skipping bill 1999388 - already processed (2424/2564)
2025-11-14 15:36:25,861 [INFO] Processing 2425/2564: Bill ID 2039530
2025-11-14 15:36:27,795 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:36:27,797 [ERROR] Failed to generate report for bill 2039530: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 640978 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
                                      ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 640978 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:36:28,814 [INFO] Skipping bill 1970493 - already processed (2426/2564)
2025-11-14 15:36:28,815 [INFO] Skipping bill 2037978 - already processed (2427/2564)
2025-11-14 15:36:28,815 [INFO] Skipping bill 2038111 - already processed (2428/2564)
2025-11-14 15:36:28,815 [INFO] Skipping bill 2040318 - already processed (2429/2564)
2025-11-14 15:36:28,816 [INFO] Skipping bill 2041104 - already processed (2430/2564)
2025-11-14 15:36:28,816 [INFO] Skipping bill 2043947 - already processed (2431/2564)
2025-11-14 15:36:28,816 [INFO] Skipping bill 1982722 - already processed (2432/2564)
2025-11-14 15:36:28,816 [INFO] Skipping bill 2043896 - already processed (2433/2564)
2025-11-14 15:36:28,816 [INFO] Skipping bill 2012870 - already processed (2434/2564)
2025-11-14 15:36:28,816 [INFO] Skipping bill 2007066 - already processed (2435/2564)
2025-11-14 15:36:28,816 [INFO] Skipping bill 1968860 - already processed (2436/2564)
2025-11-14 15:36:28,817 [INFO] Skipping bill 2029307 - already processed (2437/2564)
2025-11-14 15:36:28,817 [INFO] Skipping bill 2036439 - already processed (2438/2564)
2025-11-14 15:36:28,817 [INFO] Skipping bill 2041255 - already processed (2439/2564)
2025-11-14 15:36:28,817 [INFO] Skipping bill 2043715 - already processed (2440/2564)
2025-11-14 15:36:28,817 [INFO] Skipping bill 2033191 - already processed (2441/2564)
2025-11-14 15:36:28,817 [INFO] Skipping bill 1968282 - already processed (2442/2564)
2025-11-14 15:36:28,817 [INFO] Skipping bill 2039688 - already processed (2443/2564)
2025-11-14 15:36:28,817 [INFO] Skipping bill 2038212 - already processed (2444/2564)
2025-11-14 15:36:28,817 [INFO] Skipping bill 1987966 - already processed (2445/2564)
2025-11-14 15:36:28,817 [INFO] Skipping bill 2031847 - already processed (2446/2564)
2025-11-14 15:36:28,817 [INFO] Skipping bill 1970497 - already processed (2447/2564)
2025-11-14 15:36:28,818 [INFO] Skipping bill 1963353 - already processed (2448/2564)
2025-11-14 15:36:28,818 [INFO] Skipping bill 2046183 - already processed (2449/2564)
2025-11-14 15:36:28,818 [INFO] Skipping bill 2005587 - already processed (2450/2564)
2025-11-14 15:36:28,818 [INFO] Skipping bill 2039178 - already processed (2451/2564)
2025-11-14 15:36:28,818 [INFO] Skipping bill 2041269 - already processed (2452/2564)
2025-11-14 15:36:28,818 [INFO] Skipping bill 2043688 - already processed (2453/2564)
2025-11-14 15:36:28,818 [INFO] Skipping bill 1927158 - already processed (2454/2564)
2025-11-14 15:36:28,818 [INFO] Skipping bill 1987972 - already processed (2455/2564)
2025-11-14 15:36:28,818 [INFO] Skipping bill 2035895 - already processed (2456/2564)
2025-11-14 15:36:28,818 [INFO] Skipping bill 2037256 - already processed (2457/2564)
2025-11-14 15:36:28,818 [INFO] Skipping bill 2043043 - already processed (2458/2564)
2025-11-14 15:36:28,818 [INFO] Skipping bill 2031888 - already processed (2459/2564)
2025-11-14 15:36:28,818 [INFO] Skipping bill 2043344 - already processed (2460/2564)
2025-11-14 15:36:28,819 [INFO] Skipping bill 2043890 - already processed (2461/2564)
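Every failure above follows the same pattern: the serialized `bill_json` passed to `chain.invoke` is several times larger than the model's 128,000-token window, so the request is rejected before generation starts. One possible guard is to clip the payload to a token budget before invoking the chain. The sketch below is an assumption, not project code: `truncate_bill_json`, the budget, and the 4-characters-per-token heuristic are all hypothetical; only the function names `create_detailed_report`/`chain.invoke` and the 128k limit come from the traceback.

```python
import json

MAX_CONTEXT_TOKENS = 128_000  # model limit quoted in the 400 error
CHARS_PER_TOKEN = 4           # crude heuristic for English text

def truncate_bill_json(bill: dict, budget_tokens: int = 100_000) -> str:
    """Serialize a bill and clip it to an approximate token budget,
    leaving headroom below MAX_CONTEXT_TOKENS for the prompt template
    and the completion. (Hypothetical helper, not from the script.)"""
    text = json.dumps(bill)
    max_chars = budget_tokens * CHARS_PER_TOKEN
    return text if len(text) <= max_chars else text[:max_chars]

# A ~640k-token bill (~2.5M chars) would be clipped to 400k chars:
oversized = {"bill_id": 2039530, "full_text": "x" * 2_500_000}
print(len(truncate_bill_json(oversized)))  # 400000
```

Note that a naive character clip can cut the JSON mid-string and produce an invalid document; in practice one would more likely drop or shorten the bulky fields (e.g. full bill text) before serializing, or count real tokens with a tokenizer such as tiktoken.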
2025-11-14 15:36:28,819 [INFO] Skipping bill 1936780 - already processed (2462/2564)
2025-11-14 15:36:28,819 [INFO] Skipping bill 2022467 - already processed (2463/2564)
2025-11-14 15:36:28,819 [INFO] Skipping bill 2022582 - already processed (2464/2564)
2025-11-14 15:36:28,819 [INFO] Skipping bill 2023141 - already processed (2465/2564)
2025-11-14 15:36:28,819 [INFO] Skipping bill 1988006 - already processed (2466/2564)
2025-11-14 15:36:28,819 [INFO] Skipping bill 1970488 - already processed (2467/2564)
2025-11-14 15:36:28,819 [INFO] Skipping bill 1933954 - already processed (2468/2564)
2025-11-14 15:36:28,819 [INFO] Skipping bill 1955921 - already processed (2469/2564)
2025-11-14 15:36:28,819 [INFO] Skipping bill 1963338 - already processed (2470/2564)
2025-11-14 15:36:28,819 [INFO] Skipping bill 2015697 - already processed (2471/2564)
2025-11-14 15:36:28,820 [INFO] Skipping bill 2020008 - already processed (2472/2564)
2025-11-14 15:36:28,820 [INFO] Skipping bill 2021940 - already processed (2473/2564)
2025-11-14 15:36:28,820 [INFO] Skipping bill 2022593 - already processed (2474/2564)
2025-11-14 15:36:28,820 [INFO] Skipping bill 2026569 - already processed (2475/2564)
2025-11-14 15:36:28,820 [INFO] Skipping bill 2027464 - already processed (2476/2564)
2025-11-14 15:36:28,820 [INFO] Skipping bill 2018800 - already processed (2477/2564)
2025-11-14 15:36:28,820 [INFO] Skipping bill 2028784 - already processed (2478/2564)
2025-11-14 15:36:28,820 [INFO] Skipping bill 2029580 - already processed (2479/2564)
2025-11-14 15:36:28,820 [INFO] Skipping bill 2031938 - already processed (2480/2564)
2025-11-14 15:36:28,820 [INFO] Skipping bill 2032128 - already processed (2481/2564)
2025-11-14 15:36:28,820 [INFO] Skipping bill 1947775 - already processed (2482/2564)
2025-11-14 15:36:28,820 [INFO] Skipping bill 2035420 - already processed (2483/2564)
2025-11-14 15:36:28,820 [INFO] Skipping bill 2037229 - already processed (2484/2564)
2025-11-14 15:36:28,820 [INFO] Skipping bill 2039570 - already processed (2485/2564)
2025-11-14 15:36:28,820 [INFO] Skipping bill 2042103 - already processed (2486/2564)
2025-11-14 15:36:28,820 [INFO] Skipping bill 2043758 - already processed (2487/2564)
2025-11-14 15:36:28,820 [INFO] Skipping bill 2046719 - already processed (2488/2564)
2025-11-14 15:36:28,820 [INFO] Skipping bill 1979616 - already processed (2489/2564)
2025-11-14 15:36:28,820 [INFO] Skipping bill 2019782 - already processed (2490/2564)
2025-11-14 15:36:28,820 [INFO] Skipping bill 2017847 - already processed (2491/2564)
2025-11-14 15:36:28,821 [INFO] Skipping bill 2018869 - already processed (2492/2564)
2025-11-14 15:36:28,821 [INFO] Skipping bill 2040352 - already processed (2493/2564)
2025-11-14 15:36:28,821 [INFO] Skipping bill 2029980 - already processed (2494/2564)
2025-11-14 15:36:28,821 [INFO] Skipping bill 2018578 - already processed (2495/2564)
2025-11-14 15:36:28,821 [INFO] Skipping bill 2043696 - already processed (2496/2564)
2025-11-14 15:36:28,821 [INFO] Skipping bill 2008600 - already processed (2497/2564)
2025-11-14 15:36:28,821 [INFO] Skipping bill 2037247 - already processed (2498/2564)
2025-11-14 15:36:28,821 [INFO] Skipping bill 2037249 - already processed (2499/2564)
2025-11-14 15:36:28,821 [INFO] Skipping bill 2035609 - already processed (2500/2564)
2025-11-14 15:36:28,821 [INFO] Skipping bill 2038921 - already processed (2501/2564)
2025-11-14 15:36:28,821 [INFO] Skipping bill 2021715 - already processed (2502/2564)
2025-11-14 15:36:28,821 [INFO] Skipping bill 2021641 - already processed (2503/2564)
2025-11-14 15:36:28,821 [INFO] Skipping bill 1901818 - already processed (2504/2564)
2025-11-14 15:36:28,821 [INFO] Skipping bill 2023062 - already processed (2505/2564)
2025-11-14 15:36:28,821 [INFO] Skipping bill 2044841 - already processed (2506/2564)
2025-11-14 15:36:28,821 [INFO] Skipping bill 2038257 - already processed (2507/2564)
2025-11-14 15:36:28,821 [INFO] Skipping bill 2043173 - already processed (2508/2564)
2025-11-14 15:36:28,821 [INFO] Skipping bill 1948187 - already processed (2509/2564)
2025-11-14 15:36:28,821 [INFO] Skipping bill 1941772 - already processed (2510/2564)
2025-11-14 15:36:28,821 [INFO] Skipping bill 2037277 - already processed (2511/2564)
2025-11-14 15:36:28,821 [INFO] Skipping bill 2043199 - already processed (2512/2564)
2025-11-14 15:36:28,821 [INFO] Skipping bill 2041162 - already processed (2513/2564)
2025-11-14 15:36:28,821 [INFO] Skipping bill 2038970 - already processed (2514/2564)
2025-11-14 15:36:28,821 [INFO] Skipping bill 2039918 - already processed (2515/2564)
2025-11-14 15:36:28,821 [INFO] Skipping bill 2032140 - already processed (2516/2564)
2025-11-14 15:36:28,821 [INFO] Skipping bill 2029941 - already processed (2517/2564)
2025-11-14 15:36:28,821 [INFO] Skipping bill 2038420 - already processed (2518/2564)
2025-11-14 15:36:28,822 [INFO] Skipping bill 1943770 - already processed (2519/2564)
2025-11-14 15:36:28,822 [INFO] Skipping bill 1979653 - already processed (2520/2564)
2025-11-14 15:36:28,822 [INFO] Skipping bill 1970677 - already processed (2521/2564)
2025-11-14 15:36:28,822 [INFO] Skipping bill 1988332 - already processed (2522/2564)
2025-11-14 15:36:28,822 [INFO] Skipping bill 1939613 - already processed (2523/2564)
2025-11-14 15:36:28,822 [INFO] Skipping bill 2043104 - already processed (2524/2564)
2025-11-14 15:36:28,822 [INFO] Skipping bill 2000425 - already processed (2525/2564)
2025-11-14 15:36:28,822 [INFO] Skipping bill 2028805 - already processed (2526/2564)
2025-11-14 15:36:28,822 [INFO] Processing 2527/2564: Bill ID 2032901
2025-11-14 15:36:30,363 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:36:30,364 [ERROR] Failed to generate report for bill 2032901: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 455298 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
                                      ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 455298 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:36:31,380 [INFO] Skipping bill 2023111 - already processed (2528/2564)
2025-11-14 15:36:31,381 [INFO] Skipping bill 2036437 - already processed (2529/2564)
2025-11-14 15:36:31,381 [INFO] Skipping bill 2036475 - already processed (2530/2564)
2025-11-14 15:36:31,381 [INFO] Skipping bill 2032059 - already processed (2531/2564)
2025-11-14 15:36:31,382 [INFO] Skipping bill 2007053 - already processed (2532/2564)
2025-11-14 15:36:31,382 [INFO] Skipping bill 2000456 - already processed (2533/2564)
2025-11-14 15:36:31,382 [INFO] Skipping bill 2016811 - already processed (2534/2564)
2025-11-14 15:36:31,382 [INFO] Skipping bill 1958611 - already processed (2535/2564)
2025-11-14 15:36:31,382 [INFO] Skipping bill 1926891 - already processed (2536/2564)
2025-11-14 15:36:31,382 [INFO] Skipping bill 1943799 - already processed (2537/2564)
2025-11-14 15:36:31,382 [INFO] Skipping bill 2039061 - already processed (2538/2564)
2025-11-14 15:36:31,382 [INFO] Skipping bill 1961580 - already processed (2539/2564)
2025-11-14 15:36:31,382 [INFO] Skipping bill 1927000 - already processed (2540/2564)
2025-11-14 15:36:31,382 [INFO] Skipping bill 2023233 - already processed (2541/2564)
2025-11-14 15:36:31,382 [INFO] Skipping bill 1947802 - already processed (2542/2564)
2025-11-14 15:36:31,382 [INFO] Skipping bill 2022615 - already processed (2543/2564)
2025-11-14 15:36:31,382 [INFO] Skipping bill 2022439 - already processed (2544/2564)
2025-11-14 15:36:31,382 [INFO] Skipping bill 2033390 - already processed (2545/2564)
2025-11-14 15:36:31,382 [INFO] Skipping bill 2023224 - already processed (2546/2564)
2025-11-14 15:36:31,383 [INFO] Skipping bill 2026636 - already processed (2547/2564)
2025-11-14 15:36:31,383 [INFO] Skipping bill 2036925 - already processed (2548/2564)
2025-11-14 15:36:31,383 [INFO] Skipping bill 1963365 - already processed (2549/2564)
2025-11-14 15:36:31,383 [INFO] Skipping bill 2043448 - already processed (2550/2564)
2025-11-14 15:36:31,383 [INFO] Skipping bill 1994349 - already processed (2551/2564)
2025-11-14 15:36:31,383 [INFO] Skipping bill 2028140 - already processed (2552/2564)
2025-11-14 15:36:31,383 [INFO] Skipping bill 2032003 - already processed (2553/2564)
2025-11-14 15:36:31,383 [INFO] Skipping bill 2039157 - already processed (2554/2564)
2025-11-14 15:36:31,383 [INFO] Skipping bill 2044179 - already processed (2555/2564)
2025-11-14 15:36:31,383 [INFO] Skipping bill 2035673 - already processed (2556/2564)
2025-11-14 15:36:31,383 [INFO] Skipping bill 2044473 - already processed (2557/2564)
2025-11-14 15:36:31,383 [INFO] Processing 2558/2564: Bill ID 1990400
2025-11-14 15:36:32,164 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:36:32,165 [ERROR] Failed to generate report for bill 1990400: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 256134 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
                                      ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 256134 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:36:33,181 [INFO] Skipping bill 2027724 - already processed (2559/2564)
2025-11-14 15:36:33,182 [INFO] Processing 2560/2564: Bill ID 2028171
2025-11-14 15:36:33,592 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:36:33,595 [ERROR] Failed to generate report for bill 2028171: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 134487 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
                                      ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 134487 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-14 15:36:33,661 [INFO] Saved 2564 reports to data/bill_reports.json
2025-11-14 15:36:33,662 [INFO] Progress: 2560/2564 - Processed: 0, Skipped: 2446, Errors: 114
2025-11-14 15:36:34,672 [INFO] Processing 2561/2564: Bill ID 1966444
2025-11-14 15:36:35,207 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-14 15:36:35,209 [ERROR] Failed to generate report for bill 1966444: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 171945 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
                                      ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 171945 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:36:36,223 [INFO] Processing 2562/2564: Bill ID 2038906 2025-11-14 15:36:36,838 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:36:36,840 [ERROR] Failed to generate report for bill 2038906: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 192175 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 192175 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:36:37,856 [INFO] Processing 2563/2564: Bill ID 1994544 2025-11-14 15:36:38,484 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-14 15:36:38,486 [ERROR] Failed to generate report for bill 1994544: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 188475 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 188475 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-14 15:36:39,503 [INFO] Skipping bill 2041289 - already processed (2564/2564) 2025-11-14 15:36:39,550 [INFO] Saved 2564 reports to data/bill_reports.json 2025-11-14 15:36:39,550 [INFO] Report generation complete! 
2025-11-14 15:36:39,551 [INFO] Total bills: 2564 2025-11-14 15:36:39,551 [INFO] Successfully processed: 0 2025-11-14 15:36:39,551 [INFO] Skipped (already done): 2447 2025-11-14 15:36:39,551 [INFO] Errors: 117 2025-11-20 14:00:32,731 [INFO] Loaded 2564 existing reports from data/bill_reports.json 2025-11-20 14:00:32,731 [INFO] Starting report generation for 2596 bills 2025-11-20 14:00:32,731 [INFO] Skipping bill 1769530 - already processed (1/2596) 2025-11-20 14:00:32,731 [INFO] Skipping bill 1765118 - already processed (2/2596) 2025-11-20 14:00:32,731 [INFO] Skipping bill 1745017 - already processed (3/2596) 2025-11-20 14:00:32,731 [INFO] Skipping bill 1745230 - already processed (4/2596) 2025-11-20 14:00:32,731 [INFO] Skipping bill 1847915 - already processed (5/2596) 2025-11-20 14:00:32,731 [INFO] Skipping bill 1847210 - already processed (6/2596) 2025-11-20 14:00:32,731 [INFO] Skipping bill 1847980 - already processed (7/2596) 2025-11-20 14:00:32,731 [INFO] Skipping bill 1840627 - already processed (8/2596) 2025-11-20 14:00:32,731 [INFO] Skipping bill 1840340 - already processed (9/2596) 2025-11-20 14:00:32,731 [INFO] Skipping bill 2019785 - already processed (10/2596) 2025-11-20 14:00:32,731 [INFO] Skipping bill 1983607 - already processed (11/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 2019702 - already processed (12/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1987220 - already processed (13/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 2022389 - already processed (14/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1959465 - already processed (15/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 2023982 - already processed (16/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 2019732 - already processed (17/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1969654 - already processed (18/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1956622 - already processed (19/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1957166 - 
already processed (20/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1869518 - already processed (21/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1813560 - already processed (22/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1836190 - already processed (23/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1851112 - already processed (24/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1745943 - already processed (25/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1737840 - already processed (26/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1814309 - already processed (27/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1851143 - already processed (28/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1984991 - already processed (29/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1912439 - already processed (30/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1912476 - already processed (31/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1940708 - already processed (32/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1935103 - already processed (33/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1685926 - already processed (34/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1657717 - already processed (35/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1683096 - already processed (36/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1828964 - already processed (37/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1830782 - already processed (38/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1829010 - already processed (39/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1810349 - already processed (40/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1810356 - already processed (41/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1804209 - already processed (42/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1830673 - already processed (43/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1923768 - already 
processed (44/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1935042 - already processed (45/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1948089 - already processed (46/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1917064 - already processed (47/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1964274 - already processed (48/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1949161 - already processed (49/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1938396 - already processed (50/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1955446 - already processed (51/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 1946736 - already processed (52/2596) 2025-11-20 14:00:32,732 [INFO] Skipping bill 2037727 - already processed (53/2596) 2025-11-20 14:00:32,733 [INFO] Skipping bill 1730253 - already processed (54/2596) 2025-11-20 14:00:32,733 [INFO] Skipping bill 1721706 - already processed (55/2596) 2025-11-20 14:00:32,733 [INFO] Skipping bill 1975090 - already processed (56/2596) 2025-11-20 14:00:32,733 [INFO] Skipping bill 1946146 - already processed (57/2596) 2025-11-20 14:00:32,733 [INFO] Skipping bill 2018186 - already processed (58/2596) 2025-11-20 14:00:32,733 [INFO] Skipping bill 2011735 - already processed (59/2596) 2025-11-20 14:00:32,733 [INFO] Skipping bill 1897622 - already processed (60/2596) 2025-11-20 14:00:32,733 [INFO] Skipping bill 1973543 - already processed (61/2596) 2025-11-20 14:00:32,733 [INFO] Skipping bill 2009462 - already processed (62/2596) 2025-11-20 14:00:32,733 [INFO] Skipping bill 2011658 - already processed (63/2596) 2025-11-20 14:00:32,733 [INFO] Skipping bill 1944017 - already processed (64/2596) 2025-11-20 14:00:32,733 [INFO] Skipping bill 1892641 - already processed (65/2596) 2025-11-20 14:00:32,733 [INFO] Skipping bill 2010078 - already processed (66/2596) 2025-11-20 14:00:32,733 [INFO] Skipping bill 1915632 - already processed (67/2596) 2025-11-20 14:00:32,733 [INFO] Skipping bill 1996393 - already 
processed (68/2596) 2025-11-20 14:00:32,733 [INFO] Processing 69/2596: Bill ID 1972479 2025-11-20 14:00:34,380 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-20 14:00:34,382 [ERROR] Failed to generate report for bill 1972479: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 512372 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... 
**kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... **kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return 
self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 512372 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-20 14:00:35,399 [INFO] Skipping bill 1848589 - already processed (70/2596) 2025-11-20 14:00:35,399 [INFO] Skipping bill 1796695 - already processed (71/2596) 2025-11-20 14:00:35,399 [INFO] Skipping bill 1834299 - already processed (72/2596) 2025-11-20 14:00:35,399 [INFO] Skipping bill 1840453 - already processed (73/2596) 2025-11-20 14:00:35,399 [INFO] Skipping bill 1847401 - already processed (74/2596) 2025-11-20 14:00:35,399 [INFO] Skipping bill 1849339 - already processed (75/2596) 2025-11-20 14:00:35,399 [INFO] Skipping bill 1845122 - already processed (76/2596) 2025-11-20 14:00:35,399 [INFO] Skipping bill 1796692 - already processed (77/2596) 2025-11-20 14:00:35,400 [INFO] Skipping bill 1846289 - already processed (78/2596) 2025-11-20 14:00:35,400 [INFO] Skipping bill 1813231 - already processed (79/2596) 2025-11-20 14:00:35,400 [INFO] Skipping bill 1848433 - already processed (80/2596) 2025-11-20 14:00:35,400 [INFO] Skipping bill 1796691 - already processed 
(81/2596) 2025-11-20 14:00:35,400 [INFO] Skipping bill 1848536 - already processed (82/2596) 2025-11-20 14:00:35,400 [INFO] Skipping bill 1819737 - already processed (83/2596) 2025-11-20 14:00:35,400 [INFO] Skipping bill 1829037 - already processed (84/2596) 2025-11-20 14:00:35,400 [INFO] Skipping bill 1712200 - already processed (85/2596) 2025-11-20 14:00:35,400 [INFO] Skipping bill 1848424 - already processed (86/2596) 2025-11-20 14:00:35,400 [INFO] Skipping bill 1814918 - already processed (87/2596) 2025-11-20 14:00:35,400 [INFO] Skipping bill 1686429 - already processed (88/2596) 2025-11-20 14:00:35,400 [INFO] Skipping bill 1848359 - already processed (89/2596) 2025-11-20 14:00:35,400 [INFO] Skipping bill 1697069 - already processed (90/2596) 2025-11-20 14:00:35,400 [INFO] Skipping bill 1848453 - already processed (91/2596) 2025-11-20 14:00:35,400 [INFO] Skipping bill 1849513 - already processed (92/2596) 2025-11-20 14:00:35,400 [INFO] Skipping bill 1848521 - already processed (93/2596) 2025-11-20 14:00:35,401 [INFO] Skipping bill 1848425 - already processed (94/2596) 2025-11-20 14:00:35,401 [INFO] Skipping bill 1702816 - already processed (95/2596) 2025-11-20 14:00:35,401 [INFO] Skipping bill 1849367 - already processed (96/2596) 2025-11-20 14:00:35,401 [INFO] Skipping bill 1849520 - already processed (97/2596) 2025-11-20 14:00:35,401 [INFO] Skipping bill 1848530 - already processed (98/2596) 2025-11-20 14:00:35,401 [INFO] Skipping bill 1712027 - already processed (99/2596) 2025-11-20 14:00:35,401 [INFO] Skipping bill 1849659 - already processed (100/2596) 2025-11-20 14:00:35,401 [INFO] Skipping bill 1848478 - already processed (101/2596) 2025-11-20 14:00:35,401 [INFO] Skipping bill 1848387 - already processed (102/2596) 2025-11-20 14:00:35,401 [INFO] Skipping bill 1845137 - already processed (103/2596) 2025-11-20 14:00:35,401 [INFO] Skipping bill 1812205 - already processed (104/2596) 2025-11-20 14:00:35,401 [INFO] Skipping bill 1798416 - already processed 
(105/2596) 2025-11-20 14:00:35,401 [INFO] Skipping bill 1847351 - already processed (106/2596) 2025-11-20 14:00:35,401 [INFO] Skipping bill 1693943 - already processed (107/2596) 2025-11-20 14:00:35,401 [INFO] Skipping bill 1686454 - already processed (108/2596) 2025-11-20 14:00:35,402 [INFO] Skipping bill 1847404 - already processed (109/2596) 2025-11-20 14:00:35,402 [INFO] Skipping bill 1683775 - already processed (110/2596) 2025-11-20 14:00:35,402 [INFO] Skipping bill 1835452 - already processed (111/2596) 2025-11-20 14:00:35,402 [INFO] Skipping bill 1709727 - already processed (112/2596) 2025-11-20 14:00:35,402 [INFO] Skipping bill 1849724 - already processed (113/2596) 2025-11-20 14:00:35,402 [INFO] Skipping bill 1761500 - already processed (114/2596) 2025-11-20 14:00:35,402 [INFO] Skipping bill 1697048 - already processed (115/2596) 2025-11-20 14:00:35,402 [INFO] Skipping bill 1860070 - already processed (116/2596) 2025-11-20 14:00:35,402 [INFO] Skipping bill 1771300 - already processed (117/2596) 2025-11-20 14:00:35,402 [INFO] Skipping bill 1709708 - already processed (118/2596) 2025-11-20 14:00:35,402 [INFO] Skipping bill 1848529 - already processed (119/2596) 2025-11-20 14:00:35,402 [INFO] Skipping bill 1845179 - already processed (120/2596) 2025-11-20 14:00:35,402 [INFO] Skipping bill 1849404 - already processed (121/2596) 2025-11-20 14:00:35,402 [INFO] Skipping bill 1714444 - already processed (122/2596) 2025-11-20 14:00:35,402 [INFO] Skipping bill 1824468 - already processed (123/2596) 2025-11-20 14:00:35,402 [INFO] Skipping bill 1882346 - already processed (124/2596) 2025-11-20 14:00:35,402 [INFO] Skipping bill 1885654 - already processed (125/2596) 2025-11-20 14:00:35,403 [INFO] Skipping bill 1849359 - already processed (126/2596) 2025-11-20 14:00:35,403 [INFO] Skipping bill 1840414 - already processed (127/2596) 2025-11-20 14:00:35,403 [INFO] Skipping bill 1846229 - already processed (128/2596) 2025-11-20 14:00:35,403 [INFO] Skipping bill 1707510 - 
already processed (129/2596)
2025-11-20 14:00:35,403 [INFO] Skipping bill 1845188 - already processed (130/2596)
2025-11-20 14:00:35,403 [INFO] Skipping bill 1848524 - already processed (131/2596)
2025-11-20 14:00:35,403 [INFO] Skipping bill 1847496 - already processed (132/2596)
2025-11-20 14:00:35,403 [INFO] Skipping bill 1883008 - already processed (133/2596)
2025-11-20 14:00:35,403 [INFO] Skipping bill 1649620 - already processed (134/2596)
2025-11-20 14:00:35,403 [INFO] Skipping bill 1667841 - already processed (135/2596)
2025-11-20 14:00:35,403 [INFO] Skipping bill 1848476 - already processed (136/2596)
2025-11-20 14:00:35,403 [INFO] Skipping bill 1649670 - already processed (137/2596)
2025-11-20 14:00:35,403 [INFO] Skipping bill 1667891 - already processed (138/2596)
2025-11-20 14:00:35,403 [INFO] Skipping bill 1649612 - already processed (139/2596)
2025-11-20 14:00:35,403 [INFO] Skipping bill 1649615 - already processed (140/2596)
2025-11-20 14:00:35,403 [INFO] Skipping bill 1667833 - already processed (141/2596)
2025-11-20 14:00:35,403 [INFO] Skipping bill 1667836 - already processed (142/2596)
2025-11-20 14:00:35,403 [INFO] Skipping bill 1649618 - already processed (143/2596)
2025-11-20 14:00:35,403 [INFO] Skipping bill 1667839 - already processed (144/2596)
2025-11-20 14:00:35,403 [INFO] Skipping bill 1649630 - already processed (145/2596)
2025-11-20 14:00:35,403 [INFO] Skipping bill 1649619 - already processed (146/2596)
2025-11-20 14:00:35,403 [INFO] Skipping bill 1667851 - already processed (147/2596)
2025-11-20 14:00:35,403 [INFO] Skipping bill 1667840 - already processed (148/2596)
2025-11-20 14:00:35,403 [INFO] Processing 149/2596: Bill ID 1865211
2025-11-20 14:00:36,217 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:00:36,218 [ERROR] Failed to generate report for bill 1865211: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 241283 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 241283 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:00:37,228 [INFO] Skipping bill 1667837 - already processed (150/2596)
2025-11-20 14:00:37,228 [INFO] Skipping bill 1667892 - already processed (151/2596)
2025-11-20 14:00:37,229 [INFO] Skipping bill 1649616 - already processed (152/2596)
2025-11-20 14:00:37,229 [INFO] Skipping bill 1649671 - already processed (153/2596)
2025-11-20 14:00:37,229 [INFO] Processing 154/2596: Bill ID 1726105
2025-11-20 14:00:38,384 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:00:38,385 [ERROR] Failed to generate report for bill 1726105: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 343953 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:00:39,393 [INFO] Skipping bill 1978757 - already processed (155/2596)
2025-11-20 14:00:39,393 [INFO] Skipping bill 1980543 - already processed (156/2596)
2025-11-20 14:00:39,393 [INFO] Skipping bill 1893423 - already processed (157/2596)
2025-11-20 14:00:39,393 [INFO] Skipping bill 1964699 - already processed (158/2596)
2025-11-20 14:00:39,393 [INFO] Skipping bill 1978599 - already processed (159/2596)
2025-11-20 14:00:39,394 [INFO] Skipping bill 1980563 - already processed (160/2596)
2025-11-20 14:00:39,394 [INFO] Skipping bill 1976585 - already processed (161/2596)
2025-11-20 14:00:39,394 [INFO] Skipping bill 1904800 - already processed (162/2596)
2025-11-20 14:00:39,394 [INFO] Skipping bill 1974530 - already processed (163/2596)
2025-11-20 14:00:39,394 [INFO] Skipping bill 1964676 - already processed (164/2596)
2025-11-20 14:00:39,394 [INFO] Skipping bill 1955758 - already processed (165/2596)
2025-11-20 14:00:39,394 [INFO] Skipping bill 1941749 - already processed (166/2596)
2025-11-20 14:00:39,395 [INFO] Skipping bill 1976440 - already
processed (167/2596) 2025-11-20 14:00:39,395 [INFO] Skipping bill 1978812 - already processed (168/2596) 2025-11-20 14:00:39,395 [INFO] Skipping bill 1978731 - already processed (169/2596) 2025-11-20 14:00:39,395 [INFO] Skipping bill 1949687 - already processed (170/2596) 2025-11-20 14:00:39,395 [INFO] Skipping bill 1980302 - already processed (171/2596) 2025-11-20 14:00:39,395 [INFO] Skipping bill 2032041 - already processed (172/2596) 2025-11-20 14:00:39,395 [INFO] Skipping bill 1978672 - already processed (173/2596) 2025-11-20 14:00:39,395 [INFO] Skipping bill 1955756 - already processed (174/2596) 2025-11-20 14:00:39,395 [INFO] Skipping bill 1970455 - already processed (175/2596) 2025-11-20 14:00:39,395 [INFO] Skipping bill 1978694 - already processed (176/2596) 2025-11-20 14:00:39,395 [INFO] Skipping bill 1976550 - already processed (177/2596) 2025-11-20 14:00:39,395 [INFO] Skipping bill 1908207 - already processed (178/2596) 2025-11-20 14:00:39,395 [INFO] Skipping bill 1971712 - already processed (179/2596) 2025-11-20 14:00:39,396 [INFO] Skipping bill 1919273 - already processed (180/2596) 2025-11-20 14:00:39,396 [INFO] Skipping bill 1893452 - already processed (181/2596) 2025-11-20 14:00:39,396 [INFO] Skipping bill 1971760 - already processed (182/2596) 2025-11-20 14:00:39,396 [INFO] Skipping bill 1978553 - already processed (183/2596) 2025-11-20 14:00:39,396 [INFO] Skipping bill 1980501 - already processed (184/2596) 2025-11-20 14:00:39,396 [INFO] Skipping bill 1980139 - already processed (185/2596) 2025-11-20 14:00:39,396 [INFO] Skipping bill 1908210 - already processed (186/2596) 2025-11-20 14:00:39,396 [INFO] Skipping bill 1980228 - already processed (187/2596) 2025-11-20 14:00:39,396 [INFO] Skipping bill 1947445 - already processed (188/2596) 2025-11-20 14:00:39,396 [INFO] Skipping bill 1971753 - already processed (189/2596) 2025-11-20 14:00:39,396 [INFO] Skipping bill 1943407 - already processed (190/2596) 2025-11-20 14:00:39,396 [INFO] Skipping bill 
1896630 - already processed (191/2596)
2025-11-20 14:00:39,397 [INFO] Skipping bill 1953097 - already processed (192/2596)
2025-11-20 14:00:39,397 [INFO] Skipping bill 1961095 - already processed (193/2596)
2025-11-20 14:00:39,397 [INFO] Skipping bill 1953091 - already processed (194/2596)
2025-11-20 14:00:39,397 [INFO] Skipping bill 1953081 - already processed (195/2596)
2025-11-20 14:00:39,397 [INFO] Skipping bill 1978871 - already processed (196/2596)
2025-11-20 14:00:39,397 [INFO] Skipping bill 1990396 - already processed (197/2596)
2025-11-20 14:00:39,397 [INFO] Processing 198/2596: Bill ID 1980067
2025-11-20 14:00:40,313 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:00:40,314 [ERROR] Failed to generate report for bill 1980067: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 270166 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:00:41,325 [INFO] Skipping bill 1970450 - already processed (199/2596)
2025-11-20 14:00:41,326 [INFO] Skipping bill 1904793 - already processed (200/2596)
2025-11-20 14:00:41,326 [INFO] Skipping bill 1964689 - already processed (201/2596)
2025-11-20 14:00:41,326 [INFO] Skipping bill 1933300 - already processed (202/2596)
2025-11-20 14:00:41,326 [INFO] Skipping bill 2036404 - already processed (203/2596)
2025-11-20 14:00:41,326 [INFO] Skipping bill 1949685 - already processed (204/2596)
2025-11-20 14:00:41,326 [INFO] Skipping bill 1976474 - already processed (205/2596)
2025-11-20 14:00:41,326 [INFO] Skipping bill 1898373 - already processed (206/2596)
2025-11-20 14:00:41,326 [INFO] Skipping bill 2042443 - already processed (207/2596)
2025-11-20 14:00:41,326 [INFO] Skipping bill 2005483 - already processed (208/2596)
2025-11-20 14:00:41,327 [INFO] Skipping bill 1968261 - already processed (209/2596)
2025-11-20 14:00:41,327 [INFO] Skipping bill 1980234 - already processed (210/2596)
2025-11-20 14:00:41,327 [INFO] Skipping bill 1978559 - already processed (211/2596)
2025-11-20 14:00:41,327 [INFO] Skipping bill 1974545 - already processed (212/2596)
2025-11-20 14:00:41,327 [INFO] Skipping bill 1908089 - already processed (213/2596)
2025-11-20 14:00:41,327 [INFO] Skipping bill 1939198 - already processed (214/2596)
2025-11-20 14:00:41,327 [INFO] Skipping bill 1939199 - already processed (215/2596)
2025-11-20 14:00:41,327 [INFO] Skipping bill 1908087 - already processed (216/2596)
2025-11-20 14:00:41,328 [INFO] Skipping bill 1908088 - already processed (217/2596)
2025-11-20 14:00:41,328 [INFO] Skipping bill 1939200 - already processed (218/2596)
2025-11-20 14:00:41,328 [INFO] Skipping bill 1939201 - already processed (219/2596)
2025-11-20 14:00:41,328 [INFO] Skipping bill 1908090 - already processed (220/2596)
2025-11-20
14:00:41,328 [INFO] Skipping bill 1939197 - already processed (221/2596) 2025-11-20 14:00:41,328 [INFO] Skipping bill 1908086 - already processed (222/2596) 2025-11-20 14:00:41,328 [INFO] Skipping bill 1651326 - already processed (223/2596) 2025-11-20 14:00:41,328 [INFO] Skipping bill 1747628 - already processed (224/2596) 2025-11-20 14:00:41,328 [INFO] Skipping bill 1871619 - already processed (225/2596) 2025-11-20 14:00:41,329 [INFO] Skipping bill 1874953 - already processed (226/2596) 2025-11-20 14:00:41,329 [INFO] Skipping bill 1831016 - already processed (227/2596) 2025-11-20 14:00:41,329 [INFO] Skipping bill 1846007 - already processed (228/2596) 2025-11-20 14:00:41,329 [INFO] Skipping bill 2026977 - already processed (229/2596) 2025-11-20 14:00:41,329 [INFO] Skipping bill 2042502 - already processed (230/2596) 2025-11-20 14:00:41,329 [INFO] Skipping bill 2042537 - already processed (231/2596) 2025-11-20 14:00:41,329 [INFO] Skipping bill 2042540 - already processed (232/2596) 2025-11-20 14:00:41,329 [INFO] Skipping bill 1907590 - already processed (233/2596) 2025-11-20 14:00:41,329 [INFO] Skipping bill 1907863 - already processed (234/2596) 2025-11-20 14:00:41,329 [INFO] Skipping bill 2022323 - already processed (235/2596) 2025-11-20 14:00:41,329 [INFO] Skipping bill 1947638 - already processed (236/2596) 2025-11-20 14:00:41,329 [INFO] Skipping bill 1965815 - already processed (237/2596) 2025-11-20 14:00:41,329 [INFO] Skipping bill 2042471 - already processed (238/2596) 2025-11-20 14:00:41,329 [INFO] Skipping bill 2017117 - already processed (239/2596) 2025-11-20 14:00:41,330 [INFO] Skipping bill 1973900 - already processed (240/2596) 2025-11-20 14:00:41,330 [INFO] Skipping bill 2020829 - already processed (241/2596) 2025-11-20 14:00:41,330 [INFO] Skipping bill 1718823 - already processed (242/2596) 2025-11-20 14:00:41,330 [INFO] Skipping bill 1709526 - already processed (243/2596) 2025-11-20 14:00:41,330 [INFO] Skipping bill 1709356 - already processed 
(244/2596)
2025-11-20 14:00:41,330 [INFO] Skipping bill 1839016 - already processed (245/2596)
2025-11-20 14:00:41,330 [INFO] Skipping bill 1859941 - already processed (246/2596)
2025-11-20 14:00:41,330 [INFO] Skipping bill 1839023 - already processed (247/2596)
2025-11-20 14:00:41,330 [INFO] Skipping bill 1860727 - already processed (248/2596)
2025-11-20 14:00:41,330 [INFO] Processing 249/2596: Bill ID 1876979
2025-11-20 14:00:41,850 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:00:41,851 [ERROR] Failed to generate report for bill 1876979: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 150875 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:00:42,861 [INFO] Skipping bill 1905069 - already processed (250/2596)
2025-11-20 14:00:42,862 [INFO] Skipping bill 1992824 - already processed (251/2596)
2025-11-20 14:00:42,862 [INFO] Skipping bill 1957876 - already processed (252/2596)
2025-11-20 14:00:42,862 [INFO] Skipping bill 1965500 - already processed (253/2596)
2025-11-20 14:00:42,862 [INFO] Skipping bill 1990151 - already processed (254/2596)
2025-11-20 14:00:42,862 [INFO] Skipping bill 1949174 - already processed (255/2596)
2025-11-20 14:00:42,862 [INFO] Skipping bill 1905038 - already processed (256/2596)
2025-11-20 14:00:42,863 [INFO] Skipping bill 1905159 - already processed (257/2596)
2025-11-20 14:00:42,863 [INFO] Skipping bill 1907650 - already processed (258/2596)
2025-11-20 14:00:42,863 [INFO] Skipping bill 1909616 - already processed (259/2596)
2025-11-20 14:00:42,863 [INFO] Skipping bill 1909665 - already processed (260/2596)
2025-11-20 14:00:42,863 [INFO] Skipping bill 1928585 - already
processed (261/2596)
2025-11-20 14:00:42,863 [INFO] Skipping bill 1928759 - already processed (262/2596)
2025-11-20 14:00:42,863 [INFO] Skipping bill 1928904 - already processed (263/2596)
2025-11-20 14:00:42,863 [INFO] Skipping bill 1931737 - already processed (264/2596)
2025-11-20 14:00:42,866 [INFO] Skipping bill 1928076 - already processed (265/2596)
2025-11-20 14:00:42,866 [INFO] Skipping bill 1935956 - already processed (266/2596)
2025-11-20 14:00:42,866 [INFO] Skipping bill 1905222 - already processed (267/2596)
2025-11-20 14:00:42,866 [INFO] Skipping bill 1932777 - already processed (268/2596)
2025-11-20 14:00:42,866 [INFO] Skipping bill 1905141 - already processed (269/2596)
2025-11-20 14:00:42,867 [INFO] Processing 270/2596: Bill ID 2034928
2025-11-20 14:00:44,110 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:00:44,111 [ERROR] Failed to generate report for bill 2034928: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 412715 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:00:44,163 [INFO] Saved 2564 reports to data/bill_reports.json
2025-11-20 14:00:44,164 [INFO] Progress: 270/2596 - Processed: 0, Skipped: 264, Errors: 6
2025-11-20 14:00:45,169 [INFO] Skipping bill 1820947 - already processed (271/2596)
2025-11-20 14:00:45,170 [INFO] Skipping bill 2038143 - already processed (272/2596)
2025-11-20 14:00:45,170 [INFO] Skipping bill 1946119 - already processed (273/2596)
2025-11-20 14:00:45,170 [INFO] Skipping bill 2038726 - already processed (274/2596)
2025-11-20 14:00:45,170 [INFO] Skipping bill 2015494 - already processed (275/2596)
2025-11-20 14:00:45,170 [INFO] Skipping bill 1754732 - already processed (276/2596)
2025-11-20 14:00:45,171 [INFO] Skipping bill 1716623 - already processed (277/2596)
2025-11-20 14:00:45,171 [INFO] Skipping bill 1723029 - already processed (278/2596)
2025-11-20 14:00:45,171 [INFO] Skipping bill 1749221 - already processed (279/2596)
2025-11-20 14:00:45,171 [INFO] Skipping bill 1756757 - already processed (280/2596)
2025-11-20 14:00:45,171 [INFO] Skipping bill 1722774 - already
processed (281/2596)
2025-11-20 14:00:45,171 [INFO] Processing 282/2596: Bill ID 1746175
2025-11-20 14:00:46,665 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:00:46,666 [ERROR] Failed to generate report for bill 1746175: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 482085 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:00:47,678 [INFO] Skipping bill 1749049 - already processed (283/2596)
2025-11-20 14:00:47,678 [INFO] Skipping bill 1799517 - already processed (284/2596)
2025-11-20 14:00:47,679 [INFO] Skipping bill 1799058 - already processed (285/2596)
2025-11-20 14:00:47,679 [INFO] Skipping bill 1792427 - already processed (286/2596)
2025-11-20 14:00:47,679 [INFO] Skipping bill 1791537 - already processed (287/2596)
2025-11-20 14:00:47,679 [INFO] Skipping bill 1793699 - already processed (288/2596)
2025-11-20 14:00:47,679 [INFO] Skipping bill 1784035 - already processed (289/2596)
2025-11-20 14:00:47,679 [INFO] Skipping bill 1789608 - already processed (290/2596)
2025-11-20 14:00:47,679 [INFO] Skipping bill 1797287 - already processed (291/2596)
2025-11-20 14:00:47,680 [INFO] Skipping bill 1799146 - already processed (292/2596)
2025-11-20 14:00:47,680 [INFO] Skipping bill 1799256 - already processed (293/2596)
2025-11-20 14:00:47,680 [INFO] Skipping bill 1799530 - already
processed (294/2596) 2025-11-20 14:00:47,680 [INFO] Skipping bill 1799073 - already processed (295/2596) 2025-11-20 14:00:47,680 [INFO] Skipping bill 1798525 - already processed (296/2596) 2025-11-20 14:00:47,680 [INFO] Skipping bill 1812862 - already processed (297/2596) 2025-11-20 14:00:47,680 [INFO] Skipping bill 1799556 - already processed (298/2596) 2025-11-20 14:00:47,680 [INFO] Skipping bill 1793796 - already processed (299/2596) 2025-11-20 14:00:47,681 [INFO] Skipping bill 1840899 - already processed (300/2596) 2025-11-20 14:00:47,681 [INFO] Skipping bill 1849855 - already processed (301/2596) 2025-11-20 14:00:47,681 [INFO] Skipping bill 1796581 - already processed (302/2596) 2025-11-20 14:00:47,681 [INFO] Skipping bill 1785974 - already processed (303/2596) 2025-11-20 14:00:47,681 [INFO] Skipping bill 1799599 - already processed (304/2596) 2025-11-20 14:00:47,681 [INFO] Skipping bill 1799188 - already processed (305/2596) 2025-11-20 14:00:47,681 [INFO] Skipping bill 1834738 - already processed (306/2596) 2025-11-20 14:00:47,681 [INFO] Skipping bill 1799528 - already processed (307/2596) 2025-11-20 14:00:47,681 [INFO] Processing 308/2596: Bill ID 1829539 2025-11-20 14:00:49,018 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-20 14:00:49,019 [ERROR] Failed to generate report for bill 1829539: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 487138 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 487138 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-20 14:00:50,027 [INFO] Skipping bill 1953506 - already processed (309/2596) 2025-11-20 14:00:50,028 [INFO] Skipping bill 1969171 - already processed (310/2596) 2025-11-20 14:00:50,028 [INFO] Skipping bill 1963529 - already processed (311/2596) 2025-11-20 14:00:50,028 [INFO] Skipping bill 1973172 - already processed (312/2596) 2025-11-20 14:00:50,028 [INFO] Skipping bill 1977164 - already processed (313/2596) 2025-11-20 14:00:50,028 [INFO] Skipping bill 1984764 - already processed (314/2596) 2025-11-20 14:00:50,029 [INFO] Skipping bill 1988421 - already processed (315/2596) 2025-11-20 14:00:50,029 [INFO] Skipping bill 1963407 - already processed (316/2596) 2025-11-20 14:00:50,029 [INFO] Skipping bill 1977647 - already processed (317/2596) 2025-11-20 14:00:50,029 [INFO] Skipping bill 1985537 - already processed (318/2596) 2025-11-20 14:00:50,029 [INFO] Skipping bill 1988809 - already processed (319/2596) 2025-11-20 14:00:50,029 [INFO] Skipping bill 1989241 - already processed (320/2596) 2025-11-20 14:00:50,029 [INFO] Skipping bill 1980688 - already 
processed (321/2596) 2025-11-20 14:00:50,029 [INFO] Skipping bill 1985490 - already processed (322/2596) 2025-11-20 14:00:50,029 [INFO] Skipping bill 1987236 - already processed (323/2596) 2025-11-20 14:00:50,030 [INFO] Skipping bill 2009168 - already processed (324/2596) 2025-11-20 14:00:50,030 [INFO] Skipping bill 1985684 - already processed (325/2596) 2025-11-20 14:00:50,030 [INFO] Skipping bill 1982957 - already processed (326/2596) 2025-11-20 14:00:50,030 [INFO] Skipping bill 2009660 - already processed (327/2596) 2025-11-20 14:00:50,030 [INFO] Skipping bill 1987290 - already processed (328/2596) 2025-11-20 14:00:50,030 [INFO] Skipping bill 2021527 - already processed (329/2596) 2025-11-20 14:00:50,030 [INFO] Skipping bill 1984006 - already processed (330/2596) 2025-11-20 14:00:50,030 [INFO] Skipping bill 1944378 - already processed (331/2596) 2025-11-20 14:00:50,030 [INFO] Processing 332/2596: Bill ID 2016312 2025-11-20 14:00:51,576 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-20 14:00:51,577 [ERROR] Failed to generate report for bill 2016312: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 508553 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 508553 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-20 14:00:52,582 [INFO] Skipping bill 1975511 - already processed (333/2596) 2025-11-20 14:00:52,582 [INFO] Skipping bill 1807866 - already processed (334/2596) 2025-11-20 14:00:52,582 [INFO] Skipping bill 1825040 - already processed (335/2596) 2025-11-20 14:00:52,582 [INFO] Skipping bill 1824663 - already processed (336/2596) 2025-11-20 14:00:52,582 [INFO] Skipping bill 1827759 - already processed (337/2596) 2025-11-20 14:00:52,582 [INFO] Skipping bill 1807849 - already processed (338/2596) 2025-11-20 14:00:52,582 [INFO] Skipping bill 1852469 - already processed (339/2596) 2025-11-20 14:00:52,582 [INFO] Skipping bill 1724818 - already processed (340/2596) 2025-11-20 14:00:52,582 [INFO] Skipping bill 1827801 - already processed (341/2596) 2025-11-20 14:00:52,582 [INFO] Skipping bill 1842042 - already processed (342/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1800509 - already processed (343/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1829048 - already processed (344/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1691393 - already 
processed (345/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1684843 - already processed (346/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1945161 - already processed (347/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1947679 - already processed (348/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1943273 - already processed (349/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1919150 - already processed (350/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 2012228 - already processed (351/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1990355 - already processed (352/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1960995 - already processed (353/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1968119 - already processed (354/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 2006978 - already processed (355/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1974144 - already processed (356/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1974243 - already processed (357/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1974425 - already processed (358/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 2016144 - already processed (359/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1974177 - already processed (360/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1974222 - already processed (361/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1974239 - already processed (362/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1974292 - already processed (363/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1974356 - already processed (364/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1974381 - already processed (365/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1974418 - already processed (366/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1990318 - already processed (367/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1987837 - already processed (368/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 
1974421 - already processed (369/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1982057 - already processed (370/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1968164 - already processed (371/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1979990 - already processed (372/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1961023 - already processed (373/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1970366 - already processed (374/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1976266 - already processed (375/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1735435 - already processed (376/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1735103 - already processed (377/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1735239 - already processed (378/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1676639 - already processed (379/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1822936 - already processed (380/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1824099 - already processed (381/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1823066 - already processed (382/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1821100 - already processed (383/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1821376 - already processed (384/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1861884 - already processed (385/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1862091 - already processed (386/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1824408 - already processed (387/2596) 2025-11-20 14:00:52,583 [INFO] Skipping bill 1823094 - already processed (388/2596) 2025-11-20 14:00:52,584 [INFO] Skipping bill 1859976 - already processed (389/2596) 2025-11-20 14:00:52,584 [INFO] Skipping bill 1860020 - already processed (390/2596) 2025-11-20 14:00:52,584 [INFO] Skipping bill 1822457 - already processed (391/2596) 2025-11-20 14:00:52,584 [INFO] Skipping bill 1823240 - already processed (392/2596) 2025-11-20 14:00:52,584 
[INFO] Skipping bill 1822425 - already processed (393/2596) 2025-11-20 14:00:52,584 [INFO] Skipping bill 1823305 - already processed (394/2596) 2025-11-20 14:00:52,584 [INFO] Skipping bill 1816605 - already processed (395/2596) 2025-11-20 14:00:52,584 [INFO] Skipping bill 1822519 - already processed (396/2596) 2025-11-20 14:00:52,584 [INFO] Skipping bill 1822760 - already processed (397/2596) 2025-11-20 14:00:52,584 [INFO] Skipping bill 1821542 - already processed (398/2596) 2025-11-20 14:00:52,584 [INFO] Skipping bill 1862395 - already processed (399/2596) 2025-11-20 14:00:52,584 [INFO] Skipping bill 1862180 - already processed (400/2596) 2025-11-20 14:00:52,584 [INFO] Skipping bill 1820992 - already processed (401/2596) 2025-11-20 14:00:52,584 [INFO] Skipping bill 1822908 - already processed (402/2596) 2025-11-20 14:00:52,584 [INFO] Skipping bill 1816124 - already processed (403/2596) 2025-11-20 14:00:52,584 [INFO] Skipping bill 1826161 - already processed (404/2596) 2025-11-20 14:00:52,584 [INFO] Skipping bill 1822451 - already processed (405/2596) 2025-11-20 14:00:52,584 [INFO] Skipping bill 1823328 - already processed (406/2596) 2025-11-20 14:00:52,584 [INFO] Skipping bill 1860844 - already processed (407/2596) 2025-11-20 14:00:52,584 [INFO] Skipping bill 1819671 - already processed (408/2596) 2025-11-20 14:00:52,584 [INFO] Skipping bill 1815658 - already processed (409/2596) 2025-11-20 14:00:52,584 [INFO] Skipping bill 1929168 - already processed (410/2596) 2025-11-20 14:00:52,584 [INFO] Skipping bill 1939103 - already processed (411/2596) 2025-11-20 14:00:52,584 [INFO] Skipping bill 1939150 - already processed (412/2596) 2025-11-20 14:00:52,584 [INFO] Skipping bill 1924410 - already processed (413/2596) 2025-11-20 14:00:52,584 [INFO] Skipping bill 1929804 - already processed (414/2596) 2025-11-20 14:00:52,584 [INFO] Skipping bill 1929561 - already processed (415/2596) 2025-11-20 14:00:52,584 [INFO] Skipping bill 1925992 - already processed (416/2596) 
2025-11-20 14:00:52,584 [INFO] Skipping bill 1928926 - already processed (417/2596)
2025-11-20 14:00:52,584 [INFO] Skipping bill 1931961 - already processed (418/2596)
2025-11-20 14:00:52,584 [INFO] Skipping bill 1929636 - already processed (419/2596)
2025-11-20 14:00:52,584 [INFO] Skipping bill 1909994 - already processed (420/2596)
2025-11-20 14:00:52,584 [INFO] Skipping bill 1928408 - already processed (421/2596)
2025-11-20 14:00:52,584 [INFO] Skipping bill 1928598 - already processed (422/2596)
2025-11-20 14:00:52,584 [INFO] Skipping bill 1994243 - already processed (423/2596)
2025-11-20 14:00:52,584 [INFO] Skipping bill 1994303 - already processed (424/2596)
2025-11-20 14:00:52,584 [INFO] Skipping bill 1929659 - already processed (425/2596)
2025-11-20 14:00:52,584 [INFO] Skipping bill 1932766 - already processed (426/2596)
2025-11-20 14:00:52,584 [INFO] Skipping bill 1928570 - already processed (427/2596)
2025-11-20 14:00:52,584 [INFO] Skipping bill 1934608 - already processed (428/2596)
2025-11-20 14:00:52,584 [INFO] Skipping bill 1928364 - already processed (429/2596)
2025-11-20 14:00:52,584 [INFO] Skipping bill 1929760 - already processed (430/2596)
2025-11-20 14:00:52,584 [INFO] Skipping bill 1933272 - already processed (431/2596)
2025-11-20 14:00:52,584 [INFO] Skipping bill 1929496 - already processed (432/2596)
2025-11-20 14:00:52,584 [INFO] Skipping bill 1990347 - already processed (433/2596)
2025-11-20 14:00:52,584 [INFO] Skipping bill 1995251 - already processed (434/2596)
2025-11-20 14:00:52,584 [INFO] Skipping bill 1995449 - already processed (435/2596)
2025-11-20 14:00:52,584 [INFO] Skipping bill 1995259 - already processed (436/2596)
2025-11-20 14:00:52,585 [INFO] Skipping bill 1995271 - already processed (437/2596)
2025-11-20 14:00:52,585 [INFO] Skipping bill 1995747 - already processed (438/2596)
2025-11-20 14:00:52,585 [INFO] Skipping bill 1991557 - already processed (439/2596)
2025-11-20 14:00:52,585 [INFO] Skipping bill 1991563 - already processed (440/2596)
2025-11-20 14:00:52,585 [INFO] Skipping bill 1995783 - already processed (441/2596)
2025-11-20 14:00:52,585 [INFO] Skipping bill 1929457 - already processed (442/2596)
2025-11-20 14:00:52,585 [INFO] Skipping bill 1915997 - already processed (443/2596)
2025-11-20 14:00:52,585 [INFO] Skipping bill 1933178 - already processed (444/2596)
2025-11-20 14:00:52,585 [INFO] Skipping bill 1992758 - already processed (445/2596)
2025-11-20 14:00:52,585 [INFO] Skipping bill 1993026 - already processed (446/2596)
2025-11-20 14:00:52,585 [INFO] Skipping bill 1995569 - already processed (447/2596)
2025-11-20 14:00:52,585 [INFO] Skipping bill 1992805 - already processed (448/2596)
2025-11-20 14:00:52,585 [INFO] Skipping bill 1995900 - already processed (449/2596)
2025-11-20 14:00:52,585 [INFO] Skipping bill 1993019 - already processed (450/2596)
2025-11-20 14:00:52,585 [INFO] Skipping bill 1847870 - already processed (451/2596)
2025-11-20 14:00:52,585 [INFO] Skipping bill 1812600 - already processed (452/2596)
2025-11-20 14:00:52,585 [INFO] Skipping bill 1848008 - already processed (453/2596)
2025-11-20 14:00:52,585 [INFO] Skipping bill 1825516 - already processed (454/2596)
2025-11-20 14:00:52,585 [INFO] Processing 455/2596: Bill ID 1845026
2025-11-20 14:00:53,038 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:00:53,039 [ERROR] Failed to generate report for bill 1845026: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 153566 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:00:54,049 [INFO] Skipping bill 1962312 - already processed (456/2596)
2025-11-20 14:00:54,049 [INFO] Skipping bill 1954011 - already processed (457/2596)
2025-11-20 14:00:54,049 [INFO] Skipping bill 1991380 - already processed (458/2596)
2025-11-20 14:00:54,049 [INFO] Processing 459/2596: Bill ID 2011846
2025-11-20 14:00:54,957 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:00:54,958 [ERROR] Failed to generate report for bill 2011846: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 147671 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:00:55,966 [INFO] Skipping bill 1838778 - already processed (460/2596)
2025-11-20 14:00:55,967 [INFO] Skipping bill 1713666 - already processed (461/2596)
2025-11-20 14:00:55,967 [INFO] Skipping bill 1837146 - already processed (462/2596)
2025-11-20 14:00:55,967 [INFO] Skipping bill 1842401 - already processed (463/2596)
2025-11-20 14:00:55,967 [INFO] Skipping bill 1838992 - already processed (464/2596)
2025-11-20 14:00:55,967 [INFO] Skipping bill 1840748 - already processed (465/2596)
2025-11-20 14:00:55,968 [INFO] Skipping bill 1841780 - already processed (466/2596)
2025-11-20 14:00:55,968 [INFO] Skipping bill 1831504 - already processed (467/2596)
2025-11-20 14:00:55,968 [INFO] Skipping bill 1832905 - already processed (468/2596)
2025-11-20 14:00:55,968 [INFO] Skipping bill 1843072 - already processed (469/2596)
2025-11-20 14:00:55,968 [INFO] Skipping bill 1839869 - already processed (470/2596)
2025-11-20 14:00:55,968 [INFO] Skipping bill 1814012 - already processed (471/2596)
2025-11-20 14:00:55,968 [INFO] Skipping bill 1842520 - already
processed (472/2596) 2025-11-20 14:00:55,970 [INFO] Skipping bill 1835262 - already processed (473/2596) 2025-11-20 14:00:55,970 [INFO] Skipping bill 1843020 - already processed (474/2596) 2025-11-20 14:00:55,970 [INFO] Skipping bill 1878243 - already processed (475/2596) 2025-11-20 14:00:55,970 [INFO] Skipping bill 1893072 - already processed (476/2596) 2025-11-20 14:00:55,970 [INFO] Skipping bill 1713755 - already processed (477/2596) 2025-11-20 14:00:55,970 [INFO] Skipping bill 1842316 - already processed (478/2596) 2025-11-20 14:00:55,970 [INFO] Skipping bill 1838852 - already processed (479/2596) 2025-11-20 14:00:55,971 [INFO] Skipping bill 1838748 - already processed (480/2596) 2025-11-20 14:00:55,971 [INFO] Skipping bill 1635340 - already processed (481/2596) 2025-11-20 14:00:55,971 [INFO] Skipping bill 1713127 - already processed (482/2596) 2025-11-20 14:00:55,971 [INFO] Skipping bill 1818470 - already processed (483/2596) 2025-11-20 14:00:55,971 [INFO] Skipping bill 1837189 - already processed (484/2596) 2025-11-20 14:00:55,971 [INFO] Skipping bill 1635556 - already processed (485/2596) 2025-11-20 14:00:55,971 [INFO] Skipping bill 1692465 - already processed (486/2596) 2025-11-20 14:00:55,971 [INFO] Skipping bill 1843326 - already processed (487/2596) 2025-11-20 14:00:55,971 [INFO] Skipping bill 1822203 - already processed (488/2596) 2025-11-20 14:00:55,971 [INFO] Skipping bill 1838434 - already processed (489/2596) 2025-11-20 14:00:55,971 [INFO] Skipping bill 1714042 - already processed (490/2596) 2025-11-20 14:00:55,971 [INFO] Skipping bill 1840824 - already processed (491/2596) 2025-11-20 14:00:55,971 [INFO] Skipping bill 1810043 - already processed (492/2596) 2025-11-20 14:00:55,971 [INFO] Skipping bill 1762665 - already processed (493/2596) 2025-11-20 14:00:55,971 [INFO] Skipping bill 1831619 - already processed (494/2596) 2025-11-20 14:00:55,971 [INFO] Skipping bill 1712988 - already processed (495/2596) 2025-11-20 14:00:55,971 [INFO] Skipping bill 
1704077 - already processed (496/2596) 2025-11-20 14:00:55,971 [INFO] Skipping bill 1712903 - already processed (497/2596) 2025-11-20 14:00:55,971 [INFO] Skipping bill 1818714 - already processed (498/2596) 2025-11-20 14:00:55,971 [INFO] Skipping bill 1842743 - already processed (499/2596) 2025-11-20 14:00:55,972 [INFO] Processing 500/2596: Bill ID 1838518 2025-11-20 14:00:58,222 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-20 14:00:58,225 [ERROR] Failed to generate report for bill 1838518: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 853564 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... 
**kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... **kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return 
self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 853564 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-20 14:00:58,285 [INFO] Saved 2564 reports to data/bill_reports.json 2025-11-20 14:00:58,285 [INFO] Progress: 500/2596 - Processed: 0, Skipped: 488, Errors: 12 2025-11-20 14:00:59,290 [INFO] Processing 501/2596: Bill ID 1794181 2025-11-20 14:00:59,823 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-20 14:00:59,824 [ERROR] Failed to generate report for bill 1794181: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 151032 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 151032 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-20 14:01:00,834 [INFO] Processing 502/2596: Bill ID 1708593 2025-11-20 14:01:01,335 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-20 14:01:01,337 [ERROR] Failed to generate report for bill 1708593: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 139146 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 139146 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-20 14:01:02,345 [INFO] Processing 503/2596: Bill ID 1704148 2025-11-20 14:01:04,504 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-20 14:01:04,505 [ERROR] Failed to generate report for bill 1704148: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 823023 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 823023 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-20 14:01:05,515 [INFO] Processing 504/2596: Bill ID 1704278 2025-11-20 14:01:07,372 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-20 14:01:07,374 [ERROR] Failed to generate report for bill 1704278: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 823015 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 823015 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-20 14:01:08,384 [INFO] Skipping bill 1714051 - already processed (505/2596) 2025-11-20 14:01:08,384 [INFO] Skipping bill 1951980 - already processed (506/2596) 2025-11-20 14:01:08,384 [INFO] Skipping bill 1942546 - already processed (507/2596) 2025-11-20 14:01:08,384 [INFO] Skipping bill 1954662 - already processed (508/2596) 2025-11-20 14:01:08,384 [INFO] Skipping bill 1962278 - already processed (509/2596) 2025-11-20 14:01:08,385 [INFO] Skipping bill 1959604 - already processed (510/2596) 2025-11-20 14:01:08,385 [INFO] Skipping bill 1961963 - already processed (511/2596) 2025-11-20 14:01:08,386 [INFO] Skipping bill 1906420 - already processed (512/2596) 2025-11-20 14:01:08,386 [INFO] Skipping bill 1959700 - already processed (513/2596) 2025-11-20 14:01:08,387 [INFO] Skipping bill 1960223 - already processed (514/2596) 2025-11-20 14:01:08,387 [INFO] Skipping bill 1955104 - already processed (515/2596) 2025-11-20 14:01:08,387 [INFO] Skipping bill 1962582 - already processed (516/2596) 2025-11-20 14:01:08,387 [INFO] Skipping bill 1945671 - already 
processed (517/2596)
2025-11-20 14:01:08,387 [INFO] Skipping bill 1927329 - already processed (518/2596)
2025-11-20 14:01:08,387 [INFO] Skipping bill 1950703 - already processed (519/2596)
2025-11-20 14:01:08,387 [INFO] Skipping bill 1962488 - already processed (520/2596)
2025-11-20 14:01:08,387 [INFO] Skipping bill 1945525 - already processed (521/2596)
2025-11-20 14:01:08,387 [INFO] Skipping bill 1958920 - already processed (522/2596)
2025-11-20 14:01:08,387 [INFO] Skipping bill 1962097 - already processed (523/2596)
2025-11-20 14:01:08,387 [INFO] Skipping bill 1963192 - already processed (524/2596)
2025-11-20 14:01:08,387 [INFO] Skipping bill 1947169 - already processed (525/2596)
2025-11-20 14:01:08,387 [INFO] Skipping bill 1961929 - already processed (526/2596)
2025-11-20 14:01:08,388 [INFO] Skipping bill 1962057 - already processed (527/2596)
2025-11-20 14:01:08,388 [INFO] Skipping bill 1973797 - already processed (528/2596)
2025-11-20 14:01:08,388 [INFO] Skipping bill 1963087 - already processed (529/2596)
2025-11-20 14:01:08,388 [INFO] Skipping bill 1940139 - already processed (530/2596)
2025-11-20 14:01:08,388 [INFO] Skipping bill 1941211 - already processed (531/2596)
2025-11-20 14:01:08,388 [INFO] Skipping bill 1906434 - already processed (532/2596)
2025-11-20 14:01:08,388 [INFO] Skipping bill 1963178 - already processed (533/2596)
2025-11-20 14:01:08,388 [INFO] Skipping bill 1954188 - already processed (534/2596)
2025-11-20 14:01:08,388 [INFO] Skipping bill 1954475 - already processed (535/2596)
2025-11-20 14:01:08,388 [INFO] Skipping bill 1957381 - already processed (536/2596)
2025-11-20 14:01:08,388 [INFO] Skipping bill 1962329 - already processed (537/2596)
2025-11-20 14:01:08,388 [INFO] Skipping bill 1962675 - already processed (538/2596)
2025-11-20 14:01:08,388 [INFO] Skipping bill 1935756 - already processed (539/2596)
2025-11-20 14:01:08,388 [INFO] Skipping bill 1945467 - already processed (540/2596)
2025-11-20 14:01:08,389 [INFO] Skipping bill 1907066 - already processed (541/2596)
2025-11-20 14:01:08,389 [INFO] Skipping bill 1985138 - already processed (542/2596)
2025-11-20 14:01:08,389 [INFO] Skipping bill 1961501 - already processed (543/2596)
2025-11-20 14:01:08,389 [INFO] Skipping bill 1962291 - already processed (544/2596)
2025-11-20 14:01:08,389 [INFO] Skipping bill 2034790 - already processed (545/2596)
2025-11-20 14:01:08,389 [INFO] Processing 546/2596: Bill ID 2047690
2025-11-20 14:01:22,916 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2025-11-20 14:01:22,928 [INFO] Processing 547/2596: Bill ID 2052256
2025-11-20 14:01:36,634 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2025-11-20 14:01:36,639 [INFO] Skipping bill 1962885 - already processed (548/2596)
2025-11-20 14:01:36,639 [INFO] Skipping bill 1960413 - already processed (549/2596)
2025-11-20 14:01:36,639 [INFO] Skipping bill 1959956 - already processed (550/2596)
2025-11-20 14:01:36,639 [INFO] Processing 551/2596: Bill ID 1962986
2025-11-20 14:01:39,809 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:01:39,812 [ERROR] Failed to generate report for bill 1962986: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1167379 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
                                      ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1167379 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:01:40,823 [INFO] Processing 552/2596: Bill ID 1960510
2025-11-20 14:01:41,447 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:01:41,450 [ERROR] Failed to generate report for bill 1960510: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 156228 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:01:42,463 [INFO] Skipping bill 1962952 - already processed (553/2596)
2025-11-20 14:01:42,466 [INFO] Processing 554/2596: Bill ID 1645841
2025-11-20 14:01:43,090 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:01:43,091 [ERROR] Failed to generate report for bill 1645841: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 162324 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:01:44,102 [INFO] Skipping bill 1799709 - already processed (555/2596)
2025-11-20 14:01:44,102 [INFO] Skipping bill 1797422 - already processed (556/2596)
2025-11-20 14:01:44,102 [INFO] Skipping bill 1801018 - already processed (557/2596)
2025-11-20 14:01:44,102 [INFO] Skipping bill 1799688 - already processed (558/2596)
2025-11-20 14:01:44,103 [INFO] Skipping bill 1909475 - already processed (559/2596)
2025-11-20 14:01:44,103 [INFO] Skipping bill 1921138 - already processed (560/2596)
2025-11-20 14:01:44,103 [INFO] Skipping bill 1917007 - already processed (561/2596)
2025-11-20 14:01:44,103 [INFO] Skipping bill 1921879 - already processed (562/2596)
2025-11-20 14:01:44,103 [INFO] Skipping bill 1915249 - already processed (563/2596)
2025-11-20 14:01:44,103 [INFO] Skipping bill 1912345 - already processed (564/2596)
2025-11-20 14:01:44,103 [INFO] Processing 565/2596: Bill ID 1897676
2025-11-20 14:01:44,722 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:01:44,723 [ERROR] Failed to
generate report for bill 1897676: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 165130 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:01:45,731 [INFO] Skipping bill 1847772 - already processed (566/2596)
2025-11-20 14:01:45,732 [INFO] Skipping bill 1825218 - already processed (567/2596)
2025-11-20 14:01:45,732 [INFO] Skipping bill 1839463 - already processed (568/2596)
2025-11-20 14:01:45,732 [INFO] Skipping bill 1665194 - already processed (569/2596)
2025-11-20 14:01:45,732 [INFO] Skipping bill 1708118 - already processed (570/2596)
2025-11-20 14:01:45,732 [INFO] Skipping bill 1802090 - already processed (571/2596)
2025-11-20 14:01:45,733 [INFO] Skipping bill 1823725 - already processed (572/2596)
2025-11-20 14:01:45,733 [INFO] Skipping bill 1845657 - already processed (573/2596)
2025-11-20 14:01:45,733 [INFO] Skipping bill 1846612 - already processed (574/2596)
2025-11-20 14:01:45,733 [INFO] Skipping bill 1870077 - already processed (575/2596)
2025-11-20 14:01:45,733 [INFO] Skipping bill 1870897 - already processed (576/2596)
2025-11-20 14:01:45,733 [INFO] Skipping bill 1761153 - already processed (577/2596)
2025-11-20 14:01:45,733 [INFO] Skipping bill 1760883 - already
processed (578/2596)
2025-11-20 14:01:45,733 [INFO] Skipping bill 1752922 - already processed (579/2596)
2025-11-20 14:01:45,734 [INFO] Skipping bill 1873484 - already processed (580/2596)
2025-11-20 14:01:45,734 [INFO] Skipping bill 1990915 - already processed (581/2596)
2025-11-20 14:01:45,734 [INFO] Skipping bill 1969038 - already processed (582/2596)
2025-11-20 14:01:45,734 [INFO] Skipping bill 1993838 - already processed (583/2596)
2025-11-20 14:01:45,736 [INFO] Skipping bill 1958795 - already processed (584/2596)
2025-11-20 14:01:45,736 [INFO] Skipping bill 1977734 - already processed (585/2596)
2025-11-20 14:01:45,736 [INFO] Skipping bill 1937592 - already processed (586/2596)
2025-11-20 14:01:45,736 [INFO] Skipping bill 1963811 - already processed (587/2596)
2025-11-20 14:01:45,736 [INFO] Skipping bill 2029033 - already processed (588/2596)
2025-11-20 14:01:45,737 [INFO] Skipping bill 2026836 - already processed (589/2596)
2025-11-20 14:01:45,737 [INFO] Skipping bill 2027180 - already processed (590/2596)
2025-11-20 14:01:45,737 [INFO] Skipping bill 2021349 - already processed (591/2596)
2025-11-20 14:01:45,737 [INFO] Skipping bill 2030059 - already processed (592/2596)
2025-11-20 14:01:45,737 [INFO] Skipping bill 1823829 - already processed (593/2596)
2025-11-20 14:01:45,737 [INFO] Skipping bill 1824037 - already processed (594/2596)
2025-11-20 14:01:45,737 [INFO] Skipping bill 1850989 - already processed (595/2596)
2025-11-20 14:01:45,738 [INFO] Skipping bill 1826921 - already processed (596/2596)
2025-11-20 14:01:45,738 [INFO] Skipping bill 1690087 - already processed (597/2596)
2025-11-20 14:01:45,738 [INFO] Processing 598/2596: Bill ID 1693524
2025-11-20 14:01:46,566 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:01:46,567 [ERROR] Failed to generate report for bill 1693524: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 225348 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:01:47,573 [INFO] Skipping bill 1665637 - already processed (599/2596)
2025-11-20 14:01:47,573 [INFO] Skipping bill 1682635 - already processed (600/2596)
2025-11-20 14:01:47,574 [INFO] Processing 601/2596: Bill ID 1692213
2025-11-20 14:01:48,328 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:01:48,330 [ERROR] Failed to generate report for bill 1692213: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 225670 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:01:49,338 [INFO] Processing 602/2596: Bill ID 1846626
2025-11-20 14:01:50,049 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:01:50,051 [ERROR] Failed to generate report for bill 1846626: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 225565 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 225565 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:01:51,062 [INFO] Processing 603/2596: Bill ID 1846675
2025-11-20 14:01:51,790 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:01:51,792 [ERROR] Failed to generate report for bill 1846675: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 225290 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:01:52,797 [INFO] Skipping bill 1653927 - already processed (604/2596)
2025-11-20 14:01:52,798 [INFO] Skipping bill 1959326 - already processed (605/2596)
2025-11-20 14:01:52,798 [INFO] Skipping bill 1948632 - already processed (606/2596)
2025-11-20 14:01:52,798 [INFO] Skipping bill 1955060 - already processed (607/2596)
2025-11-20 14:01:52,798 [INFO] Skipping bill 1946546 - already processed (608/2596)
2025-11-20 14:01:52,798 [INFO] Processing 609/2596: Bill ID 1916487
2025-11-20 14:01:53,529 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:01:53,532 [ERROR] Failed to generate report for bill 1916487: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 242611 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:01:54,542 [INFO] Skipping bill 1949165 - already processed (610/2596)
2025-11-20 14:01:54,544 [INFO] Processing 611/2596: Bill ID 1938020
2025-11-20 14:01:55,374 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:01:55,378 [ERROR] Failed to generate report for bill 1938020: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 238559 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:01:56,387 [INFO] Processing 612/2596: Bill ID 1937464
2025-11-20 14:01:57,218 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:01:57,220 [ERROR] Failed to generate report for bill 1937464: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 238890 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:01:58,236 [INFO] Processing 613/2596: Bill ID 1713253
2025-11-20 14:01:58,854 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:01:58,856 [ERROR] Failed to generate report for bill 1713253: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 176351 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:01:59,864 [INFO] Skipping bill 1804283 - already processed (614/2596)
2025-11-20 14:01:59,865 [INFO] Skipping bill 1795473 - already processed (615/2596)
2025-11-20 14:01:59,865 [INFO] Skipping bill 1855405 - already processed (616/2596)
2025-11-20 14:01:59,865 [INFO] Skipping bill 1848823 - already processed (617/2596)
2025-11-20 14:01:59,865 [INFO] Skipping bill 1842483 - already processed (618/2596)
2025-11-20 14:01:59,865 [INFO] Skipping bill 1854786 - already processed (619/2596)
2025-11-20 14:01:59,865 [INFO] Skipping bill 1795485 - already processed (620/2596)
2025-11-20 14:01:59,865 [INFO] Skipping bill 1854739 - already processed (621/2596)
2025-11-20 14:01:59,865 [INFO] Skipping bill 1799043 - already processed (622/2596)
2025-11-20 14:01:59,865 [INFO] Skipping bill 1974284 - already processed (623/2596)
2025-11-20 14:01:59,865 [INFO] Skipping bill 1974163 - already processed (624/2596)
2025-11-20 14:01:59,866 [INFO] Skipping bill 1994222 - already processed (625/2596)
2025-11-20 14:01:59,866 [INFO] Skipping bill 1970124 - already processed (626/2596)
2025-11-20 14:01:59,866 [INFO] Skipping bill 1908054 - already processed (627/2596)
2025-11-20 14:01:59,866 [INFO] Skipping bill 1904666 - already processed (628/2596)
2025-11-20 14:01:59,866 [INFO] Skipping bill 1975714 - already processed (629/2596)
2025-11-20 14:01:59,866 [INFO] Skipping bill 1974214 - already processed (630/2596)
2025-11-20 14:01:59,866 [INFO] Skipping bill 1765786 - already processed (631/2596)
2025-11-20 14:01:59,866 [INFO] Skipping bill 1751941 - already processed (632/2596)
2025-11-20 14:01:59,866 [INFO] Skipping bill 1747213 - already processed (633/2596)
2025-11-20 14:01:59,866 [INFO] Skipping bill 1872579 - already processed (634/2596)
2025-11-20 14:01:59,866 [INFO] Skipping bill 1831630 - already processed (635/2596)
2025-11-20 14:01:59,866 [INFO] Skipping bill 1869553 - already processed (636/2596)
2025-11-20 14:01:59,866 [INFO] Skipping bill 1856482 - already processed (637/2596)
2025-11-20 14:01:59,867 [INFO] Skipping bill 1877177 - already processed (638/2596)
2025-11-20 14:01:59,867 [INFO] Skipping bill 1856535 - already processed (639/2596)
2025-11-20 14:01:59,867 [INFO] Processing 640/2596: Bill ID 1856106
2025-11-20 14:02:00,293 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:02:00,294 [ERROR] Failed to generate report for bill 1856106: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 139494 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:02:00,344 [INFO] Saved 2566 reports to data/bill_reports.json
2025-11-20 14:02:00,345 [INFO] Progress: 640/2596 - Processed: 2, Skipped: 609, Errors: 29
2025-11-20 14:02:01,350 [INFO] Skipping bill 2036140 - already processed (641/2596)
2025-11-20 14:02:01,351 [INFO] Skipping bill 2013841 - already processed (642/2596)
2025-11-20 14:02:01,352 [INFO] Skipping bill 2036152 - already processed (643/2596)
2025-11-20 14:02:01,352 [INFO] Skipping bill 2035054 - already processed (644/2596)
2025-11-20 14:02:01,352 [INFO] Skipping bill 2020836 - already processed (645/2596)
2025-11-20 14:02:01,353 [INFO] Skipping bill 2034414 - already processed (646/2596)
2025-11-20 14:02:01,353 [INFO] Skipping bill 2036147 - already processed (647/2596)
2025-11-20 14:02:01,353 [INFO] Skipping bill 2017245 - already processed (648/2596)
2025-11-20 14:02:01,353 [INFO] Processing 649/2596: Bill ID 2020366
2025-11-20 14:02:01,825 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:02:01,827 [ERROR] Failed to
generate report for bill 2020366: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 138834 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 138834 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-20 14:02:02,836 [INFO] Skipping bill 1754734 - already processed (650/2596) 2025-11-20 14:02:02,837 [INFO] Skipping bill 1766525 - already processed (651/2596) 2025-11-20 14:02:02,837 [INFO] Skipping bill 1993701 - already processed (652/2596) 2025-11-20 14:02:02,837 [INFO] Skipping bill 2024454 - already processed (653/2596) 2025-11-20 14:02:02,837 [INFO] Skipping bill 1989654 - already processed (654/2596) 2025-11-20 14:02:02,837 [INFO] Skipping bill 1923257 - already processed (655/2596) 2025-11-20 14:02:02,837 [INFO] Skipping bill 2012930 - already processed (656/2596) 2025-11-20 14:02:02,837 [INFO] Skipping bill 2022043 - already processed (657/2596) 2025-11-20 14:02:02,837 [INFO] Skipping bill 1977885 - already processed (658/2596) 2025-11-20 14:02:02,837 [INFO] Skipping bill 1903898 - already processed (659/2596) 2025-11-20 14:02:02,838 [INFO] Skipping bill 2022085 - already processed (660/2596) 2025-11-20 14:02:02,838 [INFO] Skipping bill 2024471 - already processed (661/2596) 2025-11-20 14:02:02,838 [INFO] Skipping bill 1962449 - already 
processed (662/2596)
2025-11-20 14:02:02,838 [INFO] Skipping bill 1948585 - already processed (663/2596)
2025-11-20 14:02:02,838 [INFO] Skipping bill 2027763 - already processed (664/2596)
2025-11-20 14:02:02,838 [INFO] Skipping bill 2038183 - already processed (665/2596)
2025-11-20 14:02:02,838 [INFO] Skipping bill 2012908 - already processed (666/2596)
2025-11-20 14:02:02,838 [INFO] Skipping bill 1703457 - already processed (667/2596)
2025-11-20 14:02:02,838 [INFO] Skipping bill 1703326 - already processed (668/2596)
2025-11-20 14:02:02,838 [INFO] Skipping bill 1703583 - already processed (669/2596)
2025-11-20 14:02:02,838 [INFO] Skipping bill 1703488 - already processed (670/2596)
2025-11-20 14:02:02,838 [INFO] Skipping bill 1694229 - already processed (671/2596)
2025-11-20 14:02:02,838 [INFO] Skipping bill 1697293 - already processed (672/2596)
2025-11-20 14:02:02,839 [INFO] Skipping bill 1694179 - already processed (673/2596)
2025-11-20 14:02:02,839 [INFO] Skipping bill 1707790 - already processed (674/2596)
2025-11-20 14:02:02,839 [INFO] Skipping bill 1691409 - already processed (675/2596)
2025-11-20 14:02:02,839 [INFO] Skipping bill 1679149 - already processed (676/2596)
2025-11-20 14:02:02,839 [INFO] Skipping bill 1697468 - already processed (677/2596)
2025-11-20 14:02:02,839 [INFO] Skipping bill 1703148 - already processed (678/2596)
2025-11-20 14:02:02,839 [INFO] Skipping bill 1835739 - already processed (679/2596)
2025-11-20 14:02:02,839 [INFO] Skipping bill 1840482 - already processed (680/2596)
2025-11-20 14:02:02,839 [INFO] Skipping bill 1842215 - already processed (681/2596)
2025-11-20 14:02:02,839 [INFO] Skipping bill 1838035 - already processed (682/2596)
2025-11-20 14:02:02,839 [INFO] Skipping bill 1842106 - already processed (683/2596)
2025-11-20 14:02:02,839 [INFO] Skipping bill 1839236 - already processed (684/2596)
2025-11-20 14:02:02,840 [INFO] Skipping bill 1839142 - already processed (685/2596)
2025-11-20 14:02:02,840 [INFO] Skipping bill 1838028 - already processed (686/2596)
2025-11-20 14:02:02,840 [INFO] Skipping bill 1837867 - already processed (687/2596)
2025-11-20 14:02:02,840 [INFO] Skipping bill 1835606 - already processed (688/2596)
2025-11-20 14:02:02,840 [INFO] Skipping bill 1825025 - already processed (689/2596)
2025-11-20 14:02:02,840 [INFO] Skipping bill 1826297 - already processed (690/2596)
2025-11-20 14:02:02,840 [INFO] Skipping bill 1847549 - already processed (691/2596)
2025-11-20 14:02:02,840 [INFO] Skipping bill 1839307 - already processed (692/2596)
2025-11-20 14:02:02,840 [INFO] Skipping bill 1842129 - already processed (693/2596)
2025-11-20 14:02:02,840 [INFO] Skipping bill 1837909 - already processed (694/2596)
2025-11-20 14:02:02,841 [INFO] Skipping bill 1797714 - already processed (695/2596)
2025-11-20 14:02:02,841 [INFO] Skipping bill 1839204 - already processed (696/2596)
2025-11-20 14:02:02,841 [INFO] Skipping bill 1835710 - already processed (697/2596)
2025-11-20 14:02:02,841 [INFO] Skipping bill 1837838 - already processed (698/2596)
2025-11-20 14:02:02,841 [INFO] Skipping bill 1837893 - already processed (699/2596)
2025-11-20 14:02:02,841 [INFO] Skipping bill 1835695 - already processed (700/2596)
2025-11-20 14:02:02,841 [INFO] Skipping bill 1837995 - already processed (701/2596)
2025-11-20 14:02:02,841 [INFO] Skipping bill 1842172 - already processed (702/2596)
2025-11-20 14:02:02,841 [INFO] Skipping bill 1817737 - already processed (703/2596)
2025-11-20 14:02:02,841 [INFO] Skipping bill 1953268 - already processed (704/2596)
2025-11-20 14:02:02,841 [INFO] Skipping bill 1961326 - already processed (705/2596)
2025-11-20 14:02:02,841 [INFO] Skipping bill 1961123 - already processed (706/2596)
2025-11-20 14:02:02,841 [INFO] Skipping bill 1953218 - already processed (707/2596)
2025-11-20 14:02:02,841 [INFO] Skipping bill 1945231 - already processed (708/2596)
2025-11-20 14:02:02,841 [INFO] Skipping bill 1949851 - already processed (709/2596)
2025-11-20 14:02:02,841 [INFO] Skipping bill 1945281 - already processed (710/2596)
2025-11-20 14:02:02,841 [INFO] Skipping bill 1945285 - already processed (711/2596)
2025-11-20 14:02:02,841 [INFO] Skipping bill 1949794 - already processed (712/2596)
2025-11-20 14:02:02,841 [INFO] Skipping bill 1949746 - already processed (713/2596)
2025-11-20 14:02:02,842 [INFO] Skipping bill 1949835 - already processed (714/2596)
2025-11-20 14:02:02,842 [INFO] Skipping bill 1961190 - already processed (715/2596)
2025-11-20 14:02:02,842 [INFO] Skipping bill 1953113 - already processed (716/2596)
2025-11-20 14:02:02,842 [INFO] Skipping bill 1936713 - already processed (717/2596)
2025-11-20 14:02:02,842 [INFO] Skipping bill 1939378 - already processed (718/2596)
2025-11-20 14:02:02,842 [INFO] Skipping bill 1909925 - already processed (719/2596)
2025-11-20 14:02:02,842 [INFO] Skipping bill 1961341 - already processed (720/2596)
2025-11-20 14:02:02,842 [INFO] Skipping bill 1922403 - already processed (721/2596)
2025-11-20 14:02:02,842 [INFO] Skipping bill 1899660 - already processed (722/2596)
2025-11-20 14:02:02,842 [INFO] Skipping bill 1961327 - already processed (723/2596)
2025-11-20 14:02:02,842 [INFO] Skipping bill 1953223 - already processed (724/2596)
2025-11-20 14:02:02,842 [INFO] Skipping bill 1953246 - already processed (725/2596)
2025-11-20 14:02:02,842 [INFO] Skipping bill 1955835 - already processed (726/2596)
2025-11-20 14:02:02,842 [INFO] Skipping bill 1933617 - already processed (727/2596)
2025-11-20 14:02:02,842 [INFO] Skipping bill 1945335 - already processed (728/2596)
2025-11-20 14:02:02,842 [INFO] Skipping bill 1961410 - already processed (729/2596)
2025-11-20 14:02:02,842 [INFO] Skipping bill 1926508 - already processed (730/2596)
2025-11-20 14:02:02,842 [INFO] Skipping bill 1943426 - already processed (731/2596)
2025-11-20 14:02:02,842 [INFO] Skipping bill 1949808 - already processed (732/2596)
2025-11-20 14:02:02,842 [INFO] Skipping bill 1949848 - already processed (733/2596)
2025-11-20 14:02:02,843 [INFO] Skipping bill 1947517 - already processed (734/2596)
2025-11-20 14:02:02,843 [INFO] Skipping bill 1945267 - already processed (735/2596)
2025-11-20 14:02:02,843 [INFO] Skipping bill 1961205 - already processed (736/2596)
2025-11-20 14:02:02,843 [INFO] Skipping bill 1953214 - already processed (737/2596)
2025-11-20 14:02:02,843 [INFO] Skipping bill 1943446 - already processed (738/2596)
2025-11-20 14:02:02,843 [INFO] Skipping bill 1973042 - already processed (739/2596)
2025-11-20 14:02:02,843 [INFO] Skipping bill 1961299 - already processed (740/2596)
2025-11-20 14:02:02,843 [INFO] Skipping bill 1933601 - already processed (741/2596)
2025-11-20 14:02:02,843 [INFO] Skipping bill 1933621 - already processed (742/2596)
2025-11-20 14:02:02,843 [INFO] Processing 743/2596: Bill ID 1919287
2025-11-20 14:02:03,360 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:02:03,361 [ERROR] Failed to generate report for bill 1919287: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 128427 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:02:04,370 [INFO] Skipping bill 1933460 - already processed (744/2596)
2025-11-20 14:02:04,371 [INFO] Skipping bill 1933670 - already processed (745/2596)
2025-11-20 14:02:04,371 [INFO] Skipping bill 1922377 - already processed (746/2596)
2025-11-20 14:02:04,371 [INFO] Skipping bill 1735361 - already processed (747/2596)
2025-11-20 14:02:04,371 [INFO] Skipping bill 1742559 - already processed (748/2596)
2025-11-20 14:02:04,372 [INFO] Skipping bill 1775856 - already processed (749/2596)
2025-11-20 14:02:04,372 [INFO] Skipping bill 1738097 - already processed (750/2596)
2025-11-20 14:02:04,372 [INFO] Skipping bill 1794760 - already processed (751/2596)
2025-11-20 14:02:04,372 [INFO] Skipping bill 1736131 - already processed (752/2596)
2025-11-20 14:02:04,372 [INFO] Skipping bill 1885778 - already processed (753/2596)
2025-11-20 14:02:04,373 [INFO] Skipping bill 1808592 - already processed (754/2596)
2025-11-20 14:02:04,373 [INFO] Skipping bill 1878825 - already processed (755/2596)
2025-11-20 14:02:04,373 [INFO] Skipping bill 1884638 - already
processed (756/2596)
2025-11-20 14:02:04,373 [INFO] Skipping bill 1738996 - already processed (757/2596)
2025-11-20 14:02:04,373 [INFO] Skipping bill 1878228 - already processed (758/2596)
2025-11-20 14:02:04,373 [INFO] Skipping bill 1872865 - already processed (759/2596)
2025-11-20 14:02:04,373 [INFO] Skipping bill 1881167 - already processed (760/2596)
2025-11-20 14:02:04,373 [INFO] Skipping bill 1881743 - already processed (761/2596)
2025-11-20 14:02:04,373 [INFO] Skipping bill 1852772 - already processed (762/2596)
2025-11-20 14:02:04,373 [INFO] Skipping bill 1884104 - already processed (763/2596)
2025-11-20 14:02:04,373 [INFO] Skipping bill 1738794 - already processed (764/2596)
2025-11-20 14:02:04,373 [INFO] Skipping bill 1893080 - already processed (765/2596)
2025-11-20 14:02:04,373 [INFO] Skipping bill 1881922 - already processed (766/2596)
2025-11-20 14:02:04,374 [INFO] Skipping bill 1883178 - already processed (767/2596)
2025-11-20 14:02:04,374 [INFO] Skipping bill 1881587 - already processed (768/2596)
2025-11-20 14:02:04,374 [INFO] Skipping bill 1884487 - already processed (769/2596)
2025-11-20 14:02:04,374 [INFO] Skipping bill 1859182 - already processed (770/2596)
2025-11-20 14:02:04,374 [INFO] Skipping bill 1866861 - already processed (771/2596)
2025-11-20 14:02:04,374 [INFO] Processing 772/2596: Bill ID 1891836
2025-11-20 14:02:05,047 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:02:05,048 [ERROR] Failed to generate report for bill 1891836: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 144997 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:02:06,059 [INFO] Skipping bill 1883738 - already processed (773/2596)
2025-11-20 14:02:06,060 [INFO] Skipping bill 1682652 - already processed (774/2596)
2025-11-20 14:02:06,060 [INFO] Skipping bill 1742464 - already processed (775/2596)
2025-11-20 14:02:06,060 [INFO] Skipping bill 1728366 - already processed (776/2596)
2025-11-20 14:02:06,061 [INFO] Skipping bill 1726524 - already processed (777/2596)
2025-11-20 14:02:06,061 [INFO] Skipping bill 1737208 - already processed (778/2596)
2025-11-20 14:02:06,061 [INFO] Skipping bill 1749398 - already processed (779/2596)
2025-11-20 14:02:06,061 [INFO] Skipping bill 1738008 - already processed (780/2596)
2025-11-20 14:02:06,061 [INFO] Skipping bill 1735894 - already processed (781/2596)
2025-11-20 14:02:06,061 [INFO] Skipping bill 1841416 - already processed (782/2596)
2025-11-20 14:02:06,061 [INFO] Skipping bill 1736739 - already processed (783/2596)
2025-11-20 14:02:06,062 [INFO] Skipping bill 1737586 - already processed (784/2596)
2025-11-20 14:02:06,062 [INFO] Skipping bill 1884557 - already
processed (785/2596)
2025-11-20 14:02:06,062 [INFO] Processing 786/2596: Bill ID 1875094
2025-11-20 14:02:12,680 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:02:12,682 [ERROR] Failed to generate report for bill 1875094: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 281291 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:02:13,688 [INFO] Processing 787/2596: Bill ID 1755026
2025-11-20 14:02:14,421 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:02:14,422 [ERROR] Failed to generate report for bill 1755026: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 211752 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:02:15,429 [INFO] Processing 788/2596: Bill ID 1871591
2025-11-20 14:02:16,263 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:02:16,265 [ERROR] Failed to generate report for bill 1871591: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 247438 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 247438 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-20 14:02:17,272 [INFO] Processing 789/2596: Bill ID 1760451 2025-11-20 14:02:18,032 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-20 14:02:18,033 [ERROR] Failed to generate report for bill 1760451: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 254452 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 254452 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-20 14:02:19,045 [INFO] Processing 790/2596: Bill ID 1880948 2025-11-20 14:02:19,858 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-20 14:02:19,859 [ERROR] Failed to generate report for bill 1880948: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 280764 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 280764 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-20 14:02:19,905 [INFO] Saved 2566 reports to data/bill_reports.json 2025-11-20 14:02:19,905 [INFO] Progress: 790/2596 - Processed: 2, Skipped: 751, Errors: 37 2025-11-20 14:02:20,910 [INFO] Processing 791/2596: Bill ID 1775764 2025-11-20 14:02:21,996 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-20 14:02:21,997 [ERROR] Failed to generate report for bill 1775764: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 323686 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 323686 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-20 14:02:23,005 [INFO] Processing 792/2596: Bill ID 1884634 2025-11-20 14:02:24,252 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-20 14:02:24,254 [ERROR] Failed to generate report for bill 1884634: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 362014 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 362014 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-20 14:02:25,264 [INFO] Skipping bill 2000828 - already processed (793/2596) 2025-11-20 14:02:25,264 [INFO] Skipping bill 2001551 - already processed (794/2596) 2025-11-20 14:02:25,264 [INFO] Skipping bill 1997130 - already processed (795/2596) 2025-11-20 14:02:25,264 [INFO] Skipping bill 2046647 - already processed (796/2596) 2025-11-20 14:02:25,265 [INFO] Skipping bill 2004206 - already processed (797/2596) 2025-11-20 14:02:25,265 [INFO] Skipping bill 1998184 - already processed (798/2596) 2025-11-20 14:02:25,265 [INFO] Skipping bill 2002506 - already processed (799/2596) 2025-11-20 14:02:25,265 [INFO] Skipping bill 2002695 - already processed (800/2596) 2025-11-20 14:02:25,265 [INFO] Skipping bill 2047070 - already processed (801/2596) 2025-11-20 14:02:25,265 [INFO] Skipping bill 2002923 - already processed (802/2596) 2025-11-20 14:02:25,265 [INFO] Skipping bill 1998946 - already processed (803/2596) 2025-11-20 14:02:25,265 [INFO] Skipping bill 1997259 - already processed (804/2596) 2025-11-20 14:02:25,265 [INFO] Skipping bill 2001269 - already 
processed (805/2596) 2025-11-20 14:02:25,265 [INFO] Skipping bill 2000625 - already processed (806/2596) 2025-11-20 14:02:25,265 [INFO] Skipping bill 2002705 - already processed (807/2596) 2025-11-20 14:02:25,265 [INFO] Skipping bill 2046676 - already processed (808/2596) 2025-11-20 14:02:25,265 [INFO] Skipping bill 2046660 - already processed (809/2596) 2025-11-20 14:02:25,265 [INFO] Skipping bill 2003933 - already processed (810/2596) 2025-11-20 14:02:25,265 [INFO] Skipping bill 1997268 - already processed (811/2596) 2025-11-20 14:02:25,265 [INFO] Skipping bill 2019724 - already processed (812/2596) 2025-11-20 14:02:25,266 [INFO] Skipping bill 1997990 - already processed (813/2596) 2025-11-20 14:02:25,266 [INFO] Skipping bill 1998675 - already processed (814/2596) 2025-11-20 14:02:25,266 [INFO] Skipping bill 2002243 - already processed (815/2596) 2025-11-20 14:02:25,266 [INFO] Skipping bill 1997584 - already processed (816/2596) 2025-11-20 14:02:25,266 [INFO] Skipping bill 2002929 - already processed (817/2596) 2025-11-20 14:02:25,266 [INFO] Skipping bill 2001175 - already processed (818/2596) 2025-11-20 14:02:25,266 [INFO] Skipping bill 1998815 - already processed (819/2596) 2025-11-20 14:02:25,266 [INFO] Skipping bill 1998575 - already processed (820/2596) 2025-11-20 14:02:25,266 [INFO] Skipping bill 1999210 - already processed (821/2596) 2025-11-20 14:02:25,266 [INFO] Skipping bill 2001320 - already processed (822/2596) 2025-11-20 14:02:25,266 [INFO] Processing 823/2596: Bill ID 2053304 2025-11-20 14:02:36,229 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK" 2025-11-20 14:02:36,232 [INFO] Skipping bill 2001993 - already processed (824/2596) 2025-11-20 14:02:36,232 [INFO] Skipping bill 1999288 - already processed (825/2596) 2025-11-20 14:02:36,232 [INFO] Skipping bill 1998331 - already processed (826/2596) 2025-11-20 14:02:36,232 [INFO] Skipping bill 2003746 - already processed (827/2596) 2025-11-20 14:02:36,232 [INFO] 
Skipping bill 1927181 - already processed (828/2596) 2025-11-20 14:02:36,232 [INFO] Skipping bill 2030259 - already processed (829/2596) 2025-11-20 14:02:36,232 [INFO] Skipping bill 1997622 - already processed (830/2596) 2025-11-20 14:02:36,232 [INFO] Processing 831/2596: Bill ID 2028594 2025-11-20 14:02:37,153 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-20 14:02:37,154 [ERROR] Failed to generate report for bill 2028594: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 252856 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... 
**kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... **kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return 
    self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 252856 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:02:38,163 [INFO] Processing 832/2596: Bill ID 2038620
2025-11-20 14:02:39,153 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:02:39,154 [ERROR] Failed to generate report for bill 2038620: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 311445 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 311445 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:02:40,163 [INFO] Processing 833/2596: Bill ID 2024637
2025-11-20 14:02:41,002 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:02:41,007 [ERROR] Failed to generate report for bill 2024637: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 218599 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:02:42,017 [INFO] Skipping bill 1780182 - already processed (834/2596)
2025-11-20 14:02:42,017 [INFO] Skipping bill 1895692 - already processed (835/2596)
2025-11-20 14:02:42,018 [INFO] Skipping bill 1780190 - already processed (836/2596)
2025-11-20 14:02:42,018 [INFO] Skipping bill 1780196 - already processed (837/2596)
2025-11-20 14:02:42,018 [INFO] Skipping bill 1780166 - already processed (838/2596)
2025-11-20 14:02:42,018 [INFO] Skipping bill 1888099 - already processed (839/2596)
2025-11-20 14:02:42,018 [INFO] Skipping bill 1852983 - already processed (840/2596)
2025-11-20 14:02:42,018 [INFO] Skipping bill 1852813 - already processed (841/2596)
2025-11-20 14:02:42,019 [INFO] Skipping bill 2037995 - already processed (842/2596)
2025-11-20 14:02:42,019 [INFO] Skipping bill 2043787 - already processed (843/2596)
2025-11-20 14:02:42,019 [INFO] Skipping bill 2035241 - already processed (844/2596)
2025-11-20 14:02:42,019 [INFO] Skipping bill 2035278 - already processed (845/2596)
2025-11-20 14:02:42,019 [INFO] Skipping bill 2038014 - already processed (846/2596)
2025-11-20 14:02:42,019 [INFO] Skipping bill 2009885 - already processed (847/2596)
2025-11-20 14:02:42,019 [INFO] Skipping bill 2035768 - already processed (848/2596)
2025-11-20 14:02:42,019 [INFO] Skipping bill 2025453 - already processed (849/2596)
2025-11-20 14:02:42,020 [INFO] Skipping bill 2038856 - already processed (850/2596)
2025-11-20 14:02:42,020 [INFO] Skipping bill 2009892 - already processed (851/2596)
2025-11-20 14:02:42,020 [INFO] Skipping bill 1861260 - already processed (852/2596)
2025-11-20 14:02:42,020 [INFO] Skipping bill 1856334 - already processed (853/2596)
2025-11-20 14:02:42,020 [INFO] Skipping bill 1856821 - already processed (854/2596)
2025-11-20 14:02:42,020 [INFO] Skipping bill 1864646 - already processed (855/2596)
2025-11-20 14:02:42,020 [INFO] Skipping bill 1860647 - already processed (856/2596)
2025-11-20 14:02:42,021 [INFO] Skipping bill 1707979 - already processed (857/2596)
2025-11-20 14:02:42,022 [INFO] Skipping bill 1643078 - already processed (858/2596)
2025-11-20 14:02:42,022 [INFO] Skipping bill 1651590 - already processed (859/2596)
2025-11-20 14:02:42,022 [INFO] Skipping bill 1852405 - already processed (860/2596)
2025-11-20 14:02:42,022 [INFO] Skipping bill 1852812 - already processed (861/2596)
2025-11-20 14:02:42,022 [INFO] Skipping bill 1858711 - already processed (862/2596)
2025-11-20 14:02:42,022 [INFO] Skipping bill 1853103 - already processed (863/2596)
2025-11-20 14:02:42,022 [INFO] Skipping bill 1851979 - already processed (864/2596)
2025-11-20 14:02:42,022 [INFO] Skipping bill 1859186 - already processed (865/2596)
2025-11-20 14:02:42,022 [INFO] Skipping bill 1740589 - already processed (866/2596)
2025-11-20 14:02:42,022 [INFO] Skipping bill 1741802 - already processed (867/2596)
2025-11-20 14:02:42,022 [INFO] Skipping bill 1860410 - already processed (868/2596)
2025-11-20 14:02:42,022 [INFO] Skipping bill 1957720 - already processed (869/2596)
2025-11-20 14:02:42,022 [INFO] Skipping bill 1974786 - already processed (870/2596)
2025-11-20 14:02:42,022 [INFO] Skipping bill 1989670 - already processed (871/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 1979597 - already processed (872/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 1984757 - already processed (873/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 2009204 - already processed (874/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 2015254 - already processed (875/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 1974962 - already processed (876/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 2009276 - already processed (877/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 1989103 - already processed (878/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 1984950 - already processed (879/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 1975975 - already processed (880/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 2004610 - already processed (881/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 2004938 - already processed (882/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 1992603 - already processed (883/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 1992640 - already processed (884/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 1996293 - already processed (885/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 2011831 - already processed (886/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 2012661 - already processed (887/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 1950967 - already processed (888/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 1994787 - already processed (889/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 2011159 - already processed (890/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 2006411 - already processed (891/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 2011256 - already processed (892/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 2004789 - already processed (893/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 1981280 - already processed (894/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 2009071 - already processed (895/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 1967748 - already processed (896/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 1707150 - already processed (897/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 1669781 - already processed (898/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 1643012 - already processed (899/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 1848903 - already processed (900/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 1848260 - already processed (901/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 1820844 - already processed (902/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 1851922 - already processed (903/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 1850740 - already processed (904/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 1838535 - already processed (905/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 1851828 - already processed (906/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 1863177 - already processed (907/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 1852015 - already processed (908/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 1818886 - already processed (909/2596)
2025-11-20 14:02:42,023 [INFO] Skipping bill 1852513 - already processed (910/2596)
2025-11-20 14:02:42,023 [INFO] Processing 911/2596: Bill ID 1851836
2025-11-20 14:02:42,809 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:02:42,811 [ERROR] Failed to generate report for bill 1851836: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 185865 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:02:43,825 [INFO] Skipping bill 1933975 - already processed (912/2596)
2025-11-20 14:02:43,825 [INFO] Skipping bill 1935092 - already processed (913/2596)
2025-11-20 14:02:43,825 [INFO] Skipping bill 1937681 - already processed (914/2596)
2025-11-20 14:02:43,825 [INFO] Skipping bill 1927333 - already processed (915/2596)
2025-11-20 14:02:43,825 [INFO] Skipping bill 1936069 - already processed (916/2596)
2025-11-20 14:02:43,825 [INFO] Skipping bill 1940299 - already processed (917/2596)
2025-11-20 14:02:43,825 [INFO] Skipping bill 1911677 - already processed (918/2596)
2025-11-20 14:02:43,826 [INFO] Skipping bill 1929973 - already processed (919/2596)
2025-11-20 14:02:43,826 [INFO] Skipping bill 1910359 - already processed (920/2596)
2025-11-20 14:02:43,826 [INFO] Skipping bill 1934687 - already processed (921/2596)
2025-11-20 14:02:43,826 [INFO] Skipping bill 1930038 - already processed (922/2596)
2025-11-20 14:02:43,826 [INFO] Skipping bill 1925325 - already processed (923/2596)
2025-11-20 14:02:43,826 [INFO] Skipping bill 1933890 - already processed (924/2596)
2025-11-20 14:02:43,826 [INFO] Skipping bill 1934898 - already processed (925/2596)
2025-11-20 14:02:43,826 [INFO] Skipping bill 2034194 - already processed (926/2596)
2025-11-20 14:02:43,826 [INFO] Skipping bill 1972440 - already processed (927/2596)
2025-11-20 14:02:43,826 [INFO] Skipping bill 1934020 - already processed (928/2596)
2025-11-20 14:02:43,826 [INFO] Skipping bill 1912210 - already processed (929/2596)
2025-11-20 14:02:43,826 [INFO] Skipping bill 1634819 - already processed (930/2596)
2025-11-20 14:02:43,826 [INFO] Skipping bill 1634779 - already processed (931/2596)
2025-11-20 14:02:43,826 [INFO] Skipping bill 1836873 - already processed (932/2596)
2025-11-20 14:02:43,826 [INFO] Skipping bill 1834678 - already processed (933/2596)
2025-11-20 14:02:43,827 [INFO] Skipping bill 1790707 - already processed (934/2596)
2025-11-20 14:02:43,827 [INFO] Skipping bill 1852775 - already processed (935/2596)
2025-11-20 14:02:43,827 [INFO] Skipping bill 1897040 - already processed (936/2596)
2025-11-20 14:02:43,827 [INFO] Skipping bill 1898466 - already processed (937/2596)
2025-11-20 14:02:43,827 [INFO] Skipping bill 1893847 - already processed (938/2596)
2025-11-20 14:02:43,827 [INFO] Skipping bill 1983834 - already processed (939/2596)
2025-11-20 14:02:43,827 [INFO] Skipping bill 1988287 - already processed (940/2596)
2025-11-20 14:02:43,827 [INFO] Skipping bill 1894415 - already processed (941/2596)
2025-11-20 14:02:43,827 [INFO] Skipping bill 1917533 - already processed (942/2596)
2025-11-20 14:02:43,827 [INFO] Skipping bill 1900966 - already processed (943/2596)
2025-11-20 14:02:43,827 [INFO] Skipping bill 1972401 - already processed (944/2596)
2025-11-20 14:02:43,827 [INFO] Skipping bill 1988699 - already processed (945/2596)
2025-11-20 14:02:43,827 [INFO] Skipping bill 1988844 - already processed (946/2596)
2025-11-20 14:02:43,828 [INFO] Skipping bill 1894126 - already processed (947/2596)
2025-11-20 14:02:43,828 [INFO] Skipping bill 1974757 - already processed (948/2596)
2025-11-20 14:02:43,828 [INFO] Skipping bill 1717719 - already processed (949/2596)
2025-11-20 14:02:43,828 [INFO] Skipping bill 1912107 - already processed (950/2596)
2025-11-20 14:02:43,828 [INFO] Skipping bill 1941091 - already processed (951/2596)
2025-11-20 14:02:43,828 [INFO] Skipping bill 1916250 - already processed (952/2596)
2025-11-20 14:02:43,828 [INFO] Skipping bill 1974033 - already processed (953/2596)
2025-11-20 14:02:43,828 [INFO] Skipping bill 1895954 - already processed (954/2596)
2025-11-20 14:02:43,828 [INFO] Skipping bill 1974042 - already processed (955/2596)
2025-11-20 14:02:43,828 [INFO] Skipping bill 1981849 - already processed (956/2596)
2025-11-20 14:02:43,828 [INFO] Skipping bill 1979780 - already processed (957/2596)
2025-11-20 14:02:43,828 [INFO] Skipping bill 1896111 - already processed (958/2596)
2025-11-20 14:02:43,828 [INFO] Skipping bill 1971592 - already processed (959/2596)
2025-11-20 14:02:43,828 [INFO] Skipping bill 1971640 - already processed (960/2596)
2025-11-20 14:02:43,828 [INFO] Skipping bill 1896588 - already processed (961/2596)
2025-11-20 14:02:43,828 [INFO] Skipping bill 1981663 - already processed (962/2596)
2025-11-20 14:02:43,828 [INFO] Skipping bill 1867796 - already processed (963/2596)
2025-11-20 14:02:43,829 [INFO] Skipping bill 1867828 - already processed (964/2596)
2025-11-20 14:02:43,829 [INFO] Skipping bill 1813907 - already processed (965/2596)
2025-11-20 14:02:43,829 [INFO] Skipping bill 1814493 - already processed (966/2596)
2025-11-20 14:02:43,829 [INFO] Skipping bill 1867439 - already processed (967/2596)
2025-11-20 14:02:43,829 [INFO] Skipping bill 1814241 - already processed (968/2596)
2025-11-20 14:02:43,829 [INFO] Skipping bill 1935238 - already processed (969/2596)
2025-11-20 14:02:43,829 [INFO] Skipping bill 1908945 - already processed (970/2596)
2025-11-20 14:02:43,829 [INFO] Skipping bill 1980982 - already processed (971/2596)
2025-11-20 14:02:43,829 [INFO] Skipping bill 1934094 - already processed (972/2596)
2025-11-20 14:02:43,829 [INFO] Skipping bill 1931194 - already processed (973/2596)
2025-11-20 14:02:43,829 [INFO] Skipping bill 1915534 - already processed (974/2596)
2025-11-20 14:02:43,829 [INFO] Skipping bill 1927914 - already processed (975/2596)
2025-11-20 14:02:43,829 [INFO] Skipping bill 1710815 - already processed (976/2596)
2025-11-20 14:02:43,829 [INFO] Skipping bill 1748189 - already processed (977/2596)
2025-11-20 14:02:43,829 [INFO] Skipping bill 1746365 - already processed (978/2596)
2025-11-20 14:02:43,829 [INFO] Skipping bill 1965229 - already processed (979/2596)
2025-11-20 14:02:43,829 [INFO] Skipping bill 1999738 - already processed (980/2596)
2025-11-20 14:02:43,829 [INFO] Skipping bill 1989648 - already processed (981/2596)
2025-11-20 14:02:43,829 [INFO] Skipping bill 1946188 - already processed (982/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1892638 - already processed (983/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1944647 - already processed (984/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1983017 - already processed (985/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1954626 - already processed (986/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1977147 - already processed (987/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 2013424 - already processed (988/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 2013451 - already processed (989/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1953001 - already processed (990/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1982880 - already processed (991/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1989793 - already processed (992/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1954479 - already processed (993/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 2031601 - already processed (994/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 2009433 - already processed (995/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1901514 - already processed (996/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1651925 - already processed (997/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1793373 - already processed (998/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1793039 - already processed (999/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1792971 - already processed (1000/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1793409 - already processed (1001/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1793958 - already processed (1002/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1793284 - already processed (1003/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1938552 - already processed (1004/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1922870 - already processed (1005/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1803710 - already processed (1006/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1889722 - already processed (1007/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1892083 - already processed (1008/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1889346 - already processed (1009/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1889719 - already processed (1010/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1889335 - already processed (1011/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1897572 - already processed (1012/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1887538 - already processed (1013/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1887101 - already processed (1014/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1888624 - already processed (1015/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1877673 - already processed (1016/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1897803 - already processed (1017/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1889758 - already processed (1018/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1897565 - already processed (1019/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1853521 - already processed (1020/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1864839 - already processed (1021/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1879513 - already processed (1022/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1878078 - already processed (1023/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 2013662 - already processed (1024/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1897603 - already processed (1025/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1881186 - already processed (1026/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1983797 - already processed (1027/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 2023789 - already processed (1028/2596)
2025-11-20 14:02:43,830 [INFO] Skipping bill 1878049 - already processed (1029/2596)
2025-11-20 14:02:43,831 [INFO] Processing 1030/2596: Bill ID 2052496
2025-11-20 14:02:58,938 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2025-11-20 14:02:59,005 [INFO] Saved 2568 reports to data/bill_reports.json
2025-11-20 14:02:59,006 [INFO] Progress: 1030/2596 - Processed: 4, Skipped: 983, Errors: 43
2025-11-20 14:02:59,006 [INFO] Skipping bill 1807241 - already processed (1031/2596)
2025-11-20 14:02:59,006 [INFO] Skipping bill 1881870 - already processed (1032/2596)
2025-11-20 14:02:59,006 [INFO] Skipping bill 1881843 - already processed (1033/2596)
2025-11-20 14:02:59,006 [INFO] Skipping bill 2030230 - already processed (1034/2596)
2025-11-20 14:02:59,006 [INFO] Skipping bill 2022901 - already processed (1035/2596)
2025-11-20 14:02:59,006 [INFO] Skipping bill 1896879 - already processed (1036/2596)
2025-11-20 14:02:59,006 [INFO] Skipping bill 1889701 - already processed (1037/2596)
2025-11-20 14:02:59,006 [INFO] Skipping bill 1970250 - already processed (1038/2596)
2025-11-20 14:02:59,006 [INFO] Skipping bill 2037153 - already processed (1039/2596)
2025-11-20 14:02:59,006 [INFO] Skipping bill 2013635 - already processed (1040/2596)
2025-11-20 14:02:59,006 [INFO] Skipping bill 1883140 - already processed (1041/2596)
2025-11-20 14:02:59,006 [INFO] Skipping bill 1853367 - already processed (1042/2596)
2025-11-20 14:02:59,006 [INFO] Skipping bill 1801284 - already processed (1043/2596)
2025-11-20 14:02:59,006 [INFO] Skipping bill 1889518 - already processed (1044/2596)
2025-11-20 14:02:59,006 [INFO] Skipping bill 1888073 - already processed (1045/2596)
2025-11-20 14:02:59,006 [INFO] Processing 1046/2596: Bill ID 2052173
2025-11-20 14:03:10,740 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2025-11-20 14:03:10,743 [INFO] Skipping bill 2047520 - already processed (1047/2596)
2025-11-20 14:03:10,744 [INFO] Skipping bill 1889754 - already processed (1048/2596)
2025-11-20 14:03:10,744 [INFO] Skipping bill 1835303 - already processed (1049/2596)
2025-11-20 14:03:10,744 [INFO] Skipping bill 1949479 - already processed (1050/2596)
2025-11-20 14:03:10,744 [INFO] Skipping bill 2022816 - already processed (1051/2596)
2025-11-20 14:03:10,744 [INFO] Skipping bill 1872559 - already processed (1052/2596)
2025-11-20 14:03:10,744 [INFO] Skipping bill 1875857 - already processed (1053/2596)
2025-11-20 14:03:10,744 [INFO] Skipping bill 1876467 - already processed (1054/2596)
2025-11-20 14:03:10,744 [INFO] Skipping bill 1876586 - already processed (1055/2596)
2025-11-20 14:03:10,744 [INFO] Skipping bill 2038328 - already processed (1056/2596)
2025-11-20 14:03:10,744 [INFO] Skipping bill 1878887 - already processed (1057/2596)
2025-11-20 14:03:10,744 [INFO] Skipping bill 1853095 - already processed (1058/2596)
2025-11-20 14:03:10,744 [INFO] Skipping bill 1805407 - already processed (1059/2596)
2025-11-20 14:03:10,744 [INFO] Skipping bill 2022907 - already processed (1060/2596)
2025-11-20 14:03:10,744 [INFO] Skipping bill 1949574 - already processed (1061/2596)
2025-11-20 14:03:10,744 [INFO] Skipping bill 1844841 - already processed (1062/2596)
2025-11-20 14:03:10,744 [INFO] Skipping bill 1864295 - already processed (1063/2596)
2025-11-20 14:03:10,744 [INFO] Skipping bill 1881176 - already processed (1064/2596)
2025-11-20 14:03:10,744 [INFO] Skipping bill 1837365 - already processed (1065/2596)
2025-11-20 14:03:10,744 [INFO] Skipping bill 1837180 - already processed (1066/2596)
2025-11-20 14:03:10,744 [INFO] Skipping bill 1887099 - already processed (1067/2596)
2025-11-20 14:03:10,744 [INFO] Skipping bill 2028679 - already processed (1068/2596)
2025-11-20 14:03:10,744 [INFO] Skipping bill 2030354 - already processed (1069/2596)
2025-11-20 14:03:10,744 [INFO] Skipping bill 2008967 - already processed (1070/2596)
2025-11-20 14:03:10,744 [INFO] Skipping bill 1964010 - already processed (1071/2596)
2025-11-20 14:03:10,744 [INFO] Skipping bill 1882474 - already processed (1072/2596)
2025-11-20 14:03:10,745 [INFO] Skipping bill 1881178 - already processed (1073/2596)
2025-11-20 14:03:10,745 [INFO] Skipping bill 2037324 - already processed (1074/2596)
2025-11-20 14:03:10,745 [INFO] Skipping bill 1806224 - already processed (1075/2596)
2025-11-20 14:03:10,745 [INFO] Skipping bill 1837135 - already processed (1076/2596)
2025-11-20 14:03:10,745 [INFO] Skipping bill 1805930 - already processed (1077/2596)
2025-11-20 14:03:10,745 [INFO] Skipping bill 1803406 - already processed (1078/2596)
2025-11-20 14:03:10,745 [INFO] Skipping bill 1883773 - already processed (1079/2596)
2025-11-20 14:03:10,745 [INFO] Skipping bill 1994137 - already processed (1080/2596)
2025-11-20 14:03:10,745 [INFO] Skipping bill 1881306 - already processed (1081/2596)
2025-11-20 14:03:10,745 [INFO] Skipping bill 1889726 - already processed (1082/2596)
2025-11-20 14:03:10,745 [INFO] Skipping bill 1889593 - already processed (1083/2596)
2025-11-20 14:03:10,745 [INFO] Processing 1084/2596: Bill ID 1883494
2025-11-20 14:03:11,507 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:03:11,511 [ERROR] Failed to generate report for bill 1883494: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 245791 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 245791 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-20 14:03:12,524 [INFO] Processing 1085/2596: Bill ID 1883535 2025-11-20 14:03:13,299 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-20 14:03:13,302 [ERROR] Failed to generate report for bill 1883535: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 244625 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:03:14,310 [INFO] Processing 1086/2596: Bill ID 2038569
2025-11-20 14:03:15,142 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:03:15,143 [ERROR] Failed to generate report for bill 2038569: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 248177 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:03:16,151 [INFO] Processing 1087/2596: Bill ID 2038571
2025-11-20 14:03:16,985 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:03:16,987 [ERROR] Failed to generate report for bill 2038571: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 248161 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:03:17,995 [INFO] Skipping bill 1666814 - already processed (1088/2596)
2025-11-20 14:03:17,996 [INFO] Skipping bill 1722011 - already processed (1089/2596)
2025-11-20 14:03:17,996 [INFO] Skipping bill 1724398 - already processed (1090/2596)
2025-11-20 14:03:17,996 [INFO] Skipping bill 1676083 - already processed (1091/2596)
2025-11-20 14:03:17,996 [INFO] Skipping bill 1824011 - already processed (1092/2596)
2025-11-20 14:03:17,996 [INFO] Skipping bill 1824228 - already processed (1093/2596)
2025-11-20 14:03:17,996 [INFO] Skipping bill 1824028 - already processed (1094/2596)
2025-11-20 14:03:17,996 [INFO] Skipping bill 1834441 - already processed (1095/2596)
2025-11-20 14:03:17,997 [INFO] Skipping bill 1908238 - already processed (1096/2596)
2025-11-20 14:03:17,997 [INFO] Skipping bill 1967640 - already processed (1097/2596)
2025-11-20 14:03:17,997 [INFO] Skipping bill 1935448 - already processed (1098/2596)
2025-11-20 14:03:17,997 [INFO] Skipping bill 1987611 - already processed (1099/2596)
2025-11-20 14:03:17,997 [INFO] Skipping bill
1964156 - already processed (1100/2596)
2025-11-20 14:03:17,997 [INFO] Skipping bill 1947221 - already processed (1101/2596)
2025-11-20 14:03:17,997 [INFO] Skipping bill 1943110 - already processed (1102/2596)
2025-11-20 14:03:17,997 [INFO] Skipping bill 1964415 - already processed (1103/2596)
2025-11-20 14:03:17,997 [INFO] Skipping bill 1996731 - already processed (1104/2596)
2025-11-20 14:03:17,998 [INFO] Skipping bill 1944685 - already processed (1105/2596)
2025-11-20 14:03:17,998 [INFO] Skipping bill 1936020 - already processed (1106/2596)
2025-11-20 14:03:17,998 [INFO] Skipping bill 1947285 - already processed (1107/2596)
2025-11-20 14:03:17,998 [INFO] Skipping bill 1949498 - already processed (1108/2596)
2025-11-20 14:03:17,998 [INFO] Skipping bill 1933085 - already processed (1109/2596)
2025-11-20 14:03:17,998 [INFO] Skipping bill 1881403 - already processed (1110/2596)
2025-11-20 14:03:17,998 [INFO] Skipping bill 1878440 - already processed (1111/2596)
2025-11-20 14:03:17,998 [INFO] Skipping bill 1874641 - already processed (1112/2596)
2025-11-20 14:03:17,998 [INFO] Skipping bill 1780447 - already processed (1113/2596)
2025-11-20 14:03:17,998 [INFO] Skipping bill 1829313 - already processed (1114/2596)
2025-11-20 14:03:17,998 [INFO] Skipping bill 1876168 - already processed (1115/2596)
2025-11-20 14:03:17,998 [INFO] Skipping bill 1878357 - already processed (1116/2596)
2025-11-20 14:03:17,998 [INFO] Skipping bill 1801087 - already processed (1117/2596)
2025-11-20 14:03:17,999 [INFO] Skipping bill 1878533 - already processed (1118/2596)
2025-11-20 14:03:17,999 [INFO] Skipping bill 1781971 - already processed (1119/2596)
2025-11-20 14:03:17,999 [INFO] Skipping bill 1836944 - already processed (1120/2596)
2025-11-20 14:03:17,999 [INFO] Skipping bill 1773855 - already processed (1121/2596)
2025-11-20 14:03:17,999 [INFO] Skipping bill 1774758 - already processed (1122/2596)
2025-11-20 14:03:17,999 [INFO] Skipping bill 1779189 - already processed (1123/2596)
2025-11-20 14:03:17,999 [INFO] Skipping bill 1780403 - already processed (1124/2596)
2025-11-20 14:03:17,999 [INFO] Skipping bill 1882902 - already processed (1125/2596)
2025-11-20 14:03:17,999 [INFO] Skipping bill 1761023 - already processed (1126/2596)
2025-11-20 14:03:17,999 [INFO] Skipping bill 1763282 - already processed (1127/2596)
2025-11-20 14:03:17,999 [INFO] Skipping bill 1756406 - already processed (1128/2596)
2025-11-20 14:03:17,999 [INFO] Skipping bill 1721336 - already processed (1129/2596)
2025-11-20 14:03:17,999 [INFO] Skipping bill 1865663 - already processed (1130/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1884682 - already processed (1131/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1879124 - already processed (1132/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1813023 - already processed (1133/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1780572 - already processed (1134/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1796023 - already processed (1135/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1796213 - already processed (1136/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1841005 - already processed (1137/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1861287 - already processed (1138/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1878752 - already processed (1139/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1813101 - already processed (1140/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1768635 - already processed (1141/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1767924 - already processed (1142/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1641754 - already processed (1143/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1882889 - already processed (1144/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1729291 - already processed (1145/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1773906 - already processed (1146/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1839957 - already processed (1147/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1843965 - already processed (1148/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1879710 - already processed (1149/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1763606 - already processed (1150/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1780432 - already processed (1151/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1812765 - already processed (1152/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1836858 - already processed (1153/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1864293 - already processed (1154/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1770114 - already processed (1155/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1733127 - already processed (1156/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1762026 - already processed (1157/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1829537 - already processed (1158/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1878142 - already processed (1159/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1880765 - already processed (1160/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1762041 - already processed (1161/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1646230 - already processed (1162/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1762213 - already processed (1163/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1779393 - already processed (1164/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1878544 - already processed (1165/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1780459 - already processed (1166/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1781963 - already processed (1167/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1758293 - already processed (1168/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1768495 - already processed (1169/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1773860 - already processed (1170/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1864226 - already processed (1171/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1878400 - already processed (1172/2596)
2025-11-20 14:03:18,000 [INFO] Skipping bill 1879652 - already processed (1173/2596)
2025-11-20 14:03:18,001 [INFO] Skipping bill 1865798 - already processed (1174/2596)
2025-11-20 14:03:18,001 [INFO] Skipping bill 1862795 - already processed (1175/2596)
2025-11-20 14:03:18,001 [INFO] Skipping bill 1710243 - already processed (1176/2596)
2025-11-20 14:03:18,001 [INFO] Skipping bill 1818495 - already processed (1177/2596)
2025-11-20 14:03:18,001 [INFO] Skipping bill 1775864 - already processed (1178/2596)
2025-11-20 14:03:18,001 [INFO] Skipping bill 1856196 - already processed (1179/2596)
2025-11-20 14:03:18,001 [INFO] Skipping bill 1791835 - already processed (1180/2596)
2025-11-20 14:03:18,001 [INFO] Skipping bill 1658709 - already processed (1181/2596)
2025-11-20 14:03:18,001 [INFO] Skipping bill 1695187 - already processed (1182/2596)
2025-11-20 14:03:18,001 [INFO] Processing 1183/2596: Bill ID 1818780
2025-11-20 14:03:18,521 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:03:18,523 [ERROR] Failed to generate report for bill 1818780: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 137401 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:03:19,529 [INFO] Processing 1184/2596: Bill ID 1818766
2025-11-20 14:03:23,028 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:03:23,030 [ERROR] Failed to generate report for bill 1818766: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 137403 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:03:24,038 [INFO] Skipping bill 1752559 - already processed (1185/2596)
2025-11-20 14:03:24,038 [INFO] Skipping bill 1882942 - already processed (1186/2596)
2025-11-20 14:03:24,038 [INFO] Skipping bill 1766908 - already processed (1187/2596)
2025-11-20 14:03:24,038 [INFO] Skipping bill 1691064 - already processed (1188/2596)
2025-11-20 14:03:24,038 [INFO] Processing 1189/2596: Bill ID 1690030
2025-11-20 14:03:25,690 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:03:25,692 [ERROR] Failed to generate report for bill 1690030: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 566694 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:03:26,702 [INFO] Processing 1190/2596: Bill ID 1690727
2025-11-20 14:03:28,044 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:03:28,046 [ERROR] Failed to generate report for bill 1690727: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 566696 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:03:28,088 [INFO] Saved 2569 reports to data/bill_reports.json
2025-11-20 14:03:28,089 [INFO] Progress: 1190/2596 - Processed: 5, Skipped: 1134, Errors: 51
2025-11-20 14:03:29,094 [INFO] Processing 1191/2596: Bill ID 1875409
2025-11-20 14:03:32,249 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:03:32,253 [ERROR] Failed to generate report for bill 1875409: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1351641 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:03:33,262 [INFO] Processing 1192/2596: Bill ID 1835820
2025-11-20 14:03:37,160 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:03:37,162 [ERROR] Failed to generate report for bill 1835820: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1351620 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:03:38,171 [INFO] Processing 1193/2596: Bill ID 1818459
2025-11-20 14:03:40,948 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:03:40,950 [ERROR] Failed to generate report for bill 1818459: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1029309 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:03:41,961 [INFO] Skipping bill 2009915 - already processed (1194/2596)
2025-11-20 14:03:41,962 [INFO] Skipping bill 1917775 - already processed (1195/2596)
2025-11-20 14:03:41,962 [INFO] Skipping bill 1902981 - already processed (1196/2596)
2025-11-20 14:03:41,962 [INFO] Skipping bill 1908626 - already processed (1197/2596)
2025-11-20 14:03:41,962 [INFO] Skipping bill 1903647 - already processed (1198/2596)
2025-11-20 14:03:41,962 [INFO] Skipping bill 1993863 - already processed (1199/2596)
2025-11-20 14:03:41,962 [INFO] Skipping bill 2015656 - already processed (1200/2596)
2025-11-20 14:03:41,962 [INFO] Skipping bill 1909120 - already processed (1201/2596)
2025-11-20 14:03:41,962 [INFO] Skipping bill 2032707 - already processed (1202/2596)
2025-11-20 14:03:41,962 [INFO] Skipping bill 2030838 - already processed (1203/2596)
2025-11-20 14:03:41,963 [INFO] Skipping bill 2033110 - already processed (1204/2596)
2025-11-20 14:03:41,963 [INFO] Skipping bill 1992712 - already processed (1205/2596)
2025-11-20 14:03:41,963 [INFO] Skipping bill
2010112 - already processed (1206/2596) 2025-11-20 14:03:41,963 [INFO] Skipping bill 2035218 - already processed (1207/2596) 2025-11-20 14:03:41,963 [INFO] Skipping bill 1970759 - already processed (1208/2596) 2025-11-20 14:03:41,963 [INFO] Skipping bill 1917262 - already processed (1209/2596) 2025-11-20 14:03:41,963 [INFO] Skipping bill 2015645 - already processed (1210/2596) 2025-11-20 14:03:41,963 [INFO] Skipping bill 1941920 - already processed (1211/2596) 2025-11-20 14:03:41,963 [INFO] Skipping bill 2041695 - already processed (1212/2596) 2025-11-20 14:03:41,963 [INFO] Skipping bill 2038940 - already processed (1213/2596) 2025-11-20 14:03:41,963 [INFO] Skipping bill 2043998 - already processed (1214/2596) 2025-11-20 14:03:41,964 [INFO] Skipping bill 1903496 - already processed (1215/2596) 2025-11-20 14:03:41,964 [INFO] Skipping bill 1942114 - already processed (1216/2596) 2025-11-20 14:03:41,964 [INFO] Skipping bill 1948978 - already processed (1217/2596) 2025-11-20 14:03:41,964 [INFO] Skipping bill 2025948 - already processed (1218/2596) 2025-11-20 14:03:41,964 [INFO] Skipping bill 2030449 - already processed (1219/2596) 2025-11-20 14:03:41,964 [INFO] Skipping bill 2012463 - already processed (1220/2596) 2025-11-20 14:03:41,964 [INFO] Skipping bill 2036382 - already processed (1221/2596) 2025-11-20 14:03:41,964 [INFO] Skipping bill 1901571 - already processed (1222/2596) 2025-11-20 14:03:41,964 [INFO] Skipping bill 1902589 - already processed (1223/2596) 2025-11-20 14:03:41,964 [INFO] Skipping bill 2045075 - already processed (1224/2596) 2025-11-20 14:03:41,964 [INFO] Skipping bill 2042397 - already processed (1225/2596) 2025-11-20 14:03:41,964 [INFO] Skipping bill 2005892 - already processed (1226/2596) 2025-11-20 14:03:41,965 [INFO] Skipping bill 1995988 - already processed (1227/2596) 2025-11-20 14:03:41,965 [INFO] Skipping bill 1941987 - already processed (1228/2596) 2025-11-20 14:03:41,965 [INFO] Processing 1229/2596: Bill ID 2051432 2025-11-20 
14:04:01,222 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2025-11-20 14:04:01,227 [INFO] Skipping bill 2030765 - already processed (1230/2596)
2025-11-20 14:04:01,227 [INFO] Skipping bill 1900450 - already processed (1231/2596)
2025-11-20 14:04:01,227 [INFO] Skipping bill 2032658 - already processed (1232/2596)
2025-11-20 14:04:01,227 [INFO] Skipping bill 1934862 - already processed (1233/2596)
2025-11-20 14:04:01,227 [INFO] Skipping bill 1954914 - already processed (1234/2596)
2025-11-20 14:04:01,227 [INFO] Skipping bill 1908970 - already processed (1235/2596)
2025-11-20 14:04:01,227 [INFO] Skipping bill 2046810 - already processed (1236/2596)
2025-11-20 14:04:01,227 [INFO] Skipping bill 1911503 - already processed (1237/2596)
2025-11-20 14:04:01,227 [INFO] Skipping bill 1917449 - already processed (1238/2596)
2025-11-20 14:04:01,227 [INFO] Skipping bill 2012421 - already processed (1239/2596)
2025-11-20 14:04:01,227 [INFO] Skipping bill 2036409 - already processed (1240/2596)
2025-11-20 14:04:01,228 [INFO] Skipping bill 1930912 - already processed (1241/2596)
2025-11-20 14:04:01,228 [INFO] Skipping bill 2015571 - already processed (1242/2596)
2025-11-20 14:04:01,228 [INFO] Skipping bill 1991849 - already processed (1243/2596)
2025-11-20 14:04:01,228 [INFO] Skipping bill 1909237 - already processed (1244/2596)
2025-11-20 14:04:01,228 [INFO] Skipping bill 1907396 - already processed (1245/2596)
2025-11-20 14:04:01,228 [INFO] Skipping bill 2032681 - already processed (1246/2596)
2025-11-20 14:04:01,228 [INFO] Skipping bill 2031449 - already processed (1247/2596)
2025-11-20 14:04:01,228 [INFO] Skipping bill 2036417 - already processed (1248/2596)
2025-11-20 14:04:01,228 [INFO] Skipping bill 2010242 - already processed (1249/2596)
2025-11-20 14:04:01,228 [INFO] Skipping bill 1902485 - already processed (1250/2596)
2025-11-20 14:04:01,228 [INFO] Skipping bill 2044029 - already processed (1251/2596)
2025-11-20 14:04:01,228 [INFO] Skipping bill 2039479 - already processed (1252/2596)
2025-11-20 14:04:01,228 [INFO] Skipping bill 1993679 - already processed (1253/2596)
2025-11-20 14:04:01,228 [INFO] Skipping bill 1927014 - already processed (1254/2596)
2025-11-20 14:04:01,228 [INFO] Skipping bill 2012390 - already processed (1255/2596)
2025-11-20 14:04:01,228 [INFO] Processing 1256/2596: Bill ID 2051443
2025-11-20 14:04:14,391 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2025-11-20 14:04:14,393 [INFO] Skipping bill 1967476 - already processed (1257/2596)
2025-11-20 14:04:14,393 [INFO] Skipping bill 2039584 - already processed (1258/2596)
2025-11-20 14:04:14,394 [INFO] Skipping bill 1941925 - already processed (1259/2596)
2025-11-20 14:04:14,394 [INFO] Skipping bill 2039602 - already processed (1260/2596)
2025-11-20 14:04:14,394 [INFO] Skipping bill 2021091 - already processed (1261/2596)
2025-11-20 14:04:14,394 [INFO] Skipping bill 1993748 - already processed (1262/2596)
2025-11-20 14:04:14,394 [INFO] Skipping bill 1907408 - already processed (1263/2596)
2025-11-20 14:04:14,394 [INFO] Skipping bill 2043429 - already processed (1264/2596)
2025-11-20 14:04:14,394 [INFO] Skipping bill 2036445 - already processed (1265/2596)
2025-11-20 14:04:14,394 [INFO] Skipping bill 1948575 - already processed (1266/2596)
2025-11-20 14:04:14,394 [INFO] Skipping bill 2020539 - already processed (1267/2596)
2025-11-20 14:04:14,394 [INFO] Skipping bill 1941981 - already processed (1268/2596)
2025-11-20 14:04:14,394 [INFO] Skipping bill 1985057 - already processed (1269/2596)
2025-11-20 14:04:14,394 [INFO] Skipping bill 2012554 - already processed (1270/2596)
2025-11-20 14:04:14,394 [INFO] Skipping bill 1900469 - already processed (1271/2596)
2025-11-20 14:04:14,394 [INFO] Skipping bill 1949091 - already processed (1272/2596)
2025-11-20 14:04:14,394 [INFO] Skipping bill 1903302 - already processed (1273/2596)
2025-11-20 14:04:14,394 [INFO] Skipping bill 2031820 - already processed (1274/2596)
2025-11-20 14:04:14,394 [INFO] Skipping bill 1986509 - already processed (1275/2596)
2025-11-20 14:04:14,395 [INFO] Skipping bill 1992147 - already processed (1276/2596)
2025-11-20 14:04:14,395 [INFO] Skipping bill 1908565 - already processed (1277/2596)
2025-11-20 14:04:14,395 [INFO] Skipping bill 2018195 - already processed (1278/2596)
2025-11-20 14:04:14,395 [INFO] Skipping bill 1948655 - already processed (1279/2596)
2025-11-20 14:04:14,395 [INFO] Skipping bill 1926957 - already processed (1280/2596)
2025-11-20 14:04:14,395 [INFO] Skipping bill 2007650 - already processed (1281/2596)
2025-11-20 14:04:14,395 [INFO] Skipping bill 1938062 - already processed (1282/2596)
2025-11-20 14:04:14,395 [INFO] Skipping bill 1909167 - already processed (1283/2596)
2025-11-20 14:04:14,395 [INFO] Skipping bill 1910683 - already processed (1284/2596)
2025-11-20 14:04:14,395 [INFO] Skipping bill 1918276 - already processed (1285/2596)
2025-11-20 14:04:14,395 [INFO] Skipping bill 1942634 - already processed (1286/2596)
2025-11-20 14:04:14,395 [INFO] Skipping bill 1947885 - already processed (1287/2596)
2025-11-20 14:04:14,395 [INFO] Skipping bill 2034828 - already processed (1288/2596)
2025-11-20 14:04:14,395 [INFO] Skipping bill 2035534 - already processed (1289/2596)
2025-11-20 14:04:14,395 [INFO] Skipping bill 1937370 - already processed (1290/2596)
2025-11-20 14:04:14,395 [INFO] Skipping bill 2036328 - already processed (1291/2596)
2025-11-20 14:04:14,395 [INFO] Skipping bill 1940048 - already processed (1292/2596)
2025-11-20 14:04:14,395 [INFO] Skipping bill 1990212 - already processed (1293/2596)
2025-11-20 14:04:14,395 [INFO] Skipping bill 1995017 - already processed (1294/2596)
2025-11-20 14:04:14,395 [INFO] Skipping bill 1937257 - already processed (1295/2596)
2025-11-20 14:04:14,395 [INFO] Skipping bill 1900853 - already processed (1296/2596)
2025-11-20 14:04:14,395 [INFO] Skipping bill 1947971 - already processed (1297/2596)
2025-11-20 14:04:14,395 [INFO] Skipping bill 1920984 - already processed (1298/2596)
2025-11-20 14:04:14,395 [INFO] Skipping bill 1902725 - already processed (1299/2596)
2025-11-20 14:04:14,395 [INFO] Skipping bill 1964016 - already processed (1300/2596)
2025-11-20 14:04:14,395 [INFO] Processing 1301/2596: Bill ID 1934576
2025-11-20 14:04:14,854 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:04:14,855 [ERROR] Failed to generate report for bill 1934576: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 132147 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 132147 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:04:15,864 [INFO] Skipping bill 1898800 - already processed (1302/2596)
2025-11-20 14:04:15,864 [INFO] Skipping bill 1971511 - already processed (1303/2596)
2025-11-20 14:04:15,865 [INFO] Processing 1304/2596: Bill ID 1935197
2025-11-20 14:04:16,376 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:04:16,377 [ERROR] Failed to generate report for bill 1935197: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 142845 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:04:17,385 [INFO] Processing 1305/2596: Bill ID 1935040
2025-11-20 14:04:18,019 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:04:18,022 [ERROR] Failed to generate report for bill 1935040: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 142844 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:04:19,030 [INFO] Skipping bill 1948521 - already processed (1306/2596)
2025-11-20 14:04:19,030 [INFO] Skipping bill 1977652 - already processed (1307/2596)
2025-11-20 14:04:19,030 [INFO] Processing 1308/2596: Bill ID 1934805
2025-11-20 14:04:19,551 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:04:19,553 [ERROR] Failed to generate report for bill 1934805: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 132143 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:04:20,561 [INFO] Skipping bill 1934970 - already processed (1309/2596)
2025-11-20 14:04:20,561 [INFO] Skipping bill 1934701 - already processed (1310/2596)
2025-11-20 14:04:20,561 [INFO] Skipping bill 1942260 - already processed (1311/2596)
2025-11-20 14:04:20,561 [INFO] Skipping bill 1917391 - already processed (1312/2596)
2025-11-20 14:04:20,561 [INFO] Processing 1313/2596: Bill ID 1935190
2025-11-20 14:04:23,548 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:04:23,555 [ERROR] Failed to generate report for bill 1935190: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1143342 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:04:24,563 [INFO] Processing 1314/2596: Bill ID 1934636
2025-11-20 14:04:26,411 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:04:26,411 [ERROR] Failed to generate report for bill 1934636: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 671567 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:04:27,418 [INFO] Processing 1315/2596: Bill ID 1935223
2025-11-20 14:04:29,486 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:04:29,488 [ERROR] Failed to generate report for bill 1935223: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 671570 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:04:30,497 [INFO] Processing 1316/2596: Bill ID 1934824
2025-11-20 14:04:33,377 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:04:33,380 [ERROR] Failed to generate report for bill 1934824: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1143344 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1143344 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-20 14:04:34,389 [INFO] Processing 1317/2596: Bill ID 2052596 2025-11-20 14:04:39,012 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-20 14:04:39,013 [ERROR] Failed to generate report for bill 2052596: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1446920 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1446920 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-20 14:04:40,022 [INFO] Skipping bill 1879932 - already processed (1318/2596) 2025-11-20 14:04:40,023 [INFO] Skipping bill 1875738 - already processed (1319/2596) 2025-11-20 14:04:40,023 [INFO] Skipping bill 1875815 - already processed (1320/2596) 2025-11-20 14:04:40,024 [INFO] Skipping bill 1701253 - already processed (1321/2596) 2025-11-20 14:04:40,025 [INFO] Skipping bill 1875615 - already processed (1322/2596) 2025-11-20 14:04:40,025 [INFO] Skipping bill 1754315 - already processed (1323/2596) 2025-11-20 14:04:40,025 [INFO] Skipping bill 1751005 - already processed (1324/2596) 2025-11-20 14:04:40,025 [INFO] Skipping bill 1875642 - already processed (1325/2596) 2025-11-20 14:04:40,025 [INFO] Skipping bill 1753811 - already processed (1326/2596) 2025-11-20 14:04:40,025 [INFO] Skipping bill 1752050 - already processed (1327/2596) 2025-11-20 14:04:40,025 [INFO] Skipping bill 1704591 - already processed (1328/2596) 2025-11-20 14:04:40,025 [INFO] Skipping bill 1748551 - already processed (1329/2596) 2025-11-20 14:04:40,025 [INFO] Skipping bill 
1725321 - already processed (1330/2596) 2025-11-20 14:04:40,025 [INFO] Skipping bill 1725195 - already processed (1331/2596) 2025-11-20 14:04:40,025 [INFO] Skipping bill 2014434 - already processed (1332/2596) 2025-11-20 14:04:40,025 [INFO] Skipping bill 2014277 - already processed (1333/2596) 2025-11-20 14:04:40,025 [INFO] Skipping bill 2000124 - already processed (1334/2596) 2025-11-20 14:04:40,026 [INFO] Skipping bill 2022736 - already processed (1335/2596) 2025-11-20 14:04:40,026 [INFO] Skipping bill 2022881 - already processed (1336/2596) 2025-11-20 14:04:40,026 [INFO] Skipping bill 2014322 - already processed (1337/2596) 2025-11-20 14:04:40,026 [INFO] Skipping bill 2014068 - already processed (1338/2596) 2025-11-20 14:04:40,026 [INFO] Skipping bill 2005730 - already processed (1339/2596) 2025-11-20 14:04:40,026 [INFO] Skipping bill 2014594 - already processed (1340/2596) 2025-11-20 14:04:40,026 [INFO] Skipping bill 2013131 - already processed (1341/2596) 2025-11-20 14:04:40,026 [INFO] Skipping bill 2022220 - already processed (1342/2596) 2025-11-20 14:04:40,026 [INFO] Skipping bill 2008986 - already processed (1343/2596) 2025-11-20 14:04:40,026 [INFO] Skipping bill 2013796 - already processed (1344/2596) 2025-11-20 14:04:40,026 [INFO] Skipping bill 2014312 - already processed (1345/2596) 2025-11-20 14:04:40,026 [INFO] Skipping bill 2013903 - already processed (1346/2596) 2025-11-20 14:04:40,026 [INFO] Skipping bill 2013936 - already processed (1347/2596) 2025-11-20 14:04:40,027 [INFO] Skipping bill 2013868 - already processed (1348/2596) 2025-11-20 14:04:40,027 [INFO] Skipping bill 2014024 - already processed (1349/2596) 2025-11-20 14:04:40,027 [INFO] Skipping bill 2014377 - already processed (1350/2596) 2025-11-20 14:04:40,027 [INFO] Skipping bill 2017695 - already processed (1351/2596) 2025-11-20 14:04:40,027 [INFO] Skipping bill 2018632 - already processed (1352/2596) 2025-11-20 14:04:40,027 [INFO] Skipping bill 2022666 - already processed (1353/2596) 
2025-11-20 14:04:40,027 [INFO] Skipping bill 2022828 - already processed (1354/2596)
2025-11-20 14:04:40,027 [INFO] Skipping bill 2015551 - already processed (1355/2596)
2025-11-20 14:04:40,027 [INFO] Skipping bill 2009244 - already processed (1356/2596)
2025-11-20 14:04:40,027 [INFO] Skipping bill 1969116 - already processed (1357/2596)
2025-11-20 14:04:40,027 [INFO] Skipping bill 2009761 - already processed (1358/2596)
2025-11-20 14:04:40,027 [INFO] Processing 1359/2596: Bill ID 2012916
2025-11-20 14:04:40,545 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:04:40,546 [ERROR] Failed to generate report for bill 2012916: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 131894 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:04:41,555 [INFO] Skipping bill 1996111 - already processed (1360/2596)
2025-11-20 14:04:41,555 [INFO] Skipping bill 1656324 - already processed (1361/2596)
2025-11-20 14:04:41,555 [INFO] Skipping bill 1640560 - already processed (1362/2596)
2025-11-20 14:04:41,555 [INFO] Skipping bill 1644790 - already processed (1363/2596)
2025-11-20 14:04:41,555 [INFO] Skipping bill 1908973 - already processed (1364/2596)
2025-11-20 14:04:41,555 [INFO] Skipping bill 1930471 - already processed (1365/2596)
2025-11-20 14:04:41,555 [INFO] Skipping bill 1916131 - already processed (1366/2596)
2025-11-20 14:04:41,556 [INFO] Skipping bill 1916897 - already processed (1367/2596)
2025-11-20 14:04:41,556 [INFO] Skipping bill 1930219 - already processed (1368/2596)
2025-11-20 14:04:41,556 [INFO] Skipping bill 1916725 - already processed (1369/2596)
2025-11-20 14:04:41,556 [INFO] Skipping bill 1916697 - already processed (1370/2596)
2025-11-20 14:04:41,556 [INFO] Skipping bill 1921549 - already processed (1371/2596)
2025-11-20 14:04:41,556 [INFO] Skipping bill 1916032 - already processed (1372/2596)
2025-11-20 14:04:41,556 [INFO] Skipping bill 1915939 - already processed (1373/2596)
2025-11-20 14:04:41,557 [INFO] Skipping bill 1899315 - already processed (1374/2596)
2025-11-20 14:04:41,557 [INFO] Skipping bill 1930747 - already processed (1375/2596)
2025-11-20 14:04:41,557 [INFO] Skipping bill 1898936 - already processed (1376/2596)
2025-11-20 14:04:41,557 [INFO] Skipping bill 1828241 - already processed (1377/2596)
2025-11-20 14:04:41,557 [INFO] Skipping bill 1784887 - already processed (1378/2596)
2025-11-20 14:04:41,557 [INFO] Processing 1379/2596: Bill ID 1710984
2025-11-20 14:04:46,791 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:04:46,793 [ERROR] Failed to generate report for bill 1710984: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 2157293 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:04:47,805 [INFO] Processing 1380/2596: Bill ID 1710996
2025-11-20 14:04:50,943 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:04:50,945 [ERROR] Failed to generate report for bill 1710996: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1053567 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:04:51,000 [INFO] Saved 2572 reports to data/bill_reports.json
2025-11-20 14:04:51,001 [INFO] Progress: 1380/2596 - Processed: 7, Skipped: 1307, Errors: 66
2025-11-20 14:04:52,006 [INFO] Processing 1381/2596: Bill ID 1659671
2025-11-20 14:04:54,676 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:04:54,678 [ERROR] Failed to generate report for bill 1659671: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1053812 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:04:55,689 [INFO] Skipping bill 2046561 - already processed (1382/2596)
2025-11-20 14:04:55,689 [INFO] Skipping bill 2018937 - already processed (1383/2596)
2025-11-20 14:04:55,691 [INFO] Skipping bill 2046538 - already processed (1384/2596)
2025-11-20 14:04:55,691 [INFO] Skipping bill 2038933 - already processed (1385/2596)
2025-11-20 14:04:55,691 [INFO] Skipping bill 2019064 - already processed (1386/2596)
2025-11-20 14:04:55,691 [INFO] Processing 1387/2596: Bill ID 2051853
2025-11-20 14:05:37,068 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2025-11-20 14:05:37,071 [INFO] Skipping bill 1973495 - already processed (1388/2596)
2025-11-20 14:05:37,072 [INFO] Skipping bill 2044900 - already processed (1389/2596)
2025-11-20 14:05:37,072 [INFO] Skipping bill 2036911 - already processed (1390/2596)
2025-11-20 14:05:37,072 [INFO] Skipping bill 1956347 - already processed (1391/2596)
2025-11-20 14:05:37,072 [INFO] Skipping bill 2015680 - already processed (1392/2596)
2025-11-20 14:05:37,072 [INFO] Skipping
bill 2035837 - already processed (1393/2596) 2025-11-20 14:05:37,072 [INFO] Processing 1394/2596: Bill ID 2052361 2025-11-20 14:06:09,258 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK" 2025-11-20 14:06:09,265 [INFO] Processing 1395/2596: Bill ID 2053186 2025-11-20 14:06:33,490 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK" 2025-11-20 14:06:33,494 [INFO] Processing 1396/2596: Bill ID 1956501 2025-11-20 14:06:46,779 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK" 2025-11-20 14:06:46,785 [INFO] Processing 1397/2596: Bill ID 1966320 2025-11-20 14:06:52,641 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-20 14:06:52,642 [ERROR] Failed to generate report for bill 1966320: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1949605 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke 
self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... **kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File 
"/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1949605 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-20 14:06:53,652 [INFO] Processing 1398/2596: Bill ID 2044413 2025-11-20 14:06:54,483 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-20 14:06:54,484 [ERROR] Failed to generate report for bill 2044413: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 281184 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 281184 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-20 14:06:55,493 [INFO] Processing 1399/2596: Bill ID 2031116 2025-11-20 14:06:56,476 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-20 14:06:56,477 [ERROR] Failed to generate report for bill 2031116: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 344621 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 344621 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-20 14:06:57,481 [INFO] Skipping bill 1820171 - already processed (1400/2596) 2025-11-20 14:06:57,481 [INFO] Skipping bill 1820684 - already processed (1401/2596) 2025-11-20 14:06:57,481 [INFO] Skipping bill 1820075 - already processed (1402/2596) 2025-11-20 14:06:57,481 [INFO] Skipping bill 1820478 - already processed (1403/2596) 2025-11-20 14:06:57,481 [INFO] Skipping bill 1820697 - already processed (1404/2596) 2025-11-20 14:06:57,481 [INFO] Skipping bill 1821348 - already processed (1405/2596) 2025-11-20 14:06:57,481 [INFO] Skipping bill 1819421 - already processed (1406/2596) 2025-11-20 14:06:57,481 [INFO] Skipping bill 1820795 - already processed (1407/2596) 2025-11-20 14:06:57,481 [INFO] Skipping bill 1814318 - already processed (1408/2596) 2025-11-20 14:06:57,481 [INFO] Skipping bill 1814441 - already processed (1409/2596) 2025-11-20 14:06:57,481 [INFO] Skipping bill 1791289 - already processed (1410/2596) 2025-11-20 14:06:57,481 [INFO] Skipping bill 1789468 - already processed (1411/2596) 2025-11-20 14:06:57,481 [INFO] Skipping bill 
1924199 - already processed (1412/2596) 2025-11-20 14:06:57,481 [INFO] Skipping bill 1920208 - already processed (1413/2596) 2025-11-20 14:06:57,481 [INFO] Skipping bill 1920320 - already processed (1414/2596) 2025-11-20 14:06:57,481 [INFO] Skipping bill 1923586 - already processed (1415/2596) 2025-11-20 14:06:57,481 [INFO] Skipping bill 1918327 - already processed (1416/2596) 2025-11-20 14:06:57,481 [INFO] Skipping bill 1922702 - already processed (1417/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1923122 - already processed (1418/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1924269 - already processed (1419/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1925220 - already processed (1420/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1924640 - already processed (1421/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1924912 - already processed (1422/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1900252 - already processed (1423/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 2018241 - already processed (1424/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1920876 - already processed (1425/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1920720 - already processed (1426/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1925546 - already processed (1427/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1903378 - already processed (1428/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1921990 - already processed (1429/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1922805 - already processed (1430/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1922842 - already processed (1431/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1836006 - already processed (1432/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1836109 - already processed (1433/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1843504 - already processed (1434/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1973003 - already processed (1435/2596) 
2025-11-20 14:06:57,482 [INFO] Skipping bill 2009609 - already processed (1436/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1986214 - already processed (1437/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1912749 - already processed (1438/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1914095 - already processed (1439/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1914598 - already processed (1440/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1913104 - already processed (1441/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1914569 - already processed (1442/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1930373 - already processed (1443/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1982090 - already processed (1444/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1914274 - already processed (1445/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1982120 - already processed (1446/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1773806 - already processed (1447/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1880673 - already processed (1448/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1724997 - already processed (1449/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1775230 - already processed (1450/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1889846 - already processed (1451/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1773451 - already processed (1452/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1759469 - already processed (1453/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1777407 - already processed (1454/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1880554 - already processed (1455/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1854268 - already processed (1456/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1771135 - already processed (1457/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1830478 - already processed (1458/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 
1780085 - already processed (1459/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1858003 - already processed (1460/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1880735 - already processed (1461/2596) 2025-11-20 14:06:57,482 [INFO] Skipping bill 1882950 - already processed (1462/2596) 2025-11-20 14:06:57,483 [INFO] Skipping bill 1878925 - already processed (1463/2596) 2025-11-20 14:06:57,483 [INFO] Skipping bill 1878252 - already processed (1464/2596) 2025-11-20 14:06:57,483 [INFO] Skipping bill 1884263 - already processed (1465/2596) 2025-11-20 14:06:57,483 [INFO] Skipping bill 1873862 - already processed (1466/2596) 2025-11-20 14:06:57,483 [INFO] Skipping bill 1882265 - already processed (1467/2596) 2025-11-20 14:06:57,483 [INFO] Skipping bill 1771247 - already processed (1468/2596) 2025-11-20 14:06:57,483 [INFO] Skipping bill 1836612 - already processed (1469/2596) 2025-11-20 14:06:57,483 [INFO] Skipping bill 1820748 - already processed (1470/2596) 2025-11-20 14:06:57,483 [INFO] Skipping bill 1886418 - already processed (1471/2596) 2025-11-20 14:06:57,483 [INFO] Skipping bill 1769931 - already processed (1472/2596) 2025-11-20 14:06:57,483 [INFO] Skipping bill 1740020 - already processed (1473/2596) 2025-11-20 14:06:57,483 [INFO] Skipping bill 1878961 - already processed (1474/2596) 2025-11-20 14:06:57,483 [INFO] Skipping bill 1768592 - already processed (1475/2596) 2025-11-20 14:06:57,483 [INFO] Skipping bill 2045757 - already processed (1476/2596) 2025-11-20 14:06:57,483 [INFO] Skipping bill 2030536 - already processed (1477/2596) 2025-11-20 14:06:57,483 [INFO] Skipping bill 2047301 - already processed (1478/2596) 2025-11-20 14:06:57,483 [INFO] Skipping bill 2039357 - already processed (1479/2596) 2025-11-20 14:06:57,483 [INFO] Skipping bill 2034685 - already processed (1480/2596) 2025-11-20 14:06:57,483 [INFO] Skipping bill 2037642 - already processed (1481/2596) 2025-11-20 14:06:57,483 [INFO] Skipping bill 2022168 - already processed (1482/2596) 
2025-11-20 14:06:57,483 [INFO] Processing 1483/2596: Bill ID 2052644 2025-11-20 14:07:13,222 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK" 2025-11-20 14:07:13,224 [INFO] Processing 1484/2596: Bill ID 2051282 2025-11-20 14:07:27,096 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK" 2025-11-20 14:07:27,097 [INFO] Skipping bill 1937863 - already processed (1485/2596) 2025-11-20 14:07:27,097 [INFO] Skipping bill 2043639 - already processed (1486/2596) 2025-11-20 14:07:27,098 [INFO] Skipping bill 2012593 - already processed (1487/2596) 2025-11-20 14:07:27,098 [INFO] Skipping bill 1991206 - already processed (1488/2596) 2025-11-20 14:07:27,098 [INFO] Skipping bill 1947924 - already processed (1489/2596) 2025-11-20 14:07:27,098 [INFO] Skipping bill 2012408 - already processed (1490/2596) 2025-11-20 14:07:27,098 [INFO] Skipping bill 2021116 - already processed (1491/2596) 2025-11-20 14:07:27,098 [INFO] Skipping bill 1973751 - already processed (1492/2596) 2025-11-20 14:07:27,098 [INFO] Skipping bill 2045246 - already processed (1493/2596) 2025-11-20 14:07:27,098 [INFO] Skipping bill 1910852 - already processed (1494/2596) 2025-11-20 14:07:27,098 [INFO] Skipping bill 1956391 - already processed (1495/2596) 2025-11-20 14:07:27,098 [INFO] Skipping bill 2023404 - already processed (1496/2596) 2025-11-20 14:07:27,098 [INFO] Skipping bill 2035307 - already processed (1497/2596) 2025-11-20 14:07:27,098 [INFO] Skipping bill 1944456 - already processed (1498/2596) 2025-11-20 14:07:27,098 [INFO] Skipping bill 2041064 - already processed (1499/2596) 2025-11-20 14:07:27,098 [INFO] Skipping bill 2039278 - already processed (1500/2596) 2025-11-20 14:07:27,098 [INFO] Skipping bill 2041823 - already processed (1501/2596) 2025-11-20 14:07:27,098 [INFO] Processing 1502/2596: Bill ID 1946034 2025-11-20 14:07:41,364 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK" 2025-11-20 
14:07:41,365 [INFO] Skipping bill 2038442 - already processed (1503/2596) 2025-11-20 14:07:41,365 [INFO] Skipping bill 1905925 - already processed (1504/2596) 2025-11-20 14:07:41,365 [INFO] Processing 1505/2596: Bill ID 2041076 2025-11-20 14:07:41,892 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-20 14:07:41,893 [ERROR] Failed to generate report for bill 2041076: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 136745 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... 
**kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... **kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return 
    self._post(
    ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 136745 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:07:42,907 [INFO] Processing 1506/2596: Bill ID 2037948
2025-11-20 14:07:43,430 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:07:43,431 [ERROR] Failed to generate report for bill 2037948: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 136856 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
           ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 136856 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:07:44,439 [INFO] Skipping bill 1757100 - already processed (1507/2596)
2025-11-20 14:07:44,440 [INFO] Skipping bill 1766918 - already processed (1508/2596)
2025-11-20 14:07:44,440 [INFO] Skipping bill 1691606 - already processed (1509/2596)
2025-11-20 14:07:44,440 [INFO] Skipping bill 1757087 - already processed (1510/2596)
2025-11-20 14:07:44,440 [INFO] Skipping bill 1691984 - already processed (1511/2596)
2025-11-20 14:07:44,440 [INFO] Skipping bill 1724146 - already processed (1512/2596)
2025-11-20 14:07:44,440 [INFO] Skipping bill 1811367 - already processed (1513/2596)
2025-11-20 14:07:44,440 [INFO] Skipping bill 1864559 - already processed (1514/2596)
2025-11-20 14:07:44,440 [INFO] Skipping bill 1833383 - already processed (1515/2596)
2025-11-20 14:07:44,440 [INFO] Skipping bill 1839979 - already processed (1516/2596)
2025-11-20 14:07:44,440 [INFO] Skipping bill 1863636 - already processed (1517/2596)
2025-11-20 14:07:44,441 [INFO] Skipping bill 1866932 - already processed (1518/2596)
2025-11-20 14:07:44,441 [INFO] Skipping bill 1829566 - already processed (1519/2596)
2025-11-20 14:07:44,441 [INFO] Skipping bill 1858179 - already processed (1520/2596)
2025-11-20 14:07:44,441 [INFO] Skipping bill 1857154 - already processed (1521/2596)
2025-11-20 14:07:44,441 [INFO] Skipping bill 1866872 - already processed (1522/2596)
2025-11-20 14:07:44,441 [INFO] Skipping bill 1844272 - already processed (1523/2596)
2025-11-20 14:07:44,441 [INFO] Skipping bill 1875576 - already processed (1524/2596)
2025-11-20 14:07:44,441 [INFO] Skipping bill 1875933 - already processed (1525/2596)
2025-11-20 14:07:44,441 [INFO] Skipping bill 1844730 - already processed (1526/2596)
2025-11-20 14:07:44,441 [INFO] Skipping bill 1858971 - already processed (1527/2596)
2025-11-20 14:07:44,441 [INFO] Skipping bill 1870027 - already processed (1528/2596)
2025-11-20 14:07:44,441 [INFO] Skipping bill 1994761 - already processed (1529/2596)
2025-11-20 14:07:44,441 [INFO] Skipping bill 1935080 - already processed (1530/2596)
2025-11-20 14:07:44,441 [INFO] Skipping bill 1945535 - already processed (1531/2596)
2025-11-20 14:07:44,441 [INFO] Skipping bill 1979504 - already processed (1532/2596)
2025-11-20 14:07:44,442 [INFO] Skipping bill 1937835 - already processed (1533/2596)
2025-11-20 14:07:44,442 [INFO] Skipping bill 1918971 - already processed (1534/2596)
2025-11-20 14:07:44,442 [INFO] Skipping bill 1986390 - already processed (1535/2596)
2025-11-20 14:07:44,442 [INFO] Skipping bill 1945988 - already processed (1536/2596)
2025-11-20 14:07:44,442 [INFO] Skipping bill 1940828 - already processed (1537/2596)
2025-11-20 14:07:44,442 [INFO] Skipping bill 1986602 - already processed (1538/2596)
2025-11-20 14:07:44,442 [INFO] Skipping bill 1988979 - already processed (1539/2596)
2025-11-20 14:07:44,442 [INFO] Skipping bill 2008057 - already processed (1540/2596)
2025-11-20 14:07:44,442 [INFO] Skipping bill 1986556 - already processed (1541/2596)
2025-11-20 14:07:44,442 [INFO] Skipping bill 1986569 - already processed (1542/2596)
2025-11-20 14:07:44,442 [INFO] Skipping bill 1988788 - already processed (1543/2596)
2025-11-20 14:07:44,442 [INFO] Skipping bill 2028551 - already processed (1544/2596)
2025-11-20 14:07:44,442 [INFO] Skipping bill 1937524 - already processed (1545/2596)
2025-11-20 14:07:44,442 [INFO] Skipping bill 1966994 - already processed (1546/2596)
2025-11-20 14:07:44,442 [INFO] Skipping bill 2030023 - already processed (1547/2596)
2025-11-20 14:07:44,442 [INFO] Skipping bill 1988713 - already processed (1548/2596)
2025-11-20 14:07:44,443 [INFO] Skipping bill 1988914 - already processed (1549/2596)
2025-11-20 14:07:44,443 [INFO] Skipping bill 2030055 - already processed (1550/2596)
2025-11-20 14:07:44,443 [INFO] Skipping bill 1666116 - already processed (1551/2596)
2025-11-20 14:07:44,443 [INFO] Skipping bill 1792231 - already processed (1552/2596)
2025-11-20 14:07:44,443 [INFO] Skipping bill 1802681 - already processed (1553/2596)
2025-11-20 14:07:44,443 [INFO] Skipping bill 1921522 - already processed (1554/2596)
2025-11-20 14:07:44,443 [INFO] Skipping bill 1999928 - already processed (1555/2596)
2025-11-20 14:07:44,443 [INFO] Skipping bill 2022730 - already processed (1556/2596)
2025-11-20 14:07:44,447 [INFO] Skipping bill 2024009 - already processed (1557/2596)
2025-11-20 14:07:44,447 [INFO] Skipping bill 1895318 - already processed (1558/2596)
2025-11-20 14:07:44,447 [INFO] Skipping bill 1944028 - already processed (1559/2596)
2025-11-20 14:07:44,447 [INFO] Skipping bill 1954350 - already processed (1560/2596)
2025-11-20 14:07:44,447 [INFO] Skipping bill 1954733 - already processed (1561/2596)
2025-11-20 14:07:44,447 [INFO] Skipping bill 2029172 - already processed (1562/2596)
2025-11-20 14:07:44,447 [INFO] Skipping bill 1944096 - already processed (1563/2596)
2025-11-20 14:07:44,447 [INFO] Skipping bill 1895182 - already processed (1564/2596)
2025-11-20 14:07:44,447 [INFO] Skipping bill 1919972 - already processed (1565/2596)
2025-11-20 14:07:44,447 [INFO] Skipping bill 1895637 - already processed (1566/2596)
2025-11-20 14:07:44,447 [INFO] Skipping bill 1819620 - already processed (1567/2596)
2025-11-20 14:07:44,448 [INFO] Skipping bill 1811138 - already processed (1568/2596)
2025-11-20 14:07:44,448 [INFO] Skipping bill 1948251 - already processed (1569/2596)
2025-11-20 14:07:44,448 [INFO] Skipping bill 1901594 - already processed (1570/2596)
2025-11-20 14:07:44,448 [INFO] Skipping bill 1833554 - already processed (1571/2596)
2025-11-20 14:07:44,448 [INFO] Skipping bill 1833050 - already processed (1572/2596)
2025-11-20 14:07:44,448 [INFO] Skipping bill 1830912 - already processed (1573/2596)
2025-11-20 14:07:44,448 [INFO] Skipping bill 1834207 - already processed (1574/2596)
2025-11-20 14:07:44,448 [INFO] Skipping bill 1795187 - already processed (1575/2596)
2025-11-20 14:07:44,448 [INFO] Skipping bill 1828458 - already processed (1576/2596)
2025-11-20 14:07:44,448 [INFO] Skipping bill 1808304 - already processed (1577/2596)
2025-11-20 14:07:44,448 [INFO] Skipping bill 1834240 - already processed (1578/2596)
2025-11-20 14:07:44,448 [INFO] Skipping bill 1831671 - already processed (1579/2596)
2025-11-20 14:07:44,448 [INFO] Skipping bill 1832378 - already processed (1580/2596)
2025-11-20 14:07:44,448 [INFO] Skipping bill 1828742 - already processed (1581/2596)
2025-11-20 14:07:44,448 [INFO] Skipping bill 1833429 - already processed (1582/2596)
2025-11-20 14:07:44,448 [INFO] Skipping bill 1828784 - already processed (1583/2596)
2025-11-20 14:07:44,449 [INFO] Skipping bill 1825620 - already processed (1584/2596)
2025-11-20 14:07:44,449 [INFO] Skipping bill 1799785 - already processed (1585/2596)
2025-11-20 14:07:44,449 [INFO] Skipping bill 1832466 - already processed (1586/2596)
2025-11-20 14:07:44,449 [INFO] Skipping bill 1831669 - already processed (1587/2596)
2025-11-20 14:07:44,449 [INFO] Skipping bill 1832147 - already processed (1588/2596)
2025-11-20 14:07:44,449 [INFO] Skipping bill 1831971 - already processed (1589/2596)
2025-11-20 14:07:44,449 [INFO] Skipping bill 1832437 - already processed (1590/2596)
2025-11-20 14:07:44,450 [INFO] Skipping bill 1828244 - already processed (1591/2596)
2025-11-20 14:07:44,450 [INFO] Skipping bill 1833731 - already processed (1592/2596)
2025-11-20 14:07:44,450 [INFO] Skipping bill 1833264 - already processed (1593/2596)
2025-11-20 14:07:44,450 [INFO] Skipping bill 1833393 - already processed (1594/2596)
2025-11-20 14:07:44,450 [INFO] Skipping bill 1825869 - already processed (1595/2596)
2025-11-20 14:07:44,450 [INFO] Skipping bill 1825916 - already processed (1596/2596)
2025-11-20 14:07:44,450 [INFO] Skipping bill 1873399 - already processed (1597/2596)
2025-11-20 14:07:44,450 [INFO] Skipping bill 1826595 - already processed (1598/2596)
2025-11-20 14:07:44,450 [INFO] Skipping bill 1832185 - already processed (1599/2596)
2025-11-20 14:07:44,450 [INFO] Skipping bill 1832434 - already processed (1600/2596)
2025-11-20 14:07:44,450 [INFO] Skipping bill 1831535 - already processed (1601/2596)
2025-11-20 14:07:44,450 [INFO] Skipping bill 1834179 - already processed (1602/2596)
2025-11-20 14:07:44,450 [INFO] Skipping bill 1834106 - already processed (1603/2596)
2025-11-20 14:07:44,450 [INFO] Skipping bill 1946381 - already processed (1604/2596)
2025-11-20 14:07:44,450 [INFO] Skipping bill 1953992 - already processed (1605/2596)
2025-11-20 14:07:44,450 [INFO] Skipping bill 1948149 - already processed (1606/2596)
2025-11-20 14:07:44,450 [INFO] Skipping bill 1959470 - already processed (1607/2596)
2025-11-20 14:07:44,450 [INFO] Skipping bill 1946783 - already processed (1608/2596)
2025-11-20 14:07:44,450 [INFO] Skipping bill 1955110 - already processed (1609/2596)
2025-11-20 14:07:44,450 [INFO] Skipping bill 1959302 - already processed (1610/2596)
2025-11-20 14:07:44,450 [INFO] Skipping bill 1959458 - already processed (1611/2596)
2025-11-20 14:07:44,450 [INFO] Skipping bill 1960722 - already processed (1612/2596)
2025-11-20 14:07:44,450 [INFO] Skipping bill 1951003 - already processed (1613/2596)
2025-11-20 14:07:44,450 [INFO] Skipping bill 1954702 - already processed (1614/2596)
2025-11-20 14:07:44,451 [INFO] Skipping bill 1954311 - already processed (1615/2596)
2025-11-20 14:07:44,451 [INFO] Skipping bill 1959312 - already processed (1616/2596)
2025-11-20 14:07:44,451 [INFO] Skipping bill 1959377 - already processed (1617/2596)
2025-11-20 14:07:44,451 [INFO] Skipping bill 1954015 - already processed (1618/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1954357 - already processed (1619/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1944274 - already processed (1620/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1944487 - already processed (1621/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1959723 - already processed (1622/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1960832 - already processed (1623/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1971015 - already processed (1624/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1971366 - already processed (1625/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1733375 - already processed (1626/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1700527 - already processed (1627/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1719413 - already processed (1628/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1694457 - already processed (1629/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1744060 - already processed (1630/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1727826 - already processed (1631/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1743424 - already processed (1632/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1732248 - already processed (1633/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1731629 - already processed (1634/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1769317 - already processed (1635/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1747471 - already processed (1636/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1747557 - already processed (1637/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1710763 - already processed (1638/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1782999 - already processed (1639/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1781207 - already processed (1640/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1726065 - already processed (1641/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1898826 - already processed (1642/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1992725 - already processed (1643/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1988473 - already processed (1644/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1970030 - already processed (1645/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 2007109 - already processed (1646/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1891805 - already processed (1647/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1949957 - already processed (1648/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1990181 - already processed (1649/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1991711 - already processed (1650/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1897779 - already processed (1651/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 2006851 - already processed (1652/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1975361 - already processed (1653/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1987235 - already processed (1654/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 2007736 - already processed (1655/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 2000200 - already processed (1656/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1923991 - already processed (1657/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1892858 - already processed (1658/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 2000248 - already processed (1659/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1971072 - already processed (1660/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 2008077 - already processed (1661/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1907668 - already processed (1662/2596)
2025-11-20 14:07:44,452 [INFO] Skipping bill 1962916 - already processed (1663/2596)
2025-11-20 14:07:44,453 [INFO] Skipping bill 2005286 - already processed (1664/2596)
2025-11-20 14:07:44,453 [INFO] Skipping bill 2005181 - already processed (1665/2596)
2025-11-20 14:07:44,453 [INFO] Skipping bill 1891063 - already processed (1666/2596)
2025-11-20 14:07:44,453 [INFO] Skipping bill 1900186 - already processed (1667/2596)
2025-11-20 14:07:44,453 [INFO] Skipping bill 1994657 - already processed (1668/2596)
2025-11-20 14:07:44,453 [INFO] Skipping bill 2008307 - already processed (1669/2596)
2025-11-20 14:07:44,453 [INFO] Skipping bill 1991260 - already processed (1670/2596)
2025-11-20 14:07:44,453 [INFO] Skipping bill 2006384 - already processed (1671/2596)
2025-11-20 14:07:44,453 [INFO] Skipping bill 2002051 - already processed (1672/2596)
2025-11-20 14:07:44,453 [INFO] Skipping bill 1973236 - already processed (1673/2596)
2025-11-20 14:07:44,453 [INFO] Skipping bill 2007316 - already processed (1674/2596)
2025-11-20 14:07:44,453 [INFO] Skipping bill 1890894 - already processed (1675/2596)
2025-11-20 14:07:44,453 [INFO] Skipping bill 2000178 - already processed (1676/2596)
2025-11-20 14:07:44,453 [INFO] Skipping bill 1982970 - already processed (1677/2596)
2025-11-20 14:07:44,453 [INFO] Skipping bill 2006497 - already processed (1678/2596)
2025-11-20 14:07:44,453 [INFO] Skipping bill 1890775 - already processed (1679/2596)
2025-11-20 14:07:44,453 [INFO] Skipping bill 1892224 - already processed (1680/2596)
2025-11-20 14:07:44,453 [INFO] Skipping bill 1954141 - already processed (1681/2596)
2025-11-20 14:07:44,453 [INFO] Skipping bill 2006579 - already processed (1682/2596)
2025-11-20 14:07:44,453 [INFO] Skipping bill 2006128 - already processed (1683/2596)
2025-11-20 14:07:44,453 [INFO] Skipping bill 2024097 - already processed (1684/2596)
2025-11-20 14:07:44,453 [INFO] Skipping bill 2034878 - already processed (1685/2596)
2025-11-20 14:07:44,453 [INFO] Skipping bill 1891396 - already processed (1686/2596)
2025-11-20 14:07:44,453 [INFO] Skipping bill 2040103 - already processed (1687/2596)
2025-11-20 14:07:44,453 [INFO] Skipping bill 2041986 - already processed (1688/2596)
2025-11-20 14:07:44,453 [INFO] Skipping bill 1987712 - already processed (1689/2596)
2025-11-20 14:07:44,453 [INFO] Skipping bill 2005998 - already processed (1690/2596)
2025-11-20 14:07:44,453 [INFO] Skipping bill 2008318 - already processed (1691/2596)
2025-11-20 14:07:44,453 [INFO] Skipping bill 1892843 - already processed (1692/2596)
2025-11-20 14:07:44,453 [INFO] Skipping bill 1946392 - already processed (1693/2596)
2025-11-20 14:07:44,453 [INFO] Skipping bill 1971169 - already processed (1694/2596)
2025-11-20 14:07:44,453 [INFO] Skipping bill 1890786 - already processed (1695/2596)
2025-11-20 14:07:44,453 [INFO] Skipping bill 1891256 - already processed (1696/2596)
2025-11-20 14:07:44,455 [INFO] Skipping bill 1942882 - already processed (1697/2596)
2025-11-20 14:07:44,455 [INFO] Skipping bill 2031981 - already processed (1698/2596)
2025-11-20 14:07:44,455 [INFO] Skipping bill 2033602 - already processed (1699/2596)
2025-11-20 14:07:44,455 [INFO] Skipping bill 2034279 - already processed (1700/2596)
2025-11-20 14:07:44,455 [INFO] Skipping bill 1974704 - already processed (1701/2596)
2025-11-20 14:07:44,455 [INFO] Skipping bill 1950849 - already processed (1702/2596)
2025-11-20 14:07:44,455 [INFO] Skipping bill 1975022 - already processed (1703/2596)
2025-11-20 14:07:44,455 [INFO] Skipping bill 1981850 - already processed (1704/2596)
2025-11-20 14:07:44,455 [INFO] Skipping bill 1890492 - already processed (1705/2596)
2025-11-20 14:07:44,455 [INFO] Skipping bill 2020803 - already processed (1706/2596)
2025-11-20 14:07:44,455 [INFO] Skipping bill 2005343 - already processed (1707/2596)
2025-11-20 14:07:44,455 [INFO] Skipping bill 1890466 - already processed (1708/2596)
2025-11-20 14:07:44,455 [INFO] Skipping bill 1975612 - already processed (1709/2596)
2025-11-20 14:07:44,455 [INFO] Skipping bill 1994176 - already processed (1710/2596)
2025-11-20 14:07:44,455 [INFO] Skipping bill 1990550 - already processed (1711/2596)
2025-11-20 14:07:44,455 [INFO] Skipping bill 1891411 - already processed (1712/2596)
2025-11-20 14:07:44,455 [INFO] Skipping bill 1983542 - already processed (1713/2596)
2025-11-20 14:07:44,455 [INFO] Skipping bill 1999872 - already processed (1714/2596)
2025-11-20 14:07:44,455 [INFO] Skipping bill 2007449 - already processed (1715/2596)
2025-11-20 14:07:44,455 [INFO] Skipping bill 2039972 - already processed (1716/2596)
2025-11-20 14:07:44,455 [INFO] Skipping bill 1892428 - already processed (1717/2596)
2025-11-20 14:07:44,455 [INFO] Skipping bill 1891501 - already processed (1718/2596)
2025-11-20 14:07:44,455 [INFO] Skipping bill 2007840 - already processed (1719/2596)
2025-11-20 14:07:44,455 [INFO] Skipping bill 1976041 - already processed (1720/2596)
2025-11-20 14:07:44,455 [INFO] Skipping bill 1992763 - already processed (1721/2596)
2025-11-20 14:07:44,455 [INFO] Skipping bill 1993770 - already processed (1722/2596)
2025-11-20 14:07:44,455 [INFO] Skipping bill 2007872 - already processed (1723/2596)
2025-11-20 14:07:44,455 [INFO] Skipping bill 1936766 - already processed (1724/2596)
2025-11-20 14:07:44,455 [INFO] Skipping bill 1676049 - already processed (1725/2596)
2025-11-20 14:07:44,455 [INFO] Processing 1726/2596: Bill ID 1704512
2025-11-20 14:07:44,949 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:07:44,972 [ERROR] Failed to generate report for bill 1704512: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 178116 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
           ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 178116 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:07:45,983 [INFO] Skipping bill 1828750 - already processed (1727/2596)
2025-11-20 14:07:45,983 [INFO] Skipping bill 1823594 - already processed (1728/2596)
2025-11-20 14:07:45,983 [INFO] Skipping bill 1820331 - already processed (1729/2596)
2025-11-20 14:07:45,983 [INFO] Skipping bill 1810219 - already processed (1730/2596)
2025-11-20 14:07:45,983 [INFO] Skipping bill 1813477 - already processed (1731/2596)
2025-11-20 14:07:45,983 [INFO] Skipping bill 1858814 - already processed (1732/2596)
2025-11-20 14:07:45,983 [INFO] Skipping bill 1882805 - already processed (1733/2596)
2025-11-20 14:07:45,984 [INFO] Skipping bill 1811586 - already processed (1734/2596)
2025-11-20 14:07:45,984 [INFO] Skipping bill 1794392 - already processed (1735/2596)
2025-11-20 14:07:45,984 [INFO] Processing 1736/2596: Bill ID 1844899
2025-11-20 14:07:46,503 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:07:46,504 [ERROR] Failed to generate report for bill 1844899: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 150202 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
           ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 150202 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:07:47,512 [INFO] Skipping bill 1954171 - already processed (1737/2596)
2025-11-20 14:07:47,512 [INFO] Skipping bill 1911041 - already processed (1738/2596)
2025-11-20 14:07:47,513 [INFO] Skipping bill 1963098 - already processed (1739/2596)
2025-11-20 14:07:47,513 [INFO] Skipping bill 1943827 - already processed (1740/2596)
2025-11-20 14:07:47,513 [INFO] Skipping bill 1968353 - already processed (1741/2596)
2025-11-20 14:07:47,513 [INFO] Skipping bill 1981617 - already processed (1742/2596)
2025-11-20 14:07:47,513 [INFO] Skipping bill 1995499 - already processed (1743/2596)
2025-11-20 14:07:47,513 [INFO] Skipping bill 1954569 - already processed (1744/2596)
2025-11-20 14:07:47,513 [INFO] Skipping bill 1950395 - already processed (1745/2596)
2025-11-20 14:07:47,513 [INFO] Skipping bill 1989323 - already processed (1746/2596)
2025-11-20 14:07:47,513 [INFO] Skipping bill 1904576 - already processed (1747/2596)
2025-11-20 14:07:47,513 [INFO] Skipping bill 1968434 - already processed (1748/2596)
2025-11-20 14:07:47,513 [INFO] Processing 1749/2596: Bill ID 2046115
2025-11-20 14:07:48,549 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:07:48,551 [ERROR] Failed to generate report for bill 2046115: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 321718 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
           ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 321718 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:07:49,560 [INFO] Skipping bill 1912099 - already processed (1750/2596)
2025-11-20 14:07:49,560 [INFO] Skipping bill 1946923 - already processed (1751/2596)
2025-11-20 14:07:49,560 [INFO] Processing 1752/2596: Bill ID 2046119
2025-11-20 14:07:50,286 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:07:50,288 [ERROR] Failed to generate report for bill 2046119: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 259421 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 259421 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-20 14:07:51,300 [INFO] Processing 1753/2596: Bill ID 1897901 2025-11-20 14:07:52,582 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-20 14:07:52,582 [ERROR] Failed to generate report for bill 1897901: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 499565 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 499565 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-20 14:07:53,590 [INFO] Processing 1754/2596: Bill ID 1948482 2025-11-20 14:07:54,487 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-20 14:07:54,488 [ERROR] Failed to generate report for bill 1948482: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 283315 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 283315 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-20 14:07:55,495 [INFO] Skipping bill 1800317 - already processed (1755/2596) 2025-11-20 14:07:55,495 [INFO] Skipping bill 1800156 - already processed (1756/2596) 2025-11-20 14:07:55,495 [INFO] Skipping bill 1854552 - already processed (1757/2596) 2025-11-20 14:07:55,495 [INFO] Skipping bill 1680053 - already processed (1758/2596) 2025-11-20 14:07:55,495 [INFO] Skipping bill 1682772 - already processed (1759/2596) 2025-11-20 14:07:55,495 [INFO] Skipping bill 1737434 - already processed (1760/2596) 2025-11-20 14:07:55,496 [INFO] Skipping bill 1981655 - already processed (1761/2596) 2025-11-20 14:07:55,496 [INFO] Skipping bill 1982851 - already processed (1762/2596) 2025-11-20 14:07:55,496 [INFO] Skipping bill 1934587 - already processed (1763/2596) 2025-11-20 14:07:55,496 [INFO] Skipping bill 1981303 - already processed (1764/2596) 2025-11-20 14:07:55,496 [INFO] Skipping bill 1983676 - already processed (1765/2596) 2025-11-20 14:07:55,496 [INFO] Skipping bill 1969845 - already processed (1766/2596) 2025-11-20 14:07:55,496 [INFO] Skipping bill 
1983355 - already processed (1767/2596) 2025-11-20 14:07:55,496 [INFO] Skipping bill 2009795 - already processed (1768/2596) 2025-11-20 14:07:55,496 [INFO] Skipping bill 1973485 - already processed (1769/2596) 2025-11-20 14:07:55,496 [INFO] Skipping bill 1967494 - already processed (1770/2596) 2025-11-20 14:07:55,496 [INFO] Skipping bill 1973283 - already processed (1771/2596) 2025-11-20 14:07:55,496 [INFO] Skipping bill 1639846 - already processed (1772/2596) 2025-11-20 14:07:55,496 [INFO] Skipping bill 1646426 - already processed (1773/2596) 2025-11-20 14:07:55,496 [INFO] Skipping bill 1673591 - already processed (1774/2596) 2025-11-20 14:07:55,496 [INFO] Skipping bill 1639749 - already processed (1775/2596) 2025-11-20 14:07:55,496 [INFO] Skipping bill 1655379 - already processed (1776/2596) 2025-11-20 14:07:55,496 [INFO] Skipping bill 1630766 - already processed (1777/2596) 2025-11-20 14:07:55,496 [INFO] Skipping bill 1630878 - already processed (1778/2596) 2025-11-20 14:07:55,496 [INFO] Skipping bill 1630898 - already processed (1779/2596) 2025-11-20 14:07:55,496 [INFO] Skipping bill 1645265 - already processed (1780/2596) 2025-11-20 14:07:55,496 [INFO] Skipping bill 1650459 - already processed (1781/2596) 2025-11-20 14:07:55,496 [INFO] Skipping bill 1645172 - already processed (1782/2596) 2025-11-20 14:07:55,496 [INFO] Skipping bill 1630804 - already processed (1783/2596) 2025-11-20 14:07:55,496 [INFO] Skipping bill 1630761 - already processed (1784/2596) 2025-11-20 14:07:55,496 [INFO] Skipping bill 1652712 - already processed (1785/2596) 2025-11-20 14:07:55,496 [INFO] Skipping bill 1633968 - already processed (1786/2596) 2025-11-20 14:07:55,496 [INFO] Skipping bill 1644865 - already processed (1787/2596) 2025-11-20 14:07:55,496 [INFO] Skipping bill 1645061 - already processed (1788/2596) 2025-11-20 14:07:55,496 [INFO] Skipping bill 1809843 - already processed (1789/2596) 2025-11-20 14:07:55,496 [INFO] Skipping bill 1811981 - already processed (1790/2596) 
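Every failure above is the same `context_length_exceeded` error: the serialized `bill_json` for certain bills (150k to nearly 500k tokens) exceeds the model's 128,000-token window before the prompt template is even added. A minimal pre-flight guard would estimate the prompt size and trim the bill's largest text field before `chain.invoke` is called. The sketch below is an assumption-laden illustration, not code from `generate_reports.py`: it uses a crude ~4-characters-per-token heuristic (a real implementation would measure with a tokenizer such as tiktoken), and the `full_text` field name and `truncate_bill_json` helper are hypothetical.

```python
# Sketch: pre-flight guard against context_length_exceeded.
# ASSUMPTIONS: ~4 chars/token heuristic; hypothetical "full_text" field.
import json

MAX_CONTEXT_TOKENS = 128_000    # model limit reported in the errors above
PROMPT_BUDGET_TOKENS = 100_000  # headroom for the prompt template and the response
CHARS_PER_TOKEN = 4             # crude heuristic, not an exact tokenizer

def estimate_tokens(text: str) -> int:
    """Approximate token count from character length."""
    return len(text) // CHARS_PER_TOKEN

def truncate_bill_json(bill: dict, text_field: str = "full_text") -> dict:
    """Shrink the bill's main text field until the serialized bill fits the budget."""
    bill = dict(bill)  # shallow copy; do not mutate the caller's record
    overshoot = estimate_tokens(json.dumps(bill)) - PROMPT_BUDGET_TOKENS
    if overshoot > 0 and text_field in bill:
        keep = max(0, len(bill[text_field]) - overshoot * CHARS_PER_TOKEN)
        bill[text_field] = bill[text_field][:keep]
    return bill
```

With a guard like this, an oversized bill would produce a truncated-but-processable report instead of the 400 errors logged above; the trade-off is silently dropping the tail of the bill text.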
2025-11-20 14:07:55,496 [INFO] Skipping bill 1812040 - already processed (1791/2596)
2025-11-20 14:07:55,496 [INFO] Skipping bill 1798563 - already processed (1792/2596)
2025-11-20 14:07:55,496 [INFO] Skipping bill 1807894 - already processed (1793/2596)
2025-11-20 14:07:55,496 [INFO] Skipping bill 1798580 - already processed (1794/2596)
2025-11-20 14:07:55,496 [INFO] Skipping bill 1800951 - already processed (1795/2596)
2025-11-20 14:07:55,496 [INFO] Skipping bill 1808295 - already processed (1796/2596)
2025-11-20 14:07:55,496 [INFO] Skipping bill 1799462 - already processed (1797/2596)
2025-11-20 14:07:55,496 [INFO] Skipping bill 1808024 - already processed (1798/2596)
2025-11-20 14:07:55,496 [INFO] Skipping bill 1807991 - already processed (1799/2596)
2025-11-20 14:07:55,496 [INFO] Skipping bill 1812376 - already processed (1800/2596)
2025-11-20 14:07:55,496 [INFO] Skipping bill 1822475 - already processed (1801/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1811644 - already processed (1802/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1794980 - already processed (1803/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1808264 - already processed (1804/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1801793 - already processed (1805/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1799221 - already processed (1806/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1822208 - already processed (1807/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1800673 - already processed (1808/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1809026 - already processed (1809/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1812182 - already processed (1810/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1886330 - already processed (1811/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1904645 - already processed (1812/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1911036 - already processed (1813/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1904674 - already processed (1814/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1901323 - already processed (1815/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1904347 - already processed (1816/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1925485 - already processed (1817/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1886222 - already processed (1818/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1905613 - already processed (1819/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1912330 - already processed (1820/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1914968 - already processed (1821/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1925408 - already processed (1822/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1886065 - already processed (1823/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1905445 - already processed (1824/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1905965 - already processed (1825/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1886188 - already processed (1826/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1905894 - already processed (1827/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1912145 - already processed (1828/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1927784 - already processed (1829/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1941702 - already processed (1830/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1929947 - already processed (1831/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1905942 - already processed (1832/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1912012 - already processed (1833/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1905698 - already processed (1834/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1886051 - already processed (1835/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1932239 - already processed (1836/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1932502 - already processed (1837/2596)
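The overflow sizes in the errors vary widely (150,202 up to 499,565 tokens), so hard truncation could discard most of a long bill. An alternative is map-reduce summarization: split the oversized text into chunks that each fit the context window, summarize each chunk separately, then combine the partial summaries. The chunking step can be sketched as below; the default sizes and the overlap are illustrative choices, not values taken from the script in this log.

```python
# Sketch: split oversized bill text into window-sized chunks for
# per-chunk summarization. Sizes are illustrative assumptions.
def chunk_text(text: str, chunk_chars: int = 400_000, overlap_chars: int = 2_000) -> list[str]:
    """Split `text` into pieces of at most `chunk_chars` characters.

    Consecutive chunks share `overlap_chars` characters, so a sentence cut
    at a boundary still appears whole in one of the two chunks.
    """
    if chunk_chars <= overlap_chars:
        raise ValueError("chunk_chars must exceed overlap_chars")
    step = chunk_chars - overlap_chars
    return [text[i:i + chunk_chars] for i in range(0, len(text), step)] or [""]
```

Each chunk would then go through the existing report chain (or a lighter per-chunk summary prompt), with a final pass merging the chunk summaries into one report. This costs extra API calls per long bill but preserves content that truncation would drop.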
2025-11-20 14:07:55,497 [INFO] Skipping bill 1885937 - already processed (1838/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1900803 - already processed (1839/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1905712 - already processed (1840/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1905995 - already processed (1841/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1902641 - already processed (1842/2596)
2025-11-20 14:07:55,497 [INFO] Skipping bill 1905891 - already processed (1843/2596)
2025-11-20 14:07:55,498 [INFO] Skipping bill 1905860 - already processed (1844/2596)
2025-11-20 14:07:55,498 [INFO] Skipping bill 1908254 - already processed (1845/2596)
2025-11-20 14:07:55,498 [INFO] Skipping bill 1905920 - already processed (1846/2596)
2025-11-20 14:07:55,498 [INFO] Skipping bill 1886241 - already processed (1847/2596)
2025-11-20 14:07:55,498 [INFO] Skipping bill 1886007 - already processed (1848/2596)
2025-11-20 14:07:55,498 [INFO] Skipping bill 1896347 - already processed (1849/2596)
2025-11-20 14:07:55,498 [INFO] Skipping bill 1905982 - already processed (1850/2596)
2025-11-20 14:07:55,498 [INFO] Skipping bill 1898426 - already processed (1851/2596)
2025-11-20 14:07:55,498 [INFO] Skipping bill 1791614 - already processed (1852/2596)
2025-11-20 14:07:55,498 [INFO] Skipping bill 1792210 - already processed (1853/2596)
2025-11-20 14:07:55,498 [INFO] Skipping bill 1825997 - already processed (1854/2596)
2025-11-20 14:07:55,498 [INFO] Skipping bill 1792205 - already processed (1855/2596)
2025-11-20 14:07:55,498 [INFO] Skipping bill 1801141 - already processed (1856/2596)
2025-11-20 14:07:55,498 [INFO] Skipping bill 1796759 - already processed (1857/2596)
2025-11-20 14:07:55,498 [INFO] Skipping bill 1794124 - already processed (1858/2596)
2025-11-20 14:07:55,498 [INFO] Skipping bill 1680711 - already processed (1859/2596)
2025-11-20 14:07:55,498 [INFO] Skipping bill 1686234 - already processed (1860/2596)
2025-11-20 14:07:55,498 [INFO] Skipping bill 1813390 - already processed (1861/2596)
2025-11-20 14:07:55,498 [INFO] Skipping bill 1797745 - already processed (1862/2596)
2025-11-20 14:07:55,498 [INFO] Skipping bill 1810331 - already processed (1863/2596)
2025-11-20 14:07:55,498 [INFO] Skipping bill 1813358 - already processed (1864/2596)
2025-11-20 14:07:55,498 [INFO] Skipping bill 1657734 - already processed (1865/2596)
2025-11-20 14:07:55,498 [INFO] Processing 1866/2596: Bill ID 1644054
2025-11-20 14:07:56,740 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:07:56,741 [ERROR] Failed to generate report for bill 1644054: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 410788 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:07:57,749 [INFO] Processing 1867/2596: Bill ID 1645282
2025-11-20 14:07:58,992 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:07:58,993 [ERROR] Failed to generate report for bill 1645282: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 410770 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:08:00,004 [INFO] Processing 1868/2596: Bill ID 1644063
2025-11-20 14:08:00,631 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:08:00,631 [ERROR] Failed to generate report for bill 1644063: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 224071 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last): [stack identical to the context_length_exceeded traceback above; repeated frames omitted]
2025-11-20 14:08:01,638 [INFO] Processing 1869/2596: Bill ID 1645384
2025-11-20 14:08:02,271 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:08:02,272 [ERROR] Failed to generate report for bill 1645384: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 224065 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last): [stack identical to the context_length_exceeded traceback above; repeated frames omitted]
2025-11-20 14:08:03,287 [INFO] Processing 1870/2596: Bill ID 1645468
2025-11-20 14:08:03,996 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:08:03,997 [ERROR] Failed to generate report for bill 1645468: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 242533 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last): [stack identical to the context_length_exceeded traceback above; repeated frames omitted]
2025-11-20 14:08:04,048 [INFO] Saved 2579 reports to data/bill_reports.json
2025-11-20 14:08:04,049 [INFO] Progress: 1870/2596 - Processed: 14, Skipped: 1773, Errors: 83
2025-11-20 14:08:05,054 [INFO] Processing 1871/2596: Bill ID 1796787
2025-11-20 14:08:06,197 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:08:06,200 [ERROR] Failed to generate report for bill 1796787: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 436514 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last): [stack identical to the context_length_exceeded traceback above; repeated frames omitted]
2025-11-20 14:08:07,209 [INFO] Processing 1872/2596: Bill ID 1643905
2025-11-20 14:08:08,006 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:08:08,013 [ERROR] Failed to generate report for bill 1643905: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 242552 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last): [stack identical to the context_length_exceeded traceback above; repeated frames omitted]
2025-11-20 14:08:09,024 [INFO] Processing 1873/2596: Bill ID 1796722
2025-11-20 14:08:10,138 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:08:10,139 [ERROR] Failed to generate report for bill 1796722: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 436532 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last): [stack identical to the context_length_exceeded traceback above; repeated frames omitted]
2025-11-20 14:08:11,170 [INFO] Skipping bill 1952329 - already processed (1874/2596)
2025-11-20 14:08:11,171 [INFO] Skipping bill 1964254 - already processed (1875/2596)
2025-11-20 14:08:11,171 [INFO] Skipping bill 1904212 - already processed (1876/2596)
2025-11-20 14:08:11,171 [INFO] Skipping bill 1903879 - already processed (1877/2596)
2025-11-20 14:08:11,171 [INFO] Skipping bill 1930459 - already processed (1878/2596)
2025-11-20 14:08:11,171 [INFO] Skipping bill 1938736 - already processed (1879/2596)
2025-11-20 14:08:11,171 [INFO] Skipping bill 1941657 - already processed (1880/2596)
2025-11-20 14:08:11,171 [INFO] Skipping bill 1932498 - already processed (1881/2596)
2025-11-20 14:08:11,171 [INFO] Skipping bill 1898840 - already processed (1882/2596)
2025-11-20 14:08:11,171 [INFO] Skipping bill 1903962 - already processed (1883/2596)
2025-11-20 14:08:11,171 [INFO] Skipping bill 1943677 - already processed (1884/2596)
2025-11-20 14:08:11,171 [INFO] Skipping bill 1911202 - already processed (1885/2596)
2025-11-20 14:08:11,171 [INFO] Skipping bill 1898343 - already processed (1886/2596)
2025-11-20 14:08:11,171 [INFO] Skipping bill 1930701 - already processed (1887/2596)
2025-11-20 14:08:11,172 [INFO] Skipping bill 1911699 - already processed (1888/2596)
2025-11-20 14:08:11,172 [INFO] Skipping bill 1985707 - already processed (1889/2596)
2025-11-20 14:08:11,172 [INFO] Skipping bill 2025140 - already processed (1890/2596)
2025-11-20 14:08:11,172 [INFO] Processing 1891/2596: Bill ID 1916784
2025-11-20 14:08:11,896 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:08:11,897 [ERROR] Failed to generate report for bill 1916784: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 217357 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last): [stack identical to the context_length_exceeded traceback above; repeated frames omitted]
2025-11-20 14:08:12,906 [INFO] Processing 1892/2596: Bill ID 1908012
2025-11-20 14:08:14,075 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:08:14,076 [ERROR] Failed to generate report for bill 1908012: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 458968 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 458968 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-20 14:08:15,086 [INFO] Processing 1893/2596: Bill ID 1907961 2025-11-20 14:08:16,408 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-20 14:08:16,410 [ERROR] Failed to generate report for bill 1907961: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 458948 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 458948 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-20 14:08:17,418 [INFO] Processing 1894/2596: Bill ID 1907826 2025-11-20 14:08:18,344 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-20 14:08:18,346 [ERROR] Failed to generate report for bill 1907826: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 284007 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 284007 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-20 14:08:19,359 [INFO] Processing 1895/2596: Bill ID 2023840 2025-11-20 14:08:21,250 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-20 14:08:21,252 [ERROR] Failed to generate report for bill 2023840: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 709732 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 709732 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-20 14:08:22,262 [INFO] Processing 1896/2596: Bill ID 1907778 2025-11-20 14:08:23,063 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-20 14:08:23,064 [ERROR] Failed to generate report for bill 1907778: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 284021 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 284021 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-20 14:08:24,075 [INFO] Skipping bill 1691917 - already processed (1897/2596) 2025-11-20 14:08:24,075 [INFO] Skipping bill 1695960 - already processed (1898/2596) 2025-11-20 14:08:24,075 [INFO] Skipping bill 1850601 - already processed (1899/2596) 2025-11-20 14:08:24,075 [INFO] Skipping bill 1838098 - already processed (1900/2596) 2025-11-20 14:08:24,075 [INFO] Skipping bill 1842521 - already processed (1901/2596) 2025-11-20 14:08:24,076 [INFO] Skipping bill 1809518 - already processed (1902/2596) 2025-11-20 14:08:24,076 [INFO] Skipping bill 1839623 - already processed (1903/2596) 2025-11-20 14:08:24,076 [INFO] Skipping bill 1836854 - already processed (1904/2596) 2025-11-20 14:08:24,076 [INFO] Skipping bill 1828203 - already processed (1905/2596) 2025-11-20 14:08:24,076 [INFO] Skipping bill 1823415 - already processed (1906/2596) 2025-11-20 14:08:24,076 [INFO] Processing 1907/2596: Bill ID 1809702 2025-11-20 14:08:25,240 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-11-20 14:08:25,241 [ERROR] 
Failed to generate report for bill 1809702: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 287475 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 287475 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:08:26,248 [INFO] Processing 1908/2596: Bill ID 1812739
2025-11-20 14:08:27,253 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:08:27,255 [ERROR] Failed to generate report for bill 1812739: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 287482 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
    ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 287482 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:08:28,262 [INFO] Skipping bill 1993190 - already processed (1909/2596)
2025-11-20 14:08:28,262 [INFO] Skipping bill 2009723 - already processed (1910/2596)
2025-11-20 14:08:28,262 [INFO] Skipping bill 1970932 - already processed (1911/2596)
2025-11-20 14:08:28,262 [INFO] Skipping bill 1990795 - already processed (1912/2596)
2025-11-20 14:08:28,262 [INFO] Skipping bill 1966877 - already processed (1913/2596)
2025-11-20 14:08:28,262 [INFO] Skipping bill 1972008 - already processed (1914/2596)
2025-11-20 14:08:28,263 [INFO] Skipping bill 1994548 - already processed (1915/2596)
2025-11-20 14:08:28,263 [INFO] Skipping bill 1991745 - already processed (1916/2596)
2025-11-20 14:08:28,263 [INFO] Skipping bill 2010818 - already processed (1917/2596)
2025-11-20 14:08:28,263 [INFO] Skipping bill 2003316 - already processed (1918/2596)
2025-11-20 14:08:28,263 [INFO] Skipping bill 2021830 - already processed (1919/2596)
2025-11-20 14:08:28,263 [INFO] Skipping bill 2009667 - already processed (1920/2596)
2025-11-20 14:08:28,263 [INFO] Skipping bill
2011559 - already processed (1921/2596)
2025-11-20 14:08:28,263 [INFO] Skipping bill 1981081 - already processed (1922/2596)
2025-11-20 14:08:28,263 [INFO] Skipping bill 1990559 - already processed (1923/2596)
2025-11-20 14:08:28,263 [INFO] Skipping bill 1968858 - already processed (1924/2596)
2025-11-20 14:08:28,263 [INFO] Skipping bill 1841344 - already processed (1925/2596)
2025-11-20 14:08:28,263 [INFO] Skipping bill 1837111 - already processed (1926/2596)
2025-11-20 14:08:28,263 [INFO] Skipping bill 1783445 - already processed (1927/2596)
2025-11-20 14:08:28,263 [INFO] Skipping bill 1854251 - already processed (1928/2596)
2025-11-20 14:08:28,264 [INFO] Skipping bill 1867071 - already processed (1929/2596)
2025-11-20 14:08:28,264 [INFO] Skipping bill 1782940 - already processed (1930/2596)
2025-11-20 14:08:28,264 [INFO] Skipping bill 1780646 - already processed (1931/2596)
2025-11-20 14:08:28,264 [INFO] Skipping bill 1781005 - already processed (1932/2596)
2025-11-20 14:08:28,264 [INFO] Processing 1933/2596: Bill ID 1709614
2025-11-20 14:08:30,497 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:08:30,502 [ERROR] Failed to generate report for bill 1709614: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 980737 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
    ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 980737 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:08:31,512 [INFO] Processing 1934/2596: Bill ID 1709655
2025-11-20 14:08:34,427 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:08:34,429 [ERROR] Failed to generate report for bill 1709655: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 982574 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
    ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 982574 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:08:35,441 [INFO] Skipping bill 2034598 - already processed (1935/2596)
2025-11-20 14:08:35,442 [INFO] Skipping bill 2034722 - already processed (1936/2596)
2025-11-20 14:08:35,442 [INFO] Skipping bill 2038518 - already processed (1937/2596)
2025-11-20 14:08:35,442 [INFO] Skipping bill 2039752 - already processed (1938/2596)
2025-11-20 14:08:35,442 [INFO] Skipping bill 2044087 - already processed (1939/2596)
2025-11-20 14:08:35,442 [INFO] Skipping bill 2042614 - already processed (1940/2596)
2025-11-20 14:08:35,442 [INFO] Skipping bill 2045155 - already processed (1941/2596)
2025-11-20 14:08:35,442 [INFO] Skipping bill 2045662 - already processed (1942/2596)
2025-11-20 14:08:35,442 [INFO] Processing 1943/2596: Bill ID 1974122
2025-11-20 14:08:38,013 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:08:38,014 [ERROR] Failed to generate report for bill 1974122: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens.
However, your messages resulted in 1009931 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
    ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1009931 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:08:39,020 [INFO] Processing 1944/2596: Bill ID 1974279
2025-11-20 14:08:41,388 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:08:41,389 [ERROR] Failed to generate report for bill 1974279: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1009921 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
    ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1009921 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:08:42,399 [INFO] Processing 1945/2596: Bill ID 2047792
2025-11-20 14:09:11,345 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2025-11-20 14:09:11,352 [INFO] Skipping bill 1842729 - already processed (1946/2596)
2025-11-20 14:09:11,352 [INFO] Skipping bill 1842887 - already processed (1947/2596)
2025-11-20 14:09:11,352 [INFO] Skipping bill 1939111 - already processed (1948/2596)
2025-11-20 14:09:11,352 [INFO] Skipping bill 1895001 - already processed (1949/2596)
2025-11-20 14:09:11,352 [INFO] Skipping bill 1945993 - already processed (1950/2596)
2025-11-20 14:09:11,352 [INFO] Skipping bill 1945813 - already processed (1951/2596)
2025-11-20 14:09:11,352 [INFO] Skipping bill 1774433 - already processed (1952/2596)
2025-11-20 14:09:11,352 [INFO] Skipping bill 1884990 - already processed (1953/2596)
2025-11-20 14:09:11,352 [INFO] Skipping bill 1882572 - already processed (1954/2596)
2025-11-20 14:09:11,352 [INFO] Skipping bill 1784131 - already processed (1955/2596)
2025-11-20 14:09:11,352 [INFO] Skipping
bill 1873726 - already processed (1956/2596)
2025-11-20 14:09:11,352 [INFO] Skipping bill 1882205 - already processed (1957/2596)
2025-11-20 14:09:11,352 [INFO] Skipping bill 1860116 - already processed (1958/2596)
2025-11-20 14:09:11,352 [INFO] Skipping bill 1835790 - already processed (1959/2596)
2025-11-20 14:09:11,352 [INFO] Skipping bill 1835624 - already processed (1960/2596)
2025-11-20 14:09:11,353 [INFO] Skipping bill 1876647 - already processed (1961/2596)
2025-11-20 14:09:11,353 [INFO] Skipping bill 1887447 - already processed (1962/2596)
2025-11-20 14:09:11,353 [INFO] Skipping bill 1898165 - already processed (1963/2596)
2025-11-20 14:09:11,353 [INFO] Skipping bill 1780760 - already processed (1964/2596)
2025-11-20 14:09:11,353 [INFO] Skipping bill 1887744 - already processed (1965/2596)
2025-11-20 14:09:11,353 [INFO] Skipping bill 1782128 - already processed (1966/2596)
2025-11-20 14:09:11,353 [INFO] Skipping bill 1887739 - already processed (1967/2596)
2025-11-20 14:09:11,353 [INFO] Skipping bill 1885322 - already processed (1968/2596)
2025-11-20 14:09:11,353 [INFO] Skipping bill 1887646 - already processed (1969/2596)
2025-11-20 14:09:11,353 [INFO] Skipping bill 1897119 - already processed (1970/2596)
2025-11-20 14:09:11,353 [INFO] Skipping bill 1782539 - already processed (1971/2596)
2025-11-20 14:09:11,353 [INFO] Skipping bill 1880117 - already processed (1972/2596)
2025-11-20 14:09:11,353 [INFO] Skipping bill 1810734 - already processed (1973/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1887671 - already processed (1974/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1883053 - already processed (1975/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1861062 - already processed (1976/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1775461 - already processed (1977/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1792331 - already processed (1978/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1765384 - already processed (1979/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1863023 - already processed (1980/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1883034 - already processed (1981/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1886748 - already processed (1982/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1886756 - already processed (1983/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1885278 - already processed (1984/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1784087 - already processed (1985/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1886439 - already processed (1986/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1877586 - already processed (1987/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1888775 - already processed (1988/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1773844 - already processed (1989/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1857956 - already processed (1990/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1775721 - already processed (1991/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1861016 - already processed (1992/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1884504 - already processed (1993/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1892975 - already processed (1994/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1886714 - already processed (1995/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1877214 - already processed (1996/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1779520 - already processed (1997/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1882161 - already processed (1998/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1793734 - already processed (1999/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1885501 - already processed (2000/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1887169 - already processed (2001/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1877680 - already processed (2002/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1887282 - already processed (2003/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1774766 - already processed (2004/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1774961 - already processed (2005/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1866654 - already processed (2006/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1779127 - already processed (2007/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1882224 - already processed (2008/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1892198 - already processed (2009/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1759862 - already processed (2010/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1888377 - already processed (2011/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1894701 - already processed (2012/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1864751 - already processed (2013/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1772453 - already processed (2014/2596)
2025-11-20 14:09:11,354 [INFO] Skipping bill 1885309 - already processed (2015/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1886447 - already processed (2016/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1848736 - already processed (2017/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1884301 - already processed (2018/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1881976 - already processed (2019/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1885426 - already processed (2020/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1775334 - already processed (2021/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1884442 - already processed (2022/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1881980 - already processed (2023/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1893238 - already processed (2024/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1865594 - already processed (2025/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1872732 - already processed (2026/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1885341 - already processed (2027/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1764018 - already processed (2028/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1887315 - already processed (2029/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1751404 - already processed (2030/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1888249 - already processed (2031/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1885249 - already processed (2032/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1881398 - already processed (2033/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1866637 - already processed (2034/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1770194 - already processed (2035/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1775580 - already processed (2036/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1784705 - already processed (2037/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1831382 - already processed (2038/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1885274 - already processed (2039/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1892393 - already processed (2040/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1877691 - already processed (2041/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1776083 - already processed (2042/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1760978 - already processed (2043/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1764682 - already processed (2044/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1880344 - already processed (2045/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1886698 - already processed (2046/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1876488 - already processed (2047/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1765330 - already processed (2048/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1887359 - already processed (2049/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1771744 - already processed (2050/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1831359 - already processed (2051/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1774102 - already processed (2052/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1774479 - already processed (2053/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1794846 - already processed (2054/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1894867 - already processed (2055/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1774859 - already processed (2056/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1884522 - already processed (2057/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1866979 - already processed (2058/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1886705 - already processed (2059/2596)
2025-11-20 14:09:11,355 [INFO] Skipping bill 1898170 - already processed (2060/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1885330 - already processed (2061/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1792286 - already processed (2062/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1892877 - already processed (2063/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1884177 - already processed (2064/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1774713 - already processed (2065/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1774626 - already processed (2066/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1884513 - already processed (2067/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1887362 - already processed (2068/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1893236 - already processed (2069/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1883668 - already processed (2070/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1831371 - already processed (2071/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1885671 - already processed (2072/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1885535 - already processed (2073/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1888766 - already processed (2074/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1892506 - already processed (2075/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1892532 - already processed (2076/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1878820 - already processed (2077/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1884926 - already processed (2078/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1895881 - already processed (2079/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1778284 - already processed (2080/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1770920 - already processed (2081/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1650801 - already processed (2082/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1883378 - already processed (2083/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1683970 - already processed (2084/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1772792 - already processed (2085/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1759623 - already processed (2086/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1760525 - already processed (2087/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1862531 - already processed (2088/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1767461 - already processed (2089/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1776485 - already processed (2090/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1871231 - already processed (2091/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1887711 - already processed (2092/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1893243 - already processed (2093/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1701254 - already processed (2094/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1897456 - already processed (2095/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1775615 - already processed (2096/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1794843 - already processed (2097/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1810720 - already processed (2098/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1894308 - already processed (2099/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1894683 - already processed (2100/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1842456 - already processed (2101/2596)
2025-11-20 14:09:11,356 [INFO] Skipping bill 1885281 - already processed (2102/2596)
2025-11-20 14:09:11,357 [INFO] Skipping bill 1759897 - already processed (2103/2596)
2025-11-20 14:09:11,357 [INFO] Skipping bill 1860079 - already processed (2104/2596)
2025-11-20 14:09:11,357 [INFO] Skipping bill 1746098 - already processed (2105/2596)
2025-11-20 14:09:11,357 [INFO] Skipping bill 1897489 - already processed (2106/2596)
2025-11-20 14:09:11,357 [INFO] Skipping bill 1887287 - already processed (2107/2596)
2025-11-20 14:09:11,357 [INFO] Skipping bill 1885252 - already processed (2108/2596)
2025-11-20 14:09:11,357 [INFO] Skipping bill 1892936 - already processed (2109/2596)
2025-11-20 14:09:11,357 [INFO] Skipping bill 1732925 - already processed (2110/2596)
2025-11-20 14:09:11,357 [INFO] Skipping bill 1746069 - already processed (2111/2596)
2025-11-20 14:09:11,357 [INFO] Skipping bill 1774408 - already processed (2112/2596)
2025-11-20 14:09:11,357 [INFO] Skipping bill 1772182 - already processed (2113/2596)
2025-11-20 14:09:11,357 [INFO] Skipping bill 1884422 - already processed (2114/2596)
2025-11-20 14:09:11,357 [INFO] Skipping bill 1687118 - already processed (2115/2596)
2025-11-20 14:09:11,357 [INFO] Skipping bill 1784726 - already processed (2116/2596)
2025-11-20 14:09:11,357 [INFO] Skipping bill 1762912 - already processed (2117/2596)
2025-11-20 14:09:11,357 [INFO] Skipping bill 1898405 - already processed (2118/2596)
2025-11-20 14:09:11,357 [INFO] Processing 2119/2596: Bill ID 1884189
2025-11-20 14:09:12,657 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:09:12,657 [ERROR] Failed to generate report for bill 1884189: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 553725 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 553725 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:09:13,665 [INFO] Skipping bill 1899847 - already processed (2120/2596)
2025-11-20 14:09:13,665 [INFO] Skipping bill 1732984 - already processed (2121/2596)
2025-11-20 14:09:13,666 [INFO] Skipping bill 1746089 - already processed (2122/2596)
2025-11-20 14:09:13,666 [INFO] Skipping bill 1766726 - already processed (2123/2596)
2025-11-20 14:09:13,666 [INFO] Skipping bill 1769804 - already processed (2124/2596)
2025-11-20 14:09:13,666 [INFO] Skipping bill 1897097 - already processed (2125/2596)
2025-11-20 14:09:13,666 [INFO] Processing 2126/2596: Bill ID 1774177
2025-11-20 14:09:15,085 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:09:15,086 [ERROR] Failed to generate report for bill 1774177: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 563143 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 563143 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:09:16,096 [INFO] Skipping bill 1757049 - already processed (2127/2596)
2025-11-20 14:09:16,096 [INFO] Skipping bill 1784298 - already processed (2128/2596)
2025-11-20 14:09:16,096 [INFO] Skipping bill 1785108 - already processed (2129/2596)
2025-11-20 14:09:16,096 [INFO] Skipping bill 1772128 - already processed (2130/2596)
2025-11-20 14:09:16,096 [INFO] Skipping bill 1879910 - already processed (2131/2596)
2025-11-20 14:09:16,097 [INFO] Skipping bill 1777717 - already processed (2132/2596)
2025-11-20 14:09:16,097 [INFO] Skipping bill 1843401 - already processed (2133/2596)
2025-11-20 14:09:16,097 [INFO] Skipping bill 1774203 - already processed (2134/2596)
2025-11-20 14:09:16,097 [INFO] Skipping bill 1892268 - already processed (2135/2596)
2025-11-20 14:09:16,097 [INFO] Skipping bill 1774216 - already processed (2136/2596)
2025-11-20 14:09:16,097 [INFO] Skipping bill 1868870 - already processed (2137/2596)
2025-11-20 14:09:16,097 [INFO] Skipping bill 1770792 - already processed (2138/2596)
2025-11-20 14:09:16,097 [INFO] Skipping bill 1894823 - already processed (2139/2596)
2025-11-20 14:09:16,097 [INFO] Skipping bill 1885629 - already processed (2140/2596)
2025-11-20 14:09:16,097 [INFO] Skipping bill 1866980 - already processed (2141/2596)
2025-11-20 14:09:16,097 [INFO] Skipping bill 1826236 - already processed (2142/2596)
2025-11-20 14:09:16,097 [INFO] Skipping bill 1860115 - already processed (2143/2596)
2025-11-20 14:09:16,097 [INFO] Skipping bill 1767424 - already processed (2144/2596)
2025-11-20 14:09:16,097 [INFO] Skipping bill 1877069 - already processed (2145/2596)
2025-11-20 14:09:16,097 [INFO] Skipping bill 1865576 - already processed (2146/2596)
2025-11-20 14:09:16,097 [INFO] Skipping bill 1771076 - already processed (2147/2596)
2025-11-20 14:09:16,097 [INFO] Skipping bill 1755580 - already processed (2148/2596)
2025-11-20 14:09:16,097 [INFO] Skipping bill 1885029 - already processed (2149/2596)
2025-11-20 14:09:16,097 [INFO] Skipping bill 1770955 - already processed (2150/2596)
2025-11-20 14:09:16,097 [INFO] Skipping bill 1772617 - already processed (2151/2596)
2025-11-20 14:09:16,097 [INFO] Skipping bill 1760193 - already processed (2152/2596)
2025-11-20 14:09:16,097 [INFO] Skipping bill 1871212 - already processed (2153/2596)
2025-11-20 14:09:16,097 [INFO] Skipping bill 1887934 - already processed (2154/2596)
2025-11-20 14:09:16,097 [INFO] Skipping bill 1879177 - already processed (2155/2596)
2025-11-20 14:09:16,097 [INFO] Skipping bill 1897536 - already processed (2156/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1854133 - already processed (2157/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1761508 - already processed (2158/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1777284 - already processed (2159/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1774079 - already processed (2160/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1896271 - already processed (2161/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1897312 - already processed (2162/2596)
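The repeated `context_length_exceeded` failures above are only discovered after a round trip to the API. A cheaper pre-flight check (not part of `generate_reports.py`; the function names and the chars-per-token heuristic here are illustrative assumptions) would be to estimate the prompt size locally and skip or flag oversized bills before invoking the chain:

```python
import json

# Assumed limits: the log shows a 128k-token context window; ~4 characters
# per token is a rough heuristic for English/JSON text, not an exact count.
MAX_CONTEXT_TOKENS = 128_000
CHARS_PER_TOKEN = 4

def estimated_tokens(text: str) -> int:
    """Cheap local token estimate: roughly one token per 4 characters."""
    return len(text) // CHARS_PER_TOKEN

def fits_context(bill: dict, budget: int = MAX_CONTEXT_TOKENS) -> bool:
    """Return True if the serialized bill JSON is likely to fit the window."""
    return estimated_tokens(json.dumps(bill)) < budget
```

For an exact count, a tokenizer library such as `tiktoken` could replace the heuristic; the sketch above stays stdlib-only so the check costs nothing to run per bill.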
2025-11-20 14:09:16,098 [INFO] Skipping bill 1774750 - already processed (2163/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1873661 - already processed (2164/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1782516 - already processed (2165/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1782446 - already processed (2166/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1866649 - already processed (2167/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1866664 - already processed (2168/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1707867 - already processed (2169/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1872167 - already processed (2170/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1759875 - already processed (2171/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1789214 - already processed (2172/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1872153 - already processed (2173/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1760229 - already processed (2174/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1774942 - already processed (2175/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1694059 - already processed (2176/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1829219 - already processed (2177/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1679271 - already processed (2178/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1883365 - already processed (2179/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1780777 - already processed (2180/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1707919 - already processed (2181/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1860113 - already processed (2182/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1781933 - already processed (2183/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1751388 - already processed (2184/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1754500 - already processed (2185/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1772123 - already processed (2186/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1892924 - already processed (2187/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1778422 - already processed (2188/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1897294 - already processed (2189/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1769557 - already processed (2190/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1747003 - already processed (2191/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1775420 - already processed (2192/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1885460 - already processed (2193/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1778494 - already processed (2194/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1778507 - already processed (2195/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1746072 - already processed (2196/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1747808 - already processed (2197/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1764055 - already processed (2198/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1765960 - already processed (2199/2596)
2025-11-20 14:09:16,098 [INFO] Skipping bill 1766587 - already processed (2200/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1766736 - already processed (2201/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1771518 - already processed (2202/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1772577 - already processed (2203/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1772933 - already processed (2204/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1773303 - already processed (2205/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1775354 - already processed (2206/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1777649 - already processed (2207/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1783786 - already processed (2208/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1783927 - already processed (2209/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1791735 - already processed (2210/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1791984 - already processed (2211/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1860914 - already processed (2212/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1874964 - already processed (2213/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1876702 - already processed (2214/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1878298 - already processed (2215/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1878970 - already processed (2216/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1878883 - already processed (2217/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1880262 - already processed (2218/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1880301 - already processed (2219/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1880312 - already processed (2220/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1882770 - already processed (2221/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1889897 - already processed (2222/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1892711 - already processed (2223/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1897258 - already processed (2224/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1881528 - already processed (2225/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1782893 - already processed (2226/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1834554 - already processed (2227/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1774082 - already processed (2228/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1783631 - already processed (2229/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1879351 - already processed (2230/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1707921 - already processed (2231/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1872751 - already processed (2232/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1848738 - already processed (2233/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1882577 - already processed (2234/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1880072 - already processed (2235/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1880345 - already processed (2236/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1892804 - already processed (2237/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1860940 - already processed (2238/2596)
2025-11-20 14:09:16,099 [INFO] Skipping bill 1766003 - already processed (2239/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1775441 - already processed (2240/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1758619 - already processed (2241/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1894461 - already processed (2242/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1778171 - already processed (2243/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1778004 - already processed (2244/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1832839 - already processed (2245/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1774844 - already processed (2246/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1751449 - already processed (2247/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1751346 - already processed (2248/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1759080 - already processed (2249/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1882756 - already processed (2250/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1882766 - already processed (2251/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1887196 - already processed (2252/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1889949 - already processed (2253/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1887718 - already processed (2254/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1896232 - already processed (2255/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1783562 - already processed (2256/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1681772 - already processed (2257/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1871711 - already processed (2258/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1874986 - already processed (2259/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1772204 - already processed (2260/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1884912 - already processed (2261/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1888175 - already processed (2262/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1832721 - already processed (2263/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1887649 - already processed (2264/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1887704 - already processed (2265/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1881672 - already processed (2266/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1777454 - already processed (2267/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1882397 - already processed (2268/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1766671 - already processed (2269/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1775036 - already processed (2270/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1694305 - already processed (2271/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1863407 - already processed (2272/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1746051 - already processed (2273/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1882537 - already processed (2274/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1873551 - already processed (2275/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1762960 - already processed (2276/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1887303 - already processed (2277/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1887118 - already processed (2278/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1775679 - already processed (2279/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1882373 - already processed (2280/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1862520 - already processed (2281/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1886817 - already processed (2282/2596)
2025-11-20 14:09:16,100 [INFO] Skipping bill 1750558 - already processed (2283/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1750336 - already processed (2284/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1694173 - already processed (2285/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1864746 - already processed (2286/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1887915 - already processed (2287/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1774093 - already processed (2288/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1650659 - already processed (2289/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1694050 - already processed (2290/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1771092 - already processed (2291/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1876599 - already processed (2292/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1835788 - already processed (2293/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1782691 - already processed (2294/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1876668 - already processed (2295/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1729737 - already processed (2296/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1766627 - already processed (2297/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1885388 - already processed (2298/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1887130 - already processed (2299/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1775597 - already processed (2300/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1793999 - already processed (2301/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1789198 - already processed (2302/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1888330 - already processed (2303/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1882746 - already processed (2304/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1694182 - already processed (2305/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1860920 - already processed (2306/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1774448 - already processed (2307/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1774405 - already processed (2308/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1876990 - already processed (2309/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1876679 - already processed (2310/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1881973 - already processed (2311/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1717622 - already processed (2312/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1885510 - already processed (2313/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1871269 - already processed (2314/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1774266 - already processed (2315/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1785924 - already processed (2316/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1779428 - already processed (2317/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1775195 - already processed (2318/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1775134 - already processed (2319/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1743524 - already processed (2320/2596)
2025-11-20 14:09:16,101 [INFO] Skipping bill 1757473 - already processed (2321/2596)
2025-11-20 14:09:16,101 [INFO] Processing 2322/2596: Bill ID 1857970
2025-11-20 14:09:16,762 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:09:16,763 [ERROR] Failed to generate report for bill 1857970: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 267230 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
    ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
    ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
    ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 267230 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:09:17,769 [INFO] Skipping bill 1883678 - already processed (2323/2596)
2025-11-20 14:09:17,769 [INFO] Processing 2324/2596: Bill ID 1897245
2025-11-20 14:09:19,207 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:09:19,208 [ERROR] Failed to generate report for bill 1897245: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 614802 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 614802 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:09:20,221 [INFO] Skipping bill 1894517 - already processed (2325/2596)
2025-11-20 14:09:20,221 [INFO] Processing 2326/2596: Bill ID 1898241
2025-11-20 14:09:21,213 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:09:21,215 [ERROR] Failed to generate report for bill 1898241: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 355244 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 355244 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:09:22,225 [INFO] Processing 2327/2596: Bill ID 1879854
2025-11-20 14:09:23,195 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:09:23,196 [ERROR] Failed to generate report for bill 1879854: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 380288 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 380288 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:09:24,210 [INFO] Skipping bill 1888278 - already processed (2328/2596)
2025-11-20 14:09:24,210 [INFO] Skipping bill 1879169 - already processed (2329/2596)
2025-11-20 14:09:24,210 [INFO] Skipping bill 1860989 - already processed (2330/2596)
2025-11-20 14:09:24,210 [INFO] Skipping bill 1758024 - already processed (2331/2596)
2025-11-20 14:09:24,211 [INFO] Skipping bill 1863932 - already processed (2332/2596)
2025-11-20 14:09:24,242 [INFO] Processing 2333/2596: Bill ID 1771174
2025-11-20 14:09:25,057 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:09:25,058 [ERROR] Failed to generate report for bill 1771174: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 305590 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 305590 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:09:26,068 [INFO] Skipping bill 1772600 - already processed (2334/2596)
2025-11-20 14:09:26,068 [INFO] Skipping bill 1760911 - already processed (2335/2596)
2025-11-20 14:09:26,068 [INFO] Skipping bill 1789291 - already processed (2336/2596)
2025-11-20 14:09:26,068 [INFO] Skipping bill 1764694 - already processed (2337/2596)
2025-11-20 14:09:26,068 [INFO] Skipping bill 1764770 - already processed (2338/2596)
2025-11-20 14:09:26,069 [INFO] Skipping bill 1884949 - already processed (2339/2596)
2025-11-20 14:09:26,069 [INFO] Processing 2340/2596: Bill ID 1897528
2025-11-20 14:09:26,536 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:09:26,537 [ERROR] Failed to generate report for bill 1897528: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 136190 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 136190 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:09:26,585 [INFO] Saved 2580 reports to data/bill_reports.json
2025-11-20 14:09:26,585 [INFO] Progress: 2340/2596 - Processed: 15, Skipped: 2219, Errors: 106
2025-11-20 14:09:27,592 [INFO] Processing 2341/2596: Bill ID 1898192
2025-11-20 14:09:28,013 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:09:28,016 [ERROR] Failed to generate report for bill 1898192: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 134736 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 134736 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:09:29,025 [INFO] Skipping bill 1774988 - already processed (2342/2596)
2025-11-20 14:09:29,025 [INFO] Processing 2343/2596: Bill ID 1892419
2025-11-20 14:09:30,338 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:09:30,341 [ERROR] Failed to generate report for bill 1892419: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 553296 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
           ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 553296 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:09:31,350 [INFO] Processing 2344/2596: Bill ID 1884946
2025-11-20 14:09:33,096 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:09:33,097 [ERROR] Failed to generate report for bill 1884946: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 691025 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:09:34,110 [INFO] Processing 2345/2596: Bill ID 1885067
2025-11-20 14:09:35,762 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:09:35,766 [ERROR] Failed to generate report for bill 1885067: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 693396 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:09:36,774 [INFO] Skipping bill 1879669 - already processed (2346/2596)
2025-11-20 14:09:36,775 [INFO] Processing 2347/2596: Bill ID 1897089
2025-11-20 14:09:37,399 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:09:37,400 [ERROR] Failed to generate report for bill 1897089: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 228560 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:09:38,409 [INFO] Skipping bill 2041135 - already processed (2348/2596)
2025-11-20 14:09:38,410 [INFO] Skipping bill 2037217 - already processed (2349/2596)
2025-11-20 14:09:38,410 [INFO] Skipping bill 2022578 - already processed (2350/2596)
2025-11-20 14:09:38,410 [INFO] Skipping bill 2045360 - already processed (2351/2596)
2025-11-20 14:09:38,410 [INFO] Skipping bill 2044380 - already processed (2352/2596)
2025-11-20 14:09:38,413 [INFO] Skipping bill 2040591 - already processed (2353/2596)
2025-11-20 14:09:38,413 [INFO] Skipping bill 2044133 - already processed (2354/2596)
2025-11-20 14:09:38,413 [INFO] Skipping bill 2040128 - already processed (2355/2596)
2025-11-20 14:09:38,413 [INFO] Skipping bill 2022459 - already processed (2356/2596)
2025-11-20 14:09:38,413 [INFO] Skipping bill 2046890 - already processed (2357/2596)
2025-11-20 14:09:38,413 [INFO] Skipping bill 1987991 - already processed (2358/2596)
2025-11-20 14:09:38,413 [INFO] Skipping bill 1948171 - already processed (2359/2596)
2025-11-20 14:09:38,413 [INFO] Processing 2360/2596: Bill ID 2047758
2025-11-20 14:09:54,559 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2025-11-20 14:09:54,617 [INFO] Saved 2581 reports to data/bill_reports.json
2025-11-20 14:09:54,618 [INFO] Progress: 2360/2596 - Processed: 16, Skipped: 2233, Errors: 111
2025-11-20 14:09:54,618 [INFO] Skipping bill 2029224 - already processed (2361/2596)
2025-11-20 14:09:54,618 [INFO] Skipping bill 2044676 - already processed (2362/2596)
2025-11-20 14:09:54,618 [INFO] Skipping bill 2043072 - already processed (2363/2596)
2025-11-20 14:09:54,618 [INFO] Skipping bill 2041169 - already processed (2364/2596)
2025-11-20 14:09:54,618 [INFO] Skipping bill 2015628 - already processed (2365/2596)
2025-11-20 14:09:54,618 [INFO] Skipping bill 2029917 - already processed (2366/2596)
2025-11-20 14:09:54,618 [INFO] Skipping bill 2029601 - already processed (2367/2596)
2025-11-20 14:09:54,618 [INFO] Skipping bill 1988067 - already processed (2368/2596)
2025-11-20 14:09:54,618 [INFO] Skipping bill 1964814 - already processed (2369/2596)
2025-11-20 14:09:54,618 [INFO] Skipping bill 2043727 - already processed (2370/2596)
2025-11-20 14:09:54,618 [INFO] Skipping bill 1988016 - already processed (2371/2596)
2025-11-20 14:09:54,618 [INFO] Skipping bill 2037684 - already processed (2372/2596)
2025-11-20 14:09:54,618 [INFO] Skipping bill 2029576 - already processed (2373/2596)
2025-11-20 14:09:54,618 [INFO] Skipping bill 2008640 - already processed (2374/2596)
2025-11-20 14:09:54,618 [INFO] Skipping bill 2042761 - already processed (2375/2596)
2025-11-20 14:09:54,618 [INFO] Skipping bill 2043628 - already processed (2376/2596)
2025-11-20 14:09:54,618 [INFO] Skipping bill 2039925 - already processed (2377/2596)
2025-11-20 14:09:54,618 [INFO] Skipping bill 1990438 - already processed (2378/2596)
2025-11-20 14:09:54,618 [INFO] Skipping bill 2014950 - already processed (2379/2596)
2025-11-20 14:09:54,618 [INFO] Skipping bill 2046871 - already processed (2380/2596)
2025-11-20 14:09:54,618 [INFO] Skipping bill 2008541 - already processed (2381/2596)
2025-11-20 14:09:54,618 [INFO] Skipping bill 2019807 - already processed (2382/2596)
2025-11-20 14:09:54,618 [INFO] Skipping bill 2032195 - already processed (2383/2596)
2025-11-20 14:09:54,618 [INFO] Skipping bill 2032174 - already processed (2384/2596)
2025-11-20 14:09:54,618 [INFO] Skipping bill 2045181 - already processed (2385/2596)
2025-11-20 14:09:54,619 [INFO] Skipping bill 2035367 - already processed (2386/2596)
2025-11-20 14:09:54,619 [INFO] Skipping bill 2022504 - already processed (2387/2596)
2025-11-20 14:09:54,619 [INFO] Processing 2388/2596: Bill ID 2051717
2025-11-20 14:10:14,554 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2025-11-20 14:10:14,566 [INFO] Skipping bill 2040216 - already processed (2389/2596)
2025-11-20 14:10:14,566 [INFO] Skipping bill 2038243 - already processed (2390/2596)
2025-11-20 14:10:14,566 [INFO] Skipping bill 2038240 - already processed (2391/2596)
2025-11-20 14:10:14,566 [INFO] Skipping bill 1958579 - already processed (2392/2596)
2025-11-20 14:10:14,566 [INFO] Skipping bill 2041151 - already processed (2393/2596)
2025-11-20 14:10:14,566 [INFO] Skipping bill 2040068 - already processed (2394/2596)
2025-11-20 14:10:14,566 [INFO] Processing 2395/2596: Bill ID 2051901
2025-11-20 14:10:31,957 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2025-11-20 14:10:31,959 [INFO] Skipping bill 2035878 - already processed (2396/2596)
2025-11-20 14:10:31,959 [INFO] Skipping bill 2043698 - already processed (2397/2596)
2025-11-20 14:10:31,959 [INFO] Skipping bill 2043764 - already processed (2398/2596)
2025-11-20 14:10:31,960 [INFO] Processing 2399/2596: Bill ID 2047702
2025-11-20 14:10:49,502 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2025-11-20 14:10:49,512 [INFO] Skipping bill 2034541 - already processed (2400/2596)
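The recurring `context_length_exceeded` failures above all have the same shape: the full bill JSON is interpolated into the prompt and sent to a model with a 128,000-token window, and some bills serialize to 200k-700k tokens. A pre-flight size guard in `create_detailed_report` could catch these before the request is made. The sketch below is hypothetical: the `full_text` field name and the 4-characters-per-token ratio are assumptions (a real tokenizer such as tiktoken would count exactly), not details confirmed by the log.

```python
import json

MAX_CONTEXT_TOKENS = 128_000   # model limit reported in the 400 errors above
CHARS_PER_TOKEN = 4            # rough heuristic, not an exact tokenizer count
PROMPT_BUDGET = int(MAX_CONTEXT_TOKENS * 0.75)  # headroom for prompt template + reply


def estimate_tokens(text: str) -> int:
    """Crude token estimate from character count."""
    return len(text) // CHARS_PER_TOKEN


def fit_bill_json(bill: dict, budget: int = PROMPT_BUDGET) -> str:
    """Serialize a bill for the LLM prompt, shrinking it if it would
    overflow the context window.

    Strategy: first try the full record; then drop the (assumed) bulky
    'full_text' field; finally hard-truncate as a last resort.
    """
    payload = json.dumps(bill)
    if estimate_tokens(payload) <= budget:
        return payload

    slim = {k: v for k, v in bill.items() if k != "full_text"}  # assumed field name
    payload = json.dumps(slim)
    if estimate_tokens(payload) <= budget:
        return payload

    return payload[: budget * CHARS_PER_TOKEN]
```

With a guard like this, oversized bills would still lose detail, but the run would log a shortened payload instead of accumulating 400 errors in the `Errors:` counter.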
2025-11-20 14:10:49,512 [INFO] Skipping bill 2036108 - already processed (2401/2596)
2025-11-20 14:10:49,512 [INFO] Processing 2402/2596: Bill ID 2052002
2025-11-20 14:11:06,000 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2025-11-20 14:11:06,005 [INFO] Skipping bill 2036914 - already processed (2403/2596)
2025-11-20 14:11:06,005 [INFO] Skipping bill 2032053 - already processed (2404/2596)
2025-11-20 14:11:06,005 [INFO] Skipping bill 2032068 - already processed (2405/2596)
2025-11-20 14:11:06,005 [INFO] Skipping bill 2045357 - already processed (2406/2596)
2025-11-20 14:11:06,005 [INFO] Skipping bill 2043047 - already processed (2407/2596)
2025-11-20 14:11:06,005 [INFO] Skipping bill 2040306 - already processed (2408/2596)
2025-11-20 14:11:06,005 [INFO] Skipping bill 1916986 - already processed (2409/2596)
2025-11-20 14:11:06,005 [INFO] Skipping bill 2039821 - already processed (2410/2596)
2025-11-20 14:11:06,005 [INFO] Processing 2411/2596: Bill ID 2047752
2025-11-20 14:11:21,252 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2025-11-20 14:11:21,257 [INFO] Skipping bill 2046891 - already processed (2412/2596)
2025-11-20 14:11:21,257 [INFO] Skipping bill 2040880 - already processed (2413/2596)
2025-11-20 14:11:21,257 [INFO] Skipping bill 2040851 - already processed (2414/2596)
2025-11-20 14:11:21,258 [INFO] Skipping bill 2043722 - already processed (2415/2596)
2025-11-20 14:11:21,258 [INFO] Skipping bill 1987950 - already processed (2416/2596)
2025-11-20 14:11:21,258 [INFO] Skipping bill 2040439 - already processed (2417/2596)
2025-11-20 14:11:21,258 [INFO] Skipping bill 1901865 - already processed (2418/2596)
2025-11-20 14:11:21,258 [INFO] Skipping bill 1905283 - already processed (2419/2596)
2025-11-20 14:11:21,258 [INFO] Skipping bill 2042107 - already processed (2420/2596)
2025-11-20 14:11:21,258 [INFO] Skipping bill 1986270 - already processed (2421/2596)
2025-11-20 14:11:21,258 [INFO] Skipping bill 2044713 - already processed (2422/2596)
2025-11-20 14:11:21,258 [INFO] Skipping bill 2041468 - already processed (2423/2596)
2025-11-20 14:11:21,258 [INFO] Skipping bill 1983900 - already processed (2424/2596)
2025-11-20 14:11:21,258 [INFO] Skipping bill 2020217 - already processed (2425/2596)
2025-11-20 14:11:21,258 [INFO] Skipping bill 2038216 - already processed (2426/2596)
2025-11-20 14:11:21,258 [INFO] Skipping bill 2043604 - already processed (2427/2596)
2025-11-20 14:11:21,258 [INFO] Skipping bill 2045365 - already processed (2428/2596)
2025-11-20 14:11:21,258 [INFO] Skipping bill 2043961 - already processed (2429/2596)
2025-11-20 14:11:21,258 [INFO] Skipping bill 2044138 - already processed (2430/2596)
2025-11-20 14:11:21,258 [INFO] Skipping bill 2040354 - already processed (2431/2596)
2025-11-20 14:11:21,258 [INFO] Skipping bill 1984221 - already processed (2432/2596)
2025-11-20 14:11:21,258 [INFO] Skipping bill 2033224 - already processed (2433/2596)
2025-11-20 14:11:21,258 [INFO] Skipping bill 2033186 - already processed (2434/2596)
2025-11-20 14:11:21,258 [INFO] Skipping bill 1970505 - already processed (2435/2596)
2025-11-20 14:11:21,258 [INFO] Skipping bill 2036132 - already processed (2436/2596)
2025-11-20 14:11:21,258 [INFO] Skipping bill 2033542 - already processed (2437/2596)
2025-11-20 14:11:21,258 [INFO] Skipping bill 2027361 - already processed (2438/2596)
2025-11-20 14:11:21,258 [INFO] Skipping bill 2040866 - already processed (2439/2596)
2025-11-20 14:11:21,259 [INFO] Skipping bill 2043357 - already processed (2440/2596)
2025-11-20 14:11:21,259 [INFO] Skipping bill 2041757 - already processed (2441/2596)
2025-11-20 14:11:21,259 [INFO] Skipping bill 2042653 - already processed (2442/2596)
2025-11-20 14:11:21,259 [INFO] Skipping bill 2043161 - already processed (2443/2596)
2025-11-20 14:11:21,259 [INFO] Processing 2444/2596: Bill ID 2052989
2025-11-20 14:11:41,077 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2025-11-20 14:11:41,086 [INFO] Skipping bill 1965963 - already processed (2445/2596)
2025-11-20 14:11:41,086 [INFO] Skipping bill 2045735 - already processed (2446/2596)
2025-11-20 14:11:41,086 [INFO] Skipping bill 1999388 - already processed (2447/2596)
2025-11-20 14:11:41,086 [INFO] Processing 2448/2596: Bill ID 2051352
2025-11-20 14:11:56,950 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2025-11-20 14:11:56,954 [INFO] Processing 2449/2596: Bill ID 2039530
2025-11-20 14:11:58,647 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:11:58,648 [ERROR] Failed to generate report for bill 2039530: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 640978 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:11:59,658 [INFO] Processing 2450/2596: Bill ID 2051886
2025-11-20 14:12:32,977 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2025-11-20 14:12:33,040 [INFO] Saved 2589 reports to data/bill_reports.json
2025-11-20 14:12:33,040 [INFO] Progress: 2450/2596 - Processed: 24, Skipped: 2314, Errors: 112
2025-11-20 14:12:33,040 [INFO] Skipping bill 1970493 - already processed (2451/2596)
2025-11-20 14:12:33,040 [INFO] Skipping bill 2037978 - already processed (2452/2596)
2025-11-20 14:12:33,040 [INFO] Skipping bill 2038111 - already processed (2453/2596)
2025-11-20 14:12:33,040 [INFO] Skipping bill 2040318 - already processed (2454/2596)
2025-11-20 14:12:33,040 [INFO] Skipping bill 2041104 - already processed (2455/2596)
2025-11-20 14:12:33,040 [INFO] Skipping bill 2043947 - already processed (2456/2596)
2025-11-20 14:12:33,040 [INFO] Skipping bill 1982722 - already processed (2457/2596)
2025-11-20 14:12:33,040 [INFO]
Skipping bill 2043896 - already processed (2458/2596)
2025-11-20 14:12:33,040 [INFO] Skipping bill 2012870 - already processed (2459/2596)
2025-11-20 14:12:33,040 [INFO] Skipping bill 2007066 - already processed (2460/2596)
2025-11-20 14:12:33,040 [INFO] Skipping bill 1968860 - already processed (2461/2596)
2025-11-20 14:12:33,040 [INFO] Skipping bill 2029307 - already processed (2462/2596)
2025-11-20 14:12:33,040 [INFO] Skipping bill 2041255 - already processed (2463/2596)
2025-11-20 14:12:33,040 [INFO] Skipping bill 2043715 - already processed (2464/2596)
2025-11-20 14:12:33,040 [INFO] Skipping bill 2033191 - already processed (2465/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 2036439 - already processed (2466/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 1968282 - already processed (2467/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 2039688 - already processed (2468/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 2038212 - already processed (2469/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 1987966 - already processed (2470/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 2031847 - already processed (2471/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 1970497 - already processed (2472/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 1963353 - already processed (2473/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 2046183 - already processed (2474/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 2005587 - already processed (2475/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 2039178 - already processed (2476/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 2041269 - already processed (2477/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 2043688 - already processed (2478/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 1927158 - already processed (2479/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 1987972 - already processed (2480/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 2035895 - already processed (2481/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 2037256 - already processed (2482/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 2043043 - already processed (2483/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 2031888 - already processed (2484/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 2043344 - already processed (2485/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 2043890 - already processed (2486/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 1936780 - already processed (2487/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 2022467 - already processed (2488/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 2023141 - already processed (2489/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 2022582 - already processed (2490/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 1970488 - already processed (2491/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 1988006 - already processed (2492/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 1933954 - already processed (2493/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 1955921 - already processed (2494/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 1963338 - already processed (2495/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 2015697 - already processed (2496/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 2020008 - already processed (2497/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 2021940 - already processed (2498/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 2022593 - already processed (2499/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 2026569 - already processed (2500/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 2027464 - already processed (2501/2596)
2025-11-20 14:12:33,041 [INFO] Skipping bill 2018800 - already processed (2502/2596)
2025-11-20 14:12:33,042 [INFO] Skipping bill 2028784 - already processed (2503/2596)
2025-11-20 14:12:33,042 [INFO] Skipping bill 2029580 - already processed (2504/2596)
2025-11-20 14:12:33,042 [INFO] Skipping bill 2031938 - already processed (2505/2596)
2025-11-20 14:12:33,042 [INFO] Skipping bill 2032128 - already processed (2506/2596)
2025-11-20 14:12:33,042 [INFO] Skipping bill 1947775 - already processed (2507/2596)
2025-11-20 14:12:33,042 [INFO] Skipping bill 2035420 - already processed (2508/2596)
2025-11-20 14:12:33,042 [INFO] Skipping bill 2037229 - already processed (2509/2596)
2025-11-20 14:12:33,042 [INFO] Skipping bill 2039570 - already processed (2510/2596)
2025-11-20 14:12:33,042 [INFO] Skipping bill 2042103 - already processed (2511/2596)
2025-11-20 14:12:33,042 [INFO] Skipping bill 2043758 - already processed (2512/2596)
2025-11-20 14:12:33,042 [INFO] Skipping bill 2046719 - already processed (2513/2596)
2025-11-20 14:12:33,042 [INFO] Processing 2514/2596: Bill ID 2052024
2025-11-20 14:12:51,430 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2025-11-20 14:12:51,435 [INFO] Processing 2515/2596: Bill ID 2052050
2025-11-20 14:13:30,275 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2025-11-20 14:13:30,286 [INFO] Skipping bill 1979616 - already processed (2516/2596)
2025-11-20 14:13:30,286 [INFO] Skipping bill 2019782 - already processed (2517/2596)
2025-11-20 14:13:30,286 [INFO] Skipping bill 2017847 - already processed (2518/2596)
2025-11-20 14:13:30,286 [INFO] Skipping bill 2018869 - already processed (2519/2596)
2025-11-20 14:13:30,286 [INFO] Skipping bill 2040352 - already processed (2520/2596)
2025-11-20 14:13:30,286 [INFO] Skipping bill 2029980 - already processed (2521/2596)
2025-11-20 14:13:30,286 [INFO] Skipping bill 2018578 - already processed (2522/2596)
2025-11-20 14:13:30,286 [INFO] Skipping bill 2043696 - already processed (2523/2596)
2025-11-20 14:13:30,286 [INFO] Skipping bill 2008600 - already processed (2524/2596)
2025-11-20 14:13:30,286 [INFO] Skipping bill 2037247 - already processed (2525/2596)
2025-11-20 14:13:30,286 [INFO] Skipping bill 2037249 - already processed (2526/2596)
2025-11-20 14:13:30,286 [INFO] Skipping bill 2035609 - already processed (2527/2596)
2025-11-20 14:13:30,286 [INFO] Skipping bill 2038921 - already processed (2528/2596)
2025-11-20 14:13:30,286 [INFO] Skipping bill 2021715 - already processed (2529/2596)
2025-11-20 14:13:30,286 [INFO] Processing 2530/2596: Bill ID 2053374
2025-11-20 14:13:45,857 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2025-11-20 14:13:45,906 [INFO] Saved 2592 reports to data/bill_reports.json
2025-11-20 14:13:45,906 [INFO] Progress: 2530/2596 - Processed: 27, Skipped: 2391, Errors: 112
2025-11-20 14:13:45,907 [INFO] Skipping bill 2021641 - already processed (2531/2596)
2025-11-20 14:13:45,907 [INFO] Skipping bill 1901818 - already processed (2532/2596)
2025-11-20 14:13:45,907 [INFO] Skipping bill 2023062 - already processed (2533/2596)
2025-11-20 14:13:45,907 [INFO] Skipping bill 2044841 - already processed (2534/2596)
2025-11-20 14:13:45,907 [INFO] Skipping bill 2043173 - already processed (2535/2596)
2025-11-20 14:13:45,907 [INFO] Skipping bill 1948187 - already processed (2536/2596)
2025-11-20 14:13:45,907 [INFO] Skipping bill 2038257 - already processed (2537/2596)
2025-11-20 14:13:45,907 [INFO] Processing 2538/2596: Bill ID 2053144
2025-11-20 14:14:02,266 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2025-11-20 14:14:02,271 [INFO] Processing 2539/2596: Bill ID 2053381
2025-11-20 14:14:19,985 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2025-11-20 14:14:19,993 [INFO] Skipping bill 2037277 - already processed (2540/2596)
2025-11-20 14:14:19,993 [INFO] Skipping bill 1941772 - already processed (2541/2596)
2025-11-20 14:14:19,993 [INFO] Skipping bill 2043199 - already processed (2542/2596)
2025-11-20 14:14:19,993 [INFO] Skipping bill 2041162 - already processed (2543/2596)
2025-11-20 14:14:19,993 [INFO] Skipping bill 2038970 - already processed (2544/2596)
2025-11-20 14:14:19,993 [INFO] Skipping bill 2039918 - already processed (2545/2596)
2025-11-20 14:14:19,994 [INFO] Skipping bill 2032140 - already processed (2546/2596)
2025-11-20 14:14:19,994 [INFO] Skipping bill 2029941 - already processed (2547/2596)
2025-11-20 14:14:19,994 [INFO] Skipping bill 2038420 - already processed (2548/2596)
2025-11-20 14:14:19,994 [INFO] Skipping bill 1943770 - already processed (2549/2596)
2025-11-20 14:14:19,994 [INFO] Skipping bill 1979653 - already processed (2550/2596)
2025-11-20 14:14:19,994 [INFO] Skipping bill 1970677 - already processed (2551/2596)
2025-11-20 14:14:19,994 [INFO] Skipping bill 1988332 - already processed (2552/2596)
2025-11-20 14:14:19,994 [INFO] Skipping bill 1939613 - already processed (2553/2596)
2025-11-20 14:14:19,994 [INFO] Skipping bill 2043104 - already processed (2554/2596)
2025-11-20 14:14:19,994 [INFO] Skipping bill 2000425 - already processed (2555/2596)
2025-11-20 14:14:19,994 [INFO] Skipping bill 2028805 - already processed (2556/2596)
2025-11-20 14:14:19,994 [INFO] Skipping bill 2023111 - already processed (2557/2596)
2025-11-20 14:14:19,994 [INFO] Processing 2558/2596: Bill ID 2032901
2025-11-20 14:14:20,945 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:14:20,946 [ERROR] Failed to generate report for bill 2032901: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 455298 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 455298 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:14:21,953 [INFO] Processing 2559/2596: Bill ID 2051603
2025-11-20 14:14:46,045 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2025-11-20 14:14:46,053 [INFO] Skipping bill 2036437 - already processed (2560/2596)
2025-11-20 14:14:46,053 [INFO] Skipping bill 2036475 - already processed (2561/2596)
2025-11-20 14:14:46,053 [INFO] Skipping bill 2032059 - already processed (2562/2596)
2025-11-20 14:14:46,053 [INFO] Skipping bill 2007053 - already processed (2563/2596)
2025-11-20 14:14:46,054 [INFO] Skipping bill 2000456 - already processed (2564/2596)
2025-11-20 14:14:46,054 [INFO] Skipping bill 2016811 - already processed (2565/2596)
2025-11-20 14:14:46,054 [INFO] Skipping bill 1958611 - already processed (2566/2596)
2025-11-20 14:14:46,054 [INFO] Skipping bill 1926891 - already processed (2567/2596)
2025-11-20 14:14:46,054 [INFO] Skipping bill 1943799 - already processed (2568/2596)
2025-11-20 14:14:46,054 [INFO] Skipping bill 2039061 - already processed (2569/2596)
2025-11-20 14:14:46,054 [INFO] Skipping
bill 1961580 - already processed (2570/2596)
2025-11-20 14:14:46,054 [INFO] Skipping bill 1927000 - already processed (2571/2596)
2025-11-20 14:14:46,054 [INFO] Skipping bill 2023233 - already processed (2572/2596)
2025-11-20 14:14:46,054 [INFO] Skipping bill 1947802 - already processed (2573/2596)
2025-11-20 14:14:46,054 [INFO] Skipping bill 2022615 - already processed (2574/2596)
2025-11-20 14:14:46,054 [INFO] Skipping bill 2022439 - already processed (2575/2596)
2025-11-20 14:14:46,054 [INFO] Skipping bill 2033390 - already processed (2576/2596)
2025-11-20 14:14:46,054 [INFO] Skipping bill 2026636 - already processed (2577/2596)
2025-11-20 14:14:46,054 [INFO] Processing 2578/2596: Bill ID 2047438
2025-11-20 14:15:00,947 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2025-11-20 14:15:00,952 [INFO] Skipping bill 2036925 - already processed (2579/2596)
2025-11-20 14:15:00,952 [INFO] Skipping bill 1963365 - already processed (2580/2596)
2025-11-20 14:15:00,952 [INFO] Skipping bill 2043448 - already processed (2581/2596)
2025-11-20 14:15:00,952 [INFO] Skipping bill 1994349 - already processed (2582/2596)
2025-11-20 14:15:00,952 [INFO] Skipping bill 2023224 - already processed (2583/2596)
2025-11-20 14:15:00,952 [INFO] Skipping bill 2028140 - already processed (2584/2596)
2025-11-20 14:15:00,952 [INFO] Skipping bill 2032003 - already processed (2585/2596)
2025-11-20 14:15:00,952 [INFO] Skipping bill 2039157 - already processed (2586/2596)
2025-11-20 14:15:00,952 [INFO] Skipping bill 2044179 - already processed (2587/2596)
2025-11-20 14:15:00,952 [INFO] Skipping bill 2035673 - already processed (2588/2596)
2025-11-20 14:15:00,952 [INFO] Skipping bill 2044473 - already processed (2589/2596)
2025-11-20 14:15:00,952 [INFO] Processing 2590/2596: Bill ID 1990400
2025-11-20 14:15:01,582 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:15:01,584 [ERROR] Failed to generate
report for bill 1990400: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 256134 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:15:01,630 [INFO] Saved 2596 reports to data/bill_reports.json
2025-11-20 14:15:01,630 [INFO] Progress: 2590/2596 - Processed: 31, Skipped: 2445, Errors: 114
2025-11-20 14:15:02,635 [INFO] Skipping bill 2027724 - already processed (2591/2596)
2025-11-20 14:15:02,636 [INFO] Processing 2592/2596: Bill ID 2028171
2025-11-20 14:15:03,020 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:15:03,023 [ERROR] Failed to generate report for bill 2028171: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 134543 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:15:04,033 [INFO] Processing 2593/2596: Bill ID 1966444
2025-11-20 14:15:04,560 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:15:04,561 [ERROR] Failed to generate report for bill 1966444: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 171945 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:15:05,575 [INFO] Processing 2594/2596: Bill ID 2038906
2025-11-20 14:15:06,096 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:15:06,097 [ERROR] Failed to generate report for bill 2038906: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 192175 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-11-20 14:15:07,107 [INFO] Processing 2595/2596: Bill ID 1994544
2025-11-20 14:15:07,661 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-11-20 14:15:07,663 [ERROR] Failed to generate report for bill 1994544: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 188475 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 188475 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-11-20 14:15:08,668 [INFO] Skipping bill 2041289 - already processed (2596/2596) 2025-11-20 14:15:08,756 [INFO] Saved 2596 reports to data/bill_reports.json 2025-11-20 14:15:08,756 [INFO] Report generation complete! 
2025-11-20 14:15:08,757 [INFO] Total bills: 2596 2025-11-20 14:15:08,757 [INFO] Successfully processed: 31 2025-11-20 14:15:08,757 [INFO] Skipped (already done): 2447 2025-11-20 14:15:08,757 [INFO] Errors: 118 2025-12-01 12:32:08,123 [INFO] Loaded 2596 existing reports from data/bill_reports.json 2025-12-01 12:32:08,123 [INFO] Starting report generation for 2605 bills 2025-12-01 12:32:08,123 [INFO] Skipping bill 1769530 - already processed (1/2605) 2025-12-01 12:32:08,123 [INFO] Skipping bill 1765118 - already processed (2/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1745017 - already processed (3/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1745230 - already processed (4/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1847915 - already processed (5/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1847210 - already processed (6/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1847980 - already processed (7/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1840627 - already processed (8/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1840340 - already processed (9/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 2019785 - already processed (10/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1983607 - already processed (11/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 2019702 - already processed (12/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1987220 - already processed (13/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 2022389 - already processed (14/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1959465 - already processed (15/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 2023982 - already processed (16/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 2019732 - already processed (17/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1969654 - already processed (18/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1956622 - already processed (19/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1957166 - 
already processed (20/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1869518 - already processed (21/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1813560 - already processed (22/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1836190 - already processed (23/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1851112 - already processed (24/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1745943 - already processed (25/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1737840 - already processed (26/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1814309 - already processed (27/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1851143 - already processed (28/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1984991 - already processed (29/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1912439 - already processed (30/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1912476 - already processed (31/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1940708 - already processed (32/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1935103 - already processed (33/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1685926 - already processed (34/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1657717 - already processed (35/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1683096 - already processed (36/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1828964 - already processed (37/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1830782 - already processed (38/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1829010 - already processed (39/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1810349 - already processed (40/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1810356 - already processed (41/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1804209 - already processed (42/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1830673 - already processed (43/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1923768 - already 
processed (44/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1935042 - already processed (45/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1948089 - already processed (46/2605) 2025-12-01 12:32:08,124 [INFO] Skipping bill 1917064 - already processed (47/2605) 2025-12-01 12:32:08,125 [INFO] Skipping bill 1964274 - already processed (48/2605) 2025-12-01 12:32:08,125 [INFO] Skipping bill 1949161 - already processed (49/2605) 2025-12-01 12:32:08,125 [INFO] Skipping bill 1938396 - already processed (50/2605) 2025-12-01 12:32:08,125 [INFO] Skipping bill 1955446 - already processed (51/2605) 2025-12-01 12:32:08,125 [INFO] Skipping bill 1946736 - already processed (52/2605) 2025-12-01 12:32:08,125 [INFO] Skipping bill 2037727 - already processed (53/2605) 2025-12-01 12:32:08,125 [INFO] Skipping bill 1730253 - already processed (54/2605) 2025-12-01 12:32:08,125 [INFO] Skipping bill 1721706 - already processed (55/2605) 2025-12-01 12:32:08,125 [INFO] Skipping bill 1975090 - already processed (56/2605) 2025-12-01 12:32:08,125 [INFO] Skipping bill 1946146 - already processed (57/2605) 2025-12-01 12:32:08,125 [INFO] Skipping bill 2018186 - already processed (58/2605) 2025-12-01 12:32:08,125 [INFO] Skipping bill 2011735 - already processed (59/2605) 2025-12-01 12:32:08,125 [INFO] Skipping bill 1897622 - already processed (60/2605) 2025-12-01 12:32:08,125 [INFO] Skipping bill 1973543 - already processed (61/2605) 2025-12-01 12:32:08,125 [INFO] Skipping bill 2009462 - already processed (62/2605) 2025-12-01 12:32:08,125 [INFO] Skipping bill 2011658 - already processed (63/2605) 2025-12-01 12:32:08,125 [INFO] Skipping bill 1944017 - already processed (64/2605) 2025-12-01 12:32:08,125 [INFO] Skipping bill 1892641 - already processed (65/2605) 2025-12-01 12:32:08,125 [INFO] Skipping bill 2010078 - already processed (66/2605) 2025-12-01 12:32:08,125 [INFO] Skipping bill 1915632 - already processed (67/2605) 2025-12-01 12:32:08,125 [INFO] Skipping bill 1996393 - already 
processed (68/2605) 2025-12-01 12:32:08,125 [INFO] Processing 69/2605: Bill ID 1972479 2025-12-01 12:32:09,661 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 12:32:09,665 [ERROR] Failed to generate report for bill 1972479: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 512372 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... 
**kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... **kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return 
self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 512372 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 12:32:10,683 [INFO] Skipping bill 1848589 - already processed (70/2605) 2025-12-01 12:32:10,684 [INFO] Skipping bill 1796695 - already processed (71/2605) 2025-12-01 12:32:10,684 [INFO] Skipping bill 1834299 - already processed (72/2605) 2025-12-01 12:32:10,684 [INFO] Skipping bill 1840453 - already processed (73/2605) 2025-12-01 12:32:10,684 [INFO] Skipping bill 1847401 - already processed (74/2605) 2025-12-01 12:32:10,684 [INFO] Skipping bill 1849339 - already processed (75/2605) 2025-12-01 12:32:10,684 [INFO] Skipping bill 1845122 - already processed (76/2605) 2025-12-01 12:32:10,684 [INFO] Skipping bill 1796692 - already processed (77/2605) 2025-12-01 12:32:10,685 [INFO] Skipping bill 1846289 - already processed (78/2605) 2025-12-01 12:32:10,685 [INFO] Skipping bill 1813231 - already processed (79/2605) 2025-12-01 12:32:10,685 [INFO] Skipping bill 1848433 - already processed (80/2605) 2025-12-01 12:32:10,685 [INFO] Skipping bill 1796691 - already processed 
(81/2605) 2025-12-01 12:32:10,685 [INFO] Skipping bill 1848536 - already processed (82/2605) 2025-12-01 12:32:10,685 [INFO] Skipping bill 1819737 - already processed (83/2605) 2025-12-01 12:32:10,685 [INFO] Skipping bill 1829037 - already processed (84/2605) 2025-12-01 12:32:10,685 [INFO] Skipping bill 1712200 - already processed (85/2605) 2025-12-01 12:32:10,686 [INFO] Skipping bill 1848424 - already processed (86/2605) 2025-12-01 12:32:10,686 [INFO] Skipping bill 1814918 - already processed (87/2605) 2025-12-01 12:32:10,686 [INFO] Skipping bill 1686429 - already processed (88/2605) 2025-12-01 12:32:10,686 [INFO] Skipping bill 1848359 - already processed (89/2605) 2025-12-01 12:32:10,686 [INFO] Skipping bill 1697069 - already processed (90/2605) 2025-12-01 12:32:10,686 [INFO] Skipping bill 1848453 - already processed (91/2605) 2025-12-01 12:32:10,686 [INFO] Skipping bill 1849513 - already processed (92/2605) 2025-12-01 12:32:10,686 [INFO] Skipping bill 1848521 - already processed (93/2605) 2025-12-01 12:32:10,686 [INFO] Skipping bill 1848425 - already processed (94/2605) 2025-12-01 12:32:10,686 [INFO] Skipping bill 1702816 - already processed (95/2605) 2025-12-01 12:32:10,686 [INFO] Skipping bill 1849367 - already processed (96/2605) 2025-12-01 12:32:10,686 [INFO] Skipping bill 1849520 - already processed (97/2605) 2025-12-01 12:32:10,686 [INFO] Skipping bill 1848530 - already processed (98/2605) 2025-12-01 12:32:10,687 [INFO] Skipping bill 1712027 - already processed (99/2605) 2025-12-01 12:32:10,687 [INFO] Skipping bill 1849659 - already processed (100/2605) 2025-12-01 12:32:10,687 [INFO] Skipping bill 1848478 - already processed (101/2605) 2025-12-01 12:32:10,687 [INFO] Skipping bill 1848387 - already processed (102/2605) 2025-12-01 12:32:10,687 [INFO] Skipping bill 1845137 - already processed (103/2605) 2025-12-01 12:32:10,687 [INFO] Skipping bill 1812205 - already processed (104/2605) 2025-12-01 12:32:10,687 [INFO] Skipping bill 1798416 - already processed 
(105/2605) 2025-12-01 12:32:10,687 [INFO] Skipping bill 1847351 - already processed (106/2605) 2025-12-01 12:32:10,687 [INFO] Skipping bill 1693943 - already processed (107/2605) 2025-12-01 12:32:10,687 [INFO] Skipping bill 1686454 - already processed (108/2605) 2025-12-01 12:32:10,687 [INFO] Skipping bill 1847404 - already processed (109/2605) 2025-12-01 12:32:10,687 [INFO] Skipping bill 1683775 - already processed (110/2605) 2025-12-01 12:32:10,687 [INFO] Skipping bill 1835452 - already processed (111/2605) 2025-12-01 12:32:10,688 [INFO] Skipping bill 1709727 - already processed (112/2605) 2025-12-01 12:32:10,688 [INFO] Skipping bill 1849724 - already processed (113/2605) 2025-12-01 12:32:10,688 [INFO] Skipping bill 1761500 - already processed (114/2605) 2025-12-01 12:32:10,688 [INFO] Skipping bill 1697048 - already processed (115/2605) 2025-12-01 12:32:10,688 [INFO] Skipping bill 1860070 - already processed (116/2605) 2025-12-01 12:32:10,688 [INFO] Skipping bill 1771300 - already processed (117/2605) 2025-12-01 12:32:10,688 [INFO] Skipping bill 1709708 - already processed (118/2605) 2025-12-01 12:32:10,688 [INFO] Skipping bill 1848529 - already processed (119/2605) 2025-12-01 12:32:10,688 [INFO] Skipping bill 1845179 - already processed (120/2605) 2025-12-01 12:32:10,688 [INFO] Skipping bill 1849404 - already processed (121/2605) 2025-12-01 12:32:10,688 [INFO] Skipping bill 1714444 - already processed (122/2605) 2025-12-01 12:32:10,688 [INFO] Skipping bill 1824468 - already processed (123/2605) 2025-12-01 12:32:10,688 [INFO] Skipping bill 1882346 - already processed (124/2605) 2025-12-01 12:32:10,688 [INFO] Skipping bill 1885654 - already processed (125/2605) 2025-12-01 12:32:10,689 [INFO] Skipping bill 1849359 - already processed (126/2605) 2025-12-01 12:32:10,689 [INFO] Skipping bill 1840414 - already processed (127/2605) 2025-12-01 12:32:10,689 [INFO] Skipping bill 1846229 - already processed (128/2605) 2025-12-01 12:32:10,689 [INFO] Skipping bill 1707510 - 
already processed (129/2605) 2025-12-01 12:32:10,689 [INFO] Skipping bill 1845188 - already processed (130/2605) 2025-12-01 12:32:10,689 [INFO] Skipping bill 1848524 - already processed (131/2605) 2025-12-01 12:32:10,689 [INFO] Skipping bill 1847496 - already processed (132/2605) 2025-12-01 12:32:10,689 [INFO] Skipping bill 1883008 - already processed (133/2605) 2025-12-01 12:32:10,689 [INFO] Skipping bill 1649620 - already processed (134/2605) 2025-12-01 12:32:10,689 [INFO] Skipping bill 1667841 - already processed (135/2605) 2025-12-01 12:32:10,689 [INFO] Skipping bill 1848476 - already processed (136/2605) 2025-12-01 12:32:10,689 [INFO] Skipping bill 1649670 - already processed (137/2605) 2025-12-01 12:32:10,689 [INFO] Skipping bill 1667891 - already processed (138/2605) 2025-12-01 12:32:10,689 [INFO] Skipping bill 1649612 - already processed (139/2605) 2025-12-01 12:32:10,689 [INFO] Skipping bill 1649615 - already processed (140/2605) 2025-12-01 12:32:10,689 [INFO] Skipping bill 1667833 - already processed (141/2605) 2025-12-01 12:32:10,689 [INFO] Skipping bill 1667836 - already processed (142/2605) 2025-12-01 12:32:10,689 [INFO] Skipping bill 1649618 - already processed (143/2605) 2025-12-01 12:32:10,689 [INFO] Skipping bill 1667839 - already processed (144/2605) 2025-12-01 12:32:10,689 [INFO] Skipping bill 1649630 - already processed (145/2605) 2025-12-01 12:32:10,690 [INFO] Skipping bill 1649619 - already processed (146/2605) 2025-12-01 12:32:10,690 [INFO] Skipping bill 1667851 - already processed (147/2605) 2025-12-01 12:32:10,690 [INFO] Skipping bill 1667840 - already processed (148/2605) 2025-12-01 12:32:10,690 [INFO] Processing 149/2605: Bill ID 1865211 2025-12-01 12:32:11,500 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 12:32:11,501 [ERROR] Failed to generate report for bill 1865211: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. 
However, your messages resulted in 241283 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 241283 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 12:32:12,510 [INFO] Skipping bill 1667837 - already processed (150/2605) 2025-12-01 12:32:12,511 [INFO] Skipping bill 1667892 - already processed (151/2605) 2025-12-01 12:32:12,511 [INFO] Skipping bill 1649616 - already processed (152/2605) 2025-12-01 12:32:12,511 [INFO] Skipping bill 1649671 - already processed (153/2605) 2025-12-01 12:32:12,511 [INFO] Processing 154/2605: Bill ID 1726105 2025-12-01 12:32:13,755 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 12:32:13,757 [ERROR] Failed to generate report for bill 1726105: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 343953 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 343953 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 12:32:14,768 [INFO] Skipping bill 1978757 - already processed (155/2605) 2025-12-01 12:32:14,768 [INFO] Skipping bill 1980543 - already processed (156/2605) 2025-12-01 12:32:14,769 [INFO] Skipping bill 1893423 - already processed (157/2605) 2025-12-01 12:32:14,769 [INFO] Skipping bill 1964699 - already processed (158/2605) 2025-12-01 12:32:14,769 [INFO] Skipping bill 1978599 - already processed (159/2605) 2025-12-01 12:32:14,769 [INFO] Skipping bill 1980563 - already processed (160/2605) 2025-12-01 12:32:14,769 [INFO] Skipping bill 1976585 - already processed (161/2605) 2025-12-01 12:32:14,769 [INFO] Skipping bill 1904800 - already processed (162/2605) 2025-12-01 12:32:14,769 [INFO] Skipping bill 1974530 - already processed (163/2605) 2025-12-01 12:32:14,769 [INFO] Skipping bill 1964676 - already processed (164/2605) 2025-12-01 12:32:14,770 [INFO] Skipping bill 1955758 - already processed (165/2605) 2025-12-01 12:32:14,770 [INFO] Skipping bill 1941749 - already processed (166/2605) 2025-12-01 12:32:14,770 [INFO] Skipping bill 1976440 - already 
processed (167/2605) 2025-12-01 12:32:14,770 [INFO] Skipping bill 1978812 - already processed (168/2605) 2025-12-01 12:32:14,770 [INFO] Skipping bill 1978731 - already processed (169/2605) 2025-12-01 12:32:14,770 [INFO] Skipping bill 1949687 - already processed (170/2605) 2025-12-01 12:32:14,770 [INFO] Skipping bill 1980302 - already processed (171/2605) 2025-12-01 12:32:14,771 [INFO] Skipping bill 2032041 - already processed (172/2605) 2025-12-01 12:32:14,771 [INFO] Skipping bill 1978672 - already processed (173/2605) 2025-12-01 12:32:14,771 [INFO] Skipping bill 1955756 - already processed (174/2605) 2025-12-01 12:32:14,771 [INFO] Skipping bill 1970455 - already processed (175/2605) 2025-12-01 12:32:14,771 [INFO] Skipping bill 1978694 - already processed (176/2605) 2025-12-01 12:32:14,771 [INFO] Skipping bill 1976550 - already processed (177/2605) 2025-12-01 12:32:14,771 [INFO] Skipping bill 1908207 - already processed (178/2605) 2025-12-01 12:32:14,771 [INFO] Skipping bill 1971712 - already processed (179/2605) 2025-12-01 12:32:14,771 [INFO] Skipping bill 1919273 - already processed (180/2605) 2025-12-01 12:32:14,771 [INFO] Skipping bill 1893452 - already processed (181/2605) 2025-12-01 12:32:14,771 [INFO] Skipping bill 1971760 - already processed (182/2605) 2025-12-01 12:32:14,772 [INFO] Skipping bill 1978553 - already processed (183/2605) 2025-12-01 12:32:14,772 [INFO] Skipping bill 1980501 - already processed (184/2605) 2025-12-01 12:32:14,772 [INFO] Skipping bill 1980139 - already processed (185/2605) 2025-12-01 12:32:14,772 [INFO] Skipping bill 1908210 - already processed (186/2605) 2025-12-01 12:32:14,772 [INFO] Skipping bill 1980228 - already processed (187/2605) 2025-12-01 12:32:14,772 [INFO] Skipping bill 1947445 - already processed (188/2605) 2025-12-01 12:32:14,772 [INFO] Skipping bill 1971753 - already processed (189/2605) 2025-12-01 12:32:14,772 [INFO] Skipping bill 1943407 - already processed (190/2605) 2025-12-01 12:32:14,772 [INFO] Skipping bill 
1896630 - already processed (191/2605) 2025-12-01 12:32:14,772 [INFO] Skipping bill 1953097 - already processed (192/2605) 2025-12-01 12:32:14,772 [INFO] Skipping bill 1961095 - already processed (193/2605) 2025-12-01 12:32:14,772 [INFO] Skipping bill 1953091 - already processed (194/2605) 2025-12-01 12:32:14,772 [INFO] Skipping bill 1953081 - already processed (195/2605) 2025-12-01 12:32:14,773 [INFO] Skipping bill 1978871 - already processed (196/2605) 2025-12-01 12:32:14,773 [INFO] Skipping bill 1990396 - already processed (197/2605) 2025-12-01 12:32:14,773 [INFO] Processing 198/2605: Bill ID 1980067 2025-12-01 12:32:15,698 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 12:32:15,701 [ERROR] Failed to generate report for bill 1980067: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 270166 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ 
[self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... **kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File 
"/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 270166 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 12:32:16,711 [INFO] Skipping bill 1970450 - already processed (199/2605) 2025-12-01 12:32:16,712 [INFO] Skipping bill 1904793 - already processed (200/2605) 2025-12-01 12:32:16,712 [INFO] Skipping bill 1964689 - already processed (201/2605) 2025-12-01 12:32:16,713 [INFO] Skipping bill 1933300 - already processed (202/2605) 2025-12-01 12:32:16,713 [INFO] Skipping bill 2036404 - already processed (203/2605) 2025-12-01 12:32:16,713 [INFO] Skipping bill 1949685 - already processed (204/2605) 2025-12-01 12:32:16,713 [INFO] Skipping bill 1976474 - already processed (205/2605) 2025-12-01 12:32:16,713 [INFO] Skipping bill 1898373 - already processed (206/2605) 2025-12-01 12:32:16,713 [INFO] Skipping bill 2042443 - already processed (207/2605) 2025-12-01 12:32:16,713 [INFO] Skipping bill 2005483 - already processed (208/2605) 2025-12-01 12:32:16,714 [INFO] Skipping bill 1968261 - already processed (209/2605) 2025-12-01 12:32:16,714 [INFO] Skipping bill 1980234 - already processed (210/2605) 2025-12-01 12:32:16,714 [INFO] Skipping bill 1978559 - already processed (211/2605) 2025-12-01 12:32:16,714 [INFO] Skipping bill 1974545 - already processed (212/2605) 2025-12-01 12:32:16,714 [INFO] Skipping bill 1908089 - already processed (213/2605) 2025-12-01 12:32:16,714 [INFO] Skipping bill 1939198 - already processed (214/2605) 2025-12-01 12:32:16,714 [INFO] Skipping bill 1939199 - already processed (215/2605) 2025-12-01 12:32:16,714 [INFO] Skipping bill 1908087 - already processed (216/2605) 2025-12-01 12:32:16,715 [INFO] Skipping bill 1908088 - already processed (217/2605) 2025-12-01 12:32:16,715 [INFO] Skipping bill 1939200 - already processed (218/2605) 2025-12-01 12:32:16,715 [INFO] Skipping bill 1939201 - already processed (219/2605) 2025-12-01 12:32:16,715 [INFO] Skipping bill 1908090 - already processed (220/2605) 2025-12-01 
12:32:16,715 [INFO] Skipping bill 1939197 - already processed (221/2605) 2025-12-01 12:32:16,715 [INFO] Skipping bill 1908086 - already processed (222/2605) 2025-12-01 12:32:16,715 [INFO] Skipping bill 1651326 - already processed (223/2605) 2025-12-01 12:32:16,715 [INFO] Skipping bill 1747628 - already processed (224/2605) 2025-12-01 12:32:16,715 [INFO] Skipping bill 1871619 - already processed (225/2605) 2025-12-01 12:32:16,716 [INFO] Skipping bill 1874953 - already processed (226/2605) 2025-12-01 12:32:16,716 [INFO] Skipping bill 1831016 - already processed (227/2605) 2025-12-01 12:32:16,716 [INFO] Skipping bill 1846007 - already processed (228/2605) 2025-12-01 12:32:16,716 [INFO] Skipping bill 2026977 - already processed (229/2605) 2025-12-01 12:32:16,716 [INFO] Skipping bill 2042502 - already processed (230/2605) 2025-12-01 12:32:16,716 [INFO] Skipping bill 2042537 - already processed (231/2605) 2025-12-01 12:32:16,716 [INFO] Skipping bill 2042540 - already processed (232/2605) 2025-12-01 12:32:16,716 [INFO] Skipping bill 1907590 - already processed (233/2605) 2025-12-01 12:32:16,716 [INFO] Skipping bill 1907863 - already processed (234/2605) 2025-12-01 12:32:16,716 [INFO] Skipping bill 2022323 - already processed (235/2605) 2025-12-01 12:32:16,716 [INFO] Skipping bill 1947638 - already processed (236/2605) 2025-12-01 12:32:16,716 [INFO] Skipping bill 1965815 - already processed (237/2605) 2025-12-01 12:32:16,717 [INFO] Skipping bill 2042471 - already processed (238/2605) 2025-12-01 12:32:16,717 [INFO] Skipping bill 2017117 - already processed (239/2605) 2025-12-01 12:32:16,717 [INFO] Skipping bill 1973900 - already processed (240/2605) 2025-12-01 12:32:16,717 [INFO] Skipping bill 2020829 - already processed (241/2605) 2025-12-01 12:32:16,717 [INFO] Skipping bill 1718823 - already processed (242/2605) 2025-12-01 12:32:16,717 [INFO] Skipping bill 1709526 - already processed (243/2605) 2025-12-01 12:32:16,717 [INFO] Skipping bill 1709356 - already processed 
(244/2605) 2025-12-01 12:32:16,717 [INFO] Skipping bill 1839016 - already processed (245/2605) 2025-12-01 12:32:16,717 [INFO] Skipping bill 1859941 - already processed (246/2605) 2025-12-01 12:32:16,717 [INFO] Skipping bill 1839023 - already processed (247/2605) 2025-12-01 12:32:16,717 [INFO] Skipping bill 1860727 - already processed (248/2605) 2025-12-01 12:32:16,717 [INFO] Processing 249/2605: Bill ID 1876979 2025-12-01 12:32:18,977 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 12:32:18,979 [ERROR] Failed to generate report for bill 1876979: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 150875 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... 
**kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... **kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return 
self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 150875 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 12:32:19,989 [INFO] Skipping bill 1905069 - already processed (250/2605) 2025-12-01 12:32:19,990 [INFO] Skipping bill 1992824 - already processed (251/2605) 2025-12-01 12:32:19,990 [INFO] Skipping bill 1957876 - already processed (252/2605) 2025-12-01 12:32:19,990 [INFO] Skipping bill 1965500 - already processed (253/2605) 2025-12-01 12:32:19,990 [INFO] Skipping bill 1990151 - already processed (254/2605) 2025-12-01 12:32:19,990 [INFO] Skipping bill 1949174 - already processed (255/2605) 2025-12-01 12:32:19,990 [INFO] Skipping bill 1905038 - already processed (256/2605) 2025-12-01 12:32:19,990 [INFO] Skipping bill 1905159 - already processed (257/2605) 2025-12-01 12:32:19,991 [INFO] Skipping bill 1907650 - already processed (258/2605) 2025-12-01 12:32:19,991 [INFO] Skipping bill 1909616 - already processed (259/2605) 2025-12-01 12:32:19,991 [INFO] Skipping bill 1909665 - already processed (260/2605) 2025-12-01 12:32:19,991 [INFO] Skipping bill 1928585 - already 
processed (261/2605) 2025-12-01 12:32:19,991 [INFO] Skipping bill 1928759 - already processed (262/2605) 2025-12-01 12:32:19,991 [INFO] Skipping bill 1928904 - already processed (263/2605) 2025-12-01 12:32:19,991 [INFO] Skipping bill 1931737 - already processed (264/2605) 2025-12-01 12:32:19,992 [INFO] Skipping bill 1928076 - already processed (265/2605) 2025-12-01 12:32:19,992 [INFO] Skipping bill 1935956 - already processed (266/2605) 2025-12-01 12:32:19,992 [INFO] Skipping bill 1905222 - already processed (267/2605) 2025-12-01 12:32:19,992 [INFO] Skipping bill 1932777 - already processed (268/2605) 2025-12-01 12:32:19,992 [INFO] Skipping bill 1905141 - already processed (269/2605) 2025-12-01 12:32:19,992 [INFO] Processing 270/2605: Bill ID 2034928 2025-12-01 12:32:21,085 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 12:32:21,087 [ERROR] Failed to generate report for bill 2034928: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 412715 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 412715 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 12:32:21,144 [INFO] Saved 2596 reports to data/bill_reports.json 2025-12-01 12:32:21,145 [INFO] Progress: 270/2605 - Processed: 0, Skipped: 264, Errors: 6 2025-12-01 12:32:22,150 [INFO] Skipping bill 1820947 - already processed (271/2605) 2025-12-01 12:32:22,151 [INFO] Skipping bill 2038143 - already processed (272/2605) 2025-12-01 12:32:22,151 [INFO] Skipping bill 1946119 - already processed (273/2605) 2025-12-01 12:32:22,151 [INFO] Skipping bill 2038726 - already processed (274/2605) 2025-12-01 12:32:22,151 [INFO] Skipping bill 2015494 - already processed (275/2605) 2025-12-01 12:32:22,151 [INFO] Skipping bill 1754732 - already processed (276/2605) 2025-12-01 12:32:22,151 [INFO] Skipping bill 1716623 - already processed (277/2605) 2025-12-01 12:32:22,152 [INFO] Skipping bill 1723029 - already processed (278/2605) 2025-12-01 12:32:22,152 [INFO] Skipping bill 1749221 - already processed (279/2605) 2025-12-01 12:32:22,152 [INFO] Skipping bill 1756757 - already processed (280/2605) 2025-12-01 12:32:22,152 [INFO] Skipping bill 1722774 - already 
processed (281/2605) 2025-12-01 12:32:22,152 [INFO] Processing 282/2605: Bill ID 1746175 2025-12-01 12:32:23,994 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 12:32:23,997 [ERROR] Failed to generate report for bill 1746175: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 482085 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... 
**kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... **kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return 
self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 482085 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 12:32:25,007 [INFO] Skipping bill 1749049 - already processed (283/2605) 2025-12-01 12:32:25,007 [INFO] Skipping bill 1799517 - already processed (284/2605) 2025-12-01 12:32:25,007 [INFO] Skipping bill 1799058 - already processed (285/2605) 2025-12-01 12:32:25,008 [INFO] Skipping bill 1792427 - already processed (286/2605) 2025-12-01 12:32:25,009 [INFO] Skipping bill 1791537 - already processed (287/2605) 2025-12-01 12:32:25,009 [INFO] Skipping bill 1793699 - already processed (288/2605) 2025-12-01 12:32:25,009 [INFO] Skipping bill 1784035 - already processed (289/2605) 2025-12-01 12:32:25,009 [INFO] Skipping bill 1789608 - already processed (290/2605) 2025-12-01 12:32:25,009 [INFO] Skipping bill 1797287 - already processed (291/2605) 2025-12-01 12:32:25,009 [INFO] Skipping bill 1799146 - already processed (292/2605) 2025-12-01 12:32:25,009 [INFO] Skipping bill 1799256 - already processed (293/2605) 2025-12-01 12:32:25,009 [INFO] Skipping bill 1799530 - already 
processed (294/2605)
2025-12-01 12:32:25,010 [INFO] Skipping bill 1799073 - already processed (295/2605)
2025-12-01 12:32:25,010 [INFO] Skipping bill 1798525 - already processed (296/2605)
2025-12-01 12:32:25,010 [INFO] Skipping bill 1812862 - already processed (297/2605)
2025-12-01 12:32:25,010 [INFO] Skipping bill 1799556 - already processed (298/2605)
2025-12-01 12:32:25,010 [INFO] Skipping bill 1793796 - already processed (299/2605)
2025-12-01 12:32:25,010 [INFO] Skipping bill 1840899 - already processed (300/2605)
2025-12-01 12:32:25,010 [INFO] Skipping bill 1849855 - already processed (301/2605)
2025-12-01 12:32:25,010 [INFO] Skipping bill 1796581 - already processed (302/2605)
2025-12-01 12:32:25,010 [INFO] Skipping bill 1785974 - already processed (303/2605)
2025-12-01 12:32:25,010 [INFO] Skipping bill 1799599 - already processed (304/2605)
2025-12-01 12:32:25,011 [INFO] Skipping bill 1799188 - already processed (305/2605)
2025-12-01 12:32:25,011 [INFO] Skipping bill 1834738 - already processed (306/2605)
2025-12-01 12:32:25,011 [INFO] Skipping bill 1799528 - already processed (307/2605)
2025-12-01 12:32:25,011 [INFO] Processing 308/2605: Bill ID 1829539
2025-12-01 12:32:26,452 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:32:26,456 [ERROR] Failed to generate report for bill 1829539: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 487138 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:32:27,466 [INFO] Skipping bill 1953506 - already processed (309/2605)
2025-12-01 12:32:27,466 [INFO] Skipping bill 1969171 - already processed (310/2605)
2025-12-01 12:32:27,466 [INFO] Skipping bill 1963529 - already processed (311/2605)
2025-12-01 12:32:27,466 [INFO] Skipping bill 1973172 - already processed (312/2605)
2025-12-01 12:32:27,467 [INFO] Skipping bill 1977164 - already processed (313/2605)
2025-12-01 12:32:27,467 [INFO] Skipping bill 1984764 - already processed (314/2605)
2025-12-01 12:32:27,467 [INFO] Skipping bill 1988421 - already processed (315/2605)
2025-12-01 12:32:27,467 [INFO] Skipping bill 1963407 - already processed (316/2605)
2025-12-01 12:32:27,467 [INFO] Skipping bill 1977647 - already processed (317/2605)
2025-12-01 12:32:27,467 [INFO] Skipping bill 1985537 - already processed (318/2605)
2025-12-01 12:32:27,467 [INFO] Skipping bill 1988809 - already processed (319/2605)
2025-12-01 12:32:27,467 [INFO] Skipping bill 1989241 - already processed (320/2605)
2025-12-01 12:32:27,468 [INFO] Skipping bill 1980688 - already
processed (321/2605)
2025-12-01 12:32:27,468 [INFO] Skipping bill 1985490 - already processed (322/2605)
2025-12-01 12:32:27,468 [INFO] Skipping bill 1987236 - already processed (323/2605)
2025-12-01 12:32:27,468 [INFO] Skipping bill 2009168 - already processed (324/2605)
2025-12-01 12:32:27,468 [INFO] Skipping bill 1985684 - already processed (325/2605)
2025-12-01 12:32:27,468 [INFO] Skipping bill 1982957 - already processed (326/2605)
2025-12-01 12:32:27,468 [INFO] Skipping bill 2009660 - already processed (327/2605)
2025-12-01 12:32:27,469 [INFO] Skipping bill 1987290 - already processed (328/2605)
2025-12-01 12:32:27,469 [INFO] Skipping bill 2021527 - already processed (329/2605)
2025-12-01 12:32:27,469 [INFO] Skipping bill 1984006 - already processed (330/2605)
2025-12-01 12:32:27,469 [INFO] Skipping bill 1944378 - already processed (331/2605)
2025-12-01 12:32:27,469 [INFO] Processing 332/2605: Bill ID 2016312
2025-12-01 12:32:28,854 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:32:28,855 [ERROR] Failed to generate report for bill 2016312: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 508553 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:32:29,874 [INFO] Skipping bill 1975511 - already processed (333/2605)
2025-12-01 12:32:29,874 [INFO] Skipping bill 1807866 - already processed (334/2605)
2025-12-01 12:32:29,875 [INFO] Skipping bill 1825040 - already processed (335/2605)
2025-12-01 12:32:29,875 [INFO] Skipping bill 1824663 - already processed (336/2605)
2025-12-01 12:32:29,875 [INFO] Skipping bill 1827759 - already processed (337/2605)
2025-12-01 12:32:29,875 [INFO] Skipping bill 1807849 - already processed (338/2605)
2025-12-01 12:32:29,875 [INFO] Skipping bill 1852469 - already processed (339/2605)
2025-12-01 12:32:29,875 [INFO] Skipping bill 1724818 - already processed (340/2605)
2025-12-01 12:32:29,875 [INFO] Skipping bill 1827801 - already processed (341/2605)
2025-12-01 12:32:29,876 [INFO] Skipping bill 1842042 - already processed (342/2605)
2025-12-01 12:32:29,876 [INFO] Skipping bill 1800509 - already processed (343/2605)
2025-12-01 12:32:29,876 [INFO] Skipping bill 1829048 - already processed (344/2605)
2025-12-01 12:32:29,876 [INFO] Skipping bill 1691393 - already
processed (345/2605)
2025-12-01 12:32:29,876 [INFO] Skipping bill 1684843 - already processed (346/2605)
2025-12-01 12:32:29,876 [INFO] Skipping bill 1945161 - already processed (347/2605)
2025-12-01 12:32:29,876 [INFO] Skipping bill 1947679 - already processed (348/2605)
2025-12-01 12:32:29,876 [INFO] Skipping bill 1943273 - already processed (349/2605)
2025-12-01 12:32:29,877 [INFO] Skipping bill 1919150 - already processed (350/2605)
2025-12-01 12:32:29,877 [INFO] Skipping bill 2012228 - already processed (351/2605)
2025-12-01 12:32:29,877 [INFO] Skipping bill 1990355 - already processed (352/2605)
2025-12-01 12:32:29,877 [INFO] Skipping bill 1960995 - already processed (353/2605)
2025-12-01 12:32:29,877 [INFO] Skipping bill 1968119 - already processed (354/2605)
2025-12-01 12:32:29,877 [INFO] Skipping bill 2006978 - already processed (355/2605)
2025-12-01 12:32:29,877 [INFO] Skipping bill 1974144 - already processed (356/2605)
2025-12-01 12:32:29,877 [INFO] Skipping bill 1974243 - already processed (357/2605)
2025-12-01 12:32:29,878 [INFO] Skipping bill 1974425 - already processed (358/2605)
2025-12-01 12:32:29,878 [INFO] Skipping bill 2016144 - already processed (359/2605)
2025-12-01 12:32:29,878 [INFO] Skipping bill 1974177 - already processed (360/2605)
2025-12-01 12:32:29,878 [INFO] Skipping bill 1974222 - already processed (361/2605)
2025-12-01 12:32:29,878 [INFO] Skipping bill 1974239 - already processed (362/2605)
2025-12-01 12:32:29,878 [INFO] Skipping bill 1974292 - already processed (363/2605)
2025-12-01 12:32:29,878 [INFO] Skipping bill 1974356 - already processed (364/2605)
2025-12-01 12:32:29,878 [INFO] Skipping bill 1974381 - already processed (365/2605)
2025-12-01 12:32:29,878 [INFO] Skipping bill 1974418 - already processed (366/2605)
2025-12-01 12:32:29,878 [INFO] Skipping bill 1990318 - already processed (367/2605)
2025-12-01 12:32:29,878 [INFO] Skipping bill 1987837 - already processed (368/2605)
2025-12-01 12:32:29,878 [INFO] Skipping bill 1974421 - already processed (369/2605)
2025-12-01 12:32:29,878 [INFO] Skipping bill 1982057 - already processed (370/2605)
2025-12-01 12:32:29,879 [INFO] Skipping bill 1968164 - already processed (371/2605)
2025-12-01 12:32:29,879 [INFO] Skipping bill 1979990 - already processed (372/2605)
2025-12-01 12:32:29,879 [INFO] Skipping bill 1961023 - already processed (373/2605)
2025-12-01 12:32:29,879 [INFO] Skipping bill 1970366 - already processed (374/2605)
2025-12-01 12:32:29,879 [INFO] Skipping bill 1976266 - already processed (375/2605)
2025-12-01 12:32:29,879 [INFO] Skipping bill 1735435 - already processed (376/2605)
2025-12-01 12:32:29,879 [INFO] Skipping bill 1735103 - already processed (377/2605)
2025-12-01 12:32:29,879 [INFO] Skipping bill 1735239 - already processed (378/2605)
2025-12-01 12:32:29,879 [INFO] Skipping bill 1676639 - already processed (379/2605)
2025-12-01 12:32:29,879 [INFO] Skipping bill 1822936 - already processed (380/2605)
2025-12-01 12:32:29,879 [INFO] Skipping bill 1824099 - already processed (381/2605)
2025-12-01 12:32:29,879 [INFO] Skipping bill 1823066 - already processed (382/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1821100 - already processed (383/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1821376 - already processed (384/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1861884 - already processed (385/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1862091 - already processed (386/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1824408 - already processed (387/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1823094 - already processed (388/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1859976 - already processed (389/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1860020 - already processed (390/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1822457 - already processed (391/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1823240 - already processed (392/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1822425 - already processed (393/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1823305 - already processed (394/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1816605 - already processed (395/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1822519 - already processed (396/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1822760 - already processed (397/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1821542 - already processed (398/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1862395 - already processed (399/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1862180 - already processed (400/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1820992 - already processed (401/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1822908 - already processed (402/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1816124 - already processed (403/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1826161 - already processed (404/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1822451 - already processed (405/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1823328 - already processed (406/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1860844 - already processed (407/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1819671 - already processed (408/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1815658 - already processed (409/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1929168 - already processed (410/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1939103 - already processed (411/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1939150 - already processed (412/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1924410 - already processed (413/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1929804 - already processed (414/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1929561 - already processed (415/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1925992 - already processed (416/2605)
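Every failure in this log is the same `context_length_exceeded` error: the serialized `bill_json` passed to `chain.invoke` in `create_detailed_report` far exceeds the model's 128,000-token window (one bill reached 853,564 tokens). A minimal guard would clip the serialized bill to an approximate token budget before invoking the chain. The sketch below is an assumption, not the script's actual code: `truncate_bill_json`, the `CHARS_PER_TOKEN` heuristic (~4 characters per token for English/JSON text), and the budget value are all hypothetical; a real fix would count tokens exactly (e.g. with a tokenizer matched to the model).

```python
import json

# Rough heuristic: ~4 characters per token for English/JSON text (assumption).
CHARS_PER_TOKEN = 4
MAX_INPUT_TOKENS = 100_000  # headroom below the 128,000-token context limit


def truncate_bill_json(bill: dict, max_tokens: int = MAX_INPUT_TOKENS) -> str:
    """Serialize a bill and clip it to an approximate token budget.

    Hypothetical helper: clipping can cut mid-field, so the result may be
    invalid JSON -- acceptable as LLM input, not for re-parsing.
    """
    bill_json = json.dumps(bill)
    max_chars = max_tokens * CHARS_PER_TOKEN
    if len(bill_json) <= max_chars:
        return bill_json
    return bill_json[:max_chars]
```

With this guard, the call site would become something like `result = chain.invoke({"bill_json": truncate_bill_json(bill)})`, trading lost tail text for a request that fits the window.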
2025-12-01 12:32:29,880 [INFO] Skipping bill 1928926 - already processed (417/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1931961 - already processed (418/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1929636 - already processed (419/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1909994 - already processed (420/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1928408 - already processed (421/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1928598 - already processed (422/2605)
2025-12-01 12:32:29,880 [INFO] Skipping bill 1994243 - already processed (423/2605)
2025-12-01 12:32:29,881 [INFO] Skipping bill 1994303 - already processed (424/2605)
2025-12-01 12:32:29,881 [INFO] Skipping bill 1929659 - already processed (425/2605)
2025-12-01 12:32:29,881 [INFO] Skipping bill 1932766 - already processed (426/2605)
2025-12-01 12:32:29,881 [INFO] Skipping bill 1928570 - already processed (427/2605)
2025-12-01 12:32:29,881 [INFO] Skipping bill 1934608 - already processed (428/2605)
2025-12-01 12:32:29,881 [INFO] Skipping bill 1928364 - already processed (429/2605)
2025-12-01 12:32:29,881 [INFO] Skipping bill 1929760 - already processed (430/2605)
2025-12-01 12:32:29,881 [INFO] Skipping bill 1933272 - already processed (431/2605)
2025-12-01 12:32:29,881 [INFO] Skipping bill 1929496 - already processed (432/2605)
2025-12-01 12:32:29,881 [INFO] Skipping bill 1990347 - already processed (433/2605)
2025-12-01 12:32:29,881 [INFO] Skipping bill 1995251 - already processed (434/2605)
2025-12-01 12:32:29,881 [INFO] Skipping bill 1995449 - already processed (435/2605)
2025-12-01 12:32:29,881 [INFO] Skipping bill 1995259 - already processed (436/2605)
2025-12-01 12:32:29,881 [INFO] Skipping bill 1995271 - already processed (437/2605)
2025-12-01 12:32:29,881 [INFO] Skipping bill 1995747 - already processed (438/2605)
2025-12-01 12:32:29,881 [INFO] Skipping bill 1991557 - already processed (439/2605)
2025-12-01 12:32:29,881 [INFO] Skipping bill 1991563 - already
processed (440/2605)
2025-12-01 12:32:29,881 [INFO] Skipping bill 1995783 - already processed (441/2605)
2025-12-01 12:32:29,881 [INFO] Skipping bill 1929457 - already processed (442/2605)
2025-12-01 12:32:29,881 [INFO] Skipping bill 1915997 - already processed (443/2605)
2025-12-01 12:32:29,881 [INFO] Skipping bill 1933178 - already processed (444/2605)
2025-12-01 12:32:29,881 [INFO] Skipping bill 1992758 - already processed (445/2605)
2025-12-01 12:32:29,881 [INFO] Skipping bill 1993026 - already processed (446/2605)
2025-12-01 12:32:29,881 [INFO] Skipping bill 1995569 - already processed (447/2605)
2025-12-01 12:32:29,881 [INFO] Skipping bill 1992805 - already processed (448/2605)
2025-12-01 12:32:29,881 [INFO] Skipping bill 1995900 - already processed (449/2605)
2025-12-01 12:32:29,881 [INFO] Skipping bill 1993019 - already processed (450/2605)
2025-12-01 12:32:29,881 [INFO] Skipping bill 1847870 - already processed (451/2605)
2025-12-01 12:32:29,881 [INFO] Skipping bill 1812600 - already processed (452/2605)
2025-12-01 12:32:29,881 [INFO] Skipping bill 1848008 - already processed (453/2605)
2025-12-01 12:32:29,881 [INFO] Skipping bill 1825516 - already processed (454/2605)
2025-12-01 12:32:29,881 [INFO] Processing 455/2605: Bill ID 1845026
2025-12-01 12:32:30,446 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:32:30,447 [ERROR] Failed to generate report for bill 1845026: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 153566 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:32:31,456 [INFO] Skipping bill 1962312 - already processed (456/2605)
2025-12-01 12:32:31,456 [INFO] Skipping bill 1954011 - already processed (457/2605)
2025-12-01 12:32:31,456 [INFO] Skipping bill 1991380 - already processed (458/2605)
2025-12-01 12:32:31,457 [INFO] Processing 459/2605: Bill ID 2011846
2025-12-01 12:32:31,815 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:32:31,817 [ERROR] Failed to generate report for bill 2011846: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 147671 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:32:32,827 [INFO] Skipping bill 1838778 - already processed (460/2605)
2025-12-01 12:32:32,828 [INFO] Skipping bill 1713666 - already processed (461/2605)
2025-12-01 12:32:32,828 [INFO] Skipping bill 1837146 - already processed (462/2605)
2025-12-01 12:32:32,828 [INFO] Skipping bill 1842401 - already processed (463/2605)
2025-12-01 12:32:32,828 [INFO] Skipping bill 1838992 - already processed (464/2605)
2025-12-01 12:32:32,829 [INFO] Skipping bill 1840748 - already processed (465/2605)
2025-12-01 12:32:32,829 [INFO] Skipping bill 1841780 - already processed (466/2605)
2025-12-01 12:32:32,829 [INFO] Skipping bill 1831504 - already processed (467/2605)
2025-12-01 12:32:32,829 [INFO] Skipping bill 1832905 - already processed (468/2605)
2025-12-01 12:32:32,829 [INFO] Skipping bill 1843072 - already processed (469/2605)
2025-12-01 12:32:32,829 [INFO] Skipping bill 1839869 - already processed (470/2605)
2025-12-01 12:32:32,829 [INFO] Skipping bill 1814012 - already processed (471/2605)
2025-12-01 12:32:32,830 [INFO] Skipping bill 1842520 - already
processed (472/2605)
2025-12-01 12:32:32,830 [INFO] Skipping bill 1835262 - already processed (473/2605)
2025-12-01 12:32:32,830 [INFO] Skipping bill 1843020 - already processed (474/2605)
2025-12-01 12:32:32,830 [INFO] Skipping bill 1878243 - already processed (475/2605)
2025-12-01 12:32:32,830 [INFO] Skipping bill 1893072 - already processed (476/2605)
2025-12-01 12:32:32,830 [INFO] Skipping bill 1713755 - already processed (477/2605)
2025-12-01 12:32:32,830 [INFO] Skipping bill 1842316 - already processed (478/2605)
2025-12-01 12:32:32,830 [INFO] Skipping bill 1838852 - already processed (479/2605)
2025-12-01 12:32:32,830 [INFO] Skipping bill 1838748 - already processed (480/2605)
2025-12-01 12:32:32,830 [INFO] Skipping bill 1635340 - already processed (481/2605)
2025-12-01 12:32:32,830 [INFO] Skipping bill 1713127 - already processed (482/2605)
2025-12-01 12:32:32,830 [INFO] Skipping bill 1818470 - already processed (483/2605)
2025-12-01 12:32:32,830 [INFO] Skipping bill 1837189 - already processed (484/2605)
2025-12-01 12:32:32,831 [INFO] Skipping bill 1635556 - already processed (485/2605)
2025-12-01 12:32:32,831 [INFO] Skipping bill 1692465 - already processed (486/2605)
2025-12-01 12:32:32,831 [INFO] Skipping bill 1843326 - already processed (487/2605)
2025-12-01 12:32:32,831 [INFO] Skipping bill 1822203 - already processed (488/2605)
2025-12-01 12:32:32,831 [INFO] Skipping bill 1838434 - already processed (489/2605)
2025-12-01 12:32:32,831 [INFO] Skipping bill 1714042 - already processed (490/2605)
2025-12-01 12:32:32,831 [INFO] Skipping bill 1840824 - already processed (491/2605)
2025-12-01 12:32:32,831 [INFO] Skipping bill 1810043 - already processed (492/2605)
2025-12-01 12:32:32,831 [INFO] Skipping bill 1762665 - already processed (493/2605)
2025-12-01 12:32:32,831 [INFO] Skipping bill 1831619 - already processed (494/2605)
2025-12-01 12:32:32,831 [INFO] Skipping bill 1712988 - already processed (495/2605)
2025-12-01 12:32:32,831 [INFO] Skipping bill
1704077 - already processed (496/2605) 2025-12-01 12:32:32,832 [INFO] Skipping bill 1712903 - already processed (497/2605) 2025-12-01 12:32:32,832 [INFO] Skipping bill 1818714 - already processed (498/2605) 2025-12-01 12:32:32,832 [INFO] Skipping bill 1842743 - already processed (499/2605) 2025-12-01 12:32:32,832 [INFO] Processing 500/2605: Bill ID 1838518 2025-12-01 12:32:35,260 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 12:32:35,262 [ERROR] Failed to generate report for bill 1838518: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 853564 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... 
**kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... **kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return 
self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 853564 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 12:32:35,319 [INFO] Saved 2596 reports to data/bill_reports.json 2025-12-01 12:32:35,319 [INFO] Progress: 500/2605 - Processed: 0, Skipped: 488, Errors: 12 2025-12-01 12:32:36,324 [INFO] Processing 501/2605: Bill ID 1794181 2025-12-01 12:32:36,895 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 12:32:36,896 [ERROR] Failed to generate report for bill 1794181: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 151032 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:32:37,904 [INFO] Processing 502/2605: Bill ID 1708593
2025-12-01 12:32:38,432 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:32:38,433 [ERROR] Failed to generate report for bill 1708593: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 139146 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:32:39,441 [INFO] Processing 503/2605: Bill ID 1704148
2025-12-01 12:32:41,403 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:32:41,405 [ERROR] Failed to generate report for bill 1704148: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 823023 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:32:42,417 [INFO] Processing 504/2605: Bill ID 1704278
2025-12-01 12:32:44,682 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:32:44,684 [ERROR] Failed to generate report for bill 1704278: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 823015 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:32:45,695 [INFO] Skipping bill 1714051 - already processed (505/2605)
2025-12-01 12:32:45,695 [INFO] Skipping bill 1951980 - already processed (506/2605)
2025-12-01 12:32:45,695 [INFO] Skipping bill 1942546 - already processed (507/2605)
2025-12-01 12:32:45,696 [INFO] Skipping bill 1954662 - already processed (508/2605)
2025-12-01 12:32:45,696 [INFO] Skipping bill 1962278 - already processed (509/2605)
2025-12-01 12:32:45,696 [INFO] Skipping bill 1959604 - already processed (510/2605)
2025-12-01 12:32:45,696 [INFO] Skipping bill 1961963 - already processed (511/2605)
2025-12-01 12:32:45,696 [INFO] Skipping bill 1906420 - already processed (512/2605)
2025-12-01 12:32:45,696 [INFO] Skipping bill 1959700 - already processed (513/2605)
2025-12-01 12:32:45,696 [INFO] Skipping bill 1960223 - already processed (514/2605)
2025-12-01 12:32:45,697 [INFO] Skipping bill 1955104 - already processed (515/2605)
2025-12-01 12:32:45,697 [INFO] Skipping bill 1962582 - already processed (516/2605)
2025-12-01 12:32:45,697 [INFO] Skipping bill 1945671 - already processed (517/2605)
2025-12-01 12:32:45,697 [INFO] Skipping bill 1927329 - already processed (518/2605)
2025-12-01 12:32:45,697 [INFO] Skipping bill 1950703 - already processed (519/2605)
2025-12-01 12:32:45,697 [INFO] Skipping bill 1962488 - already processed (520/2605)
2025-12-01 12:32:45,697 [INFO] Skipping bill 1945525 - already processed (521/2605)
2025-12-01 12:32:45,698 [INFO] Skipping bill 1958920 - already processed (522/2605)
2025-12-01 12:32:45,698 [INFO] Skipping bill 1962097 - already processed (523/2605)
2025-12-01 12:32:45,698 [INFO] Skipping bill 1963192 - already processed (524/2605)
2025-12-01 12:32:45,698 [INFO] Skipping bill 1947169 - already processed (525/2605)
2025-12-01 12:32:45,698 [INFO] Skipping bill 1961929 - already processed (526/2605)
2025-12-01 12:32:45,698 [INFO] Skipping bill 1962057 - already processed (527/2605)
2025-12-01 12:32:45,698 [INFO] Skipping bill 1973797 - already processed (528/2605)
2025-12-01 12:32:45,699 [INFO] Skipping bill 1963087 - already processed (529/2605)
2025-12-01 12:32:45,699 [INFO] Skipping bill 1940139 - already processed (530/2605)
2025-12-01 12:32:45,699 [INFO] Skipping bill 1941211 - already processed (531/2605)
2025-12-01 12:32:45,699 [INFO] Skipping bill 1906434 - already processed (532/2605)
2025-12-01 12:32:45,699 [INFO] Skipping bill 1963178 - already processed (533/2605)
2025-12-01 12:32:45,699 [INFO] Skipping bill 1954188 - already processed (534/2605)
2025-12-01 12:32:45,699 [INFO] Skipping bill 1954475 - already processed (535/2605)
2025-12-01 12:32:45,699 [INFO] Skipping bill 1957381 - already processed (536/2605)
2025-12-01 12:32:45,699 [INFO] Skipping bill 1962329 - already processed (537/2605)
2025-12-01 12:32:45,699 [INFO] Skipping bill 1962675 - already processed (538/2605)
2025-12-01 12:32:45,699 [INFO] Skipping bill 1935756 - already processed (539/2605)
2025-12-01 12:32:45,699 [INFO] Skipping bill 1945467 - already processed (540/2605)
2025-12-01 12:32:45,699 [INFO] Skipping bill 1907066 - already processed (541/2605)
2025-12-01 12:32:45,699 [INFO] Skipping bill 1985138 - already processed (542/2605)
2025-12-01 12:32:45,699 [INFO] Skipping bill 1961501 - already processed (543/2605)
2025-12-01 12:32:45,699 [INFO] Skipping bill 1962291 - already processed (544/2605)
2025-12-01 12:32:45,699 [INFO] Skipping bill 2034790 - already processed (545/2605)
2025-12-01 12:32:45,699 [INFO] Skipping bill 2047690 - already processed (546/2605)
2025-12-01 12:32:45,700 [INFO] Skipping bill 2052256 - already processed (547/2605)
2025-12-01 12:32:45,700 [INFO] Skipping bill 1962885 - already processed (548/2605)
2025-12-01 12:32:45,700 [INFO] Skipping bill 1960413 - already processed (549/2605)
2025-12-01 12:32:45,700 [INFO] Skipping bill 1959956 - already processed (550/2605)
2025-12-01 12:32:45,700 [INFO] Processing 551/2605: Bill ID 1962986
2025-12-01 12:32:48,880 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:32:48,882 [ERROR] Failed to generate report for bill 1962986: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1167379 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:32:49,892 [INFO] Processing 552/2605: Bill ID 1960510
2025-12-01 12:32:50,516 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:32:50,519 [ERROR] Failed to generate report for bill 1960510: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 156228 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 156228 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:32:51,528 [INFO] Skipping bill 1962952 - already processed (553/2605)
2025-12-01 12:32:51,529 [INFO] Processing 554/2605: Bill ID 1645841
2025-12-01 12:32:52,054 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:32:52,056 [ERROR] Failed to generate report for bill 1645841: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 162324 tokens.
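Every failure in this log has the same shape: the serialized bill JSON handed to `chain.invoke` already exceeds the model's 128,000-token window, so the request is rejected before any generation starts. A minimal guard, sketched below with a rough ~4-characters-per-token estimate, could clip the input before the chain is invoked. The constants, the token heuristic, and the function names are illustrative assumptions, not code from `generate_reports.py`.

```python
# Hypothetical pre-check for oversized bill JSON; a sketch, not the script's
# actual implementation. Assumes ~4 characters per token, which is only a
# rough heuristic (a real tokenizer such as tiktoken would be more accurate).

MAX_CONTEXT_TOKENS = 128_000
RESERVED_TOKENS = 8_000      # assumed headroom for the prompt template + completion
CHARS_PER_TOKEN = 4          # crude estimate

def estimate_tokens(text: str) -> int:
    """Cheap token estimate based on character count."""
    return len(text) // CHARS_PER_TOKEN

def truncate_to_token_budget(
    bill_json: str,
    budget: int = MAX_CONTEXT_TOKENS - RESERVED_TOKENS,
) -> str:
    """Clip the serialized bill so its estimated token count fits the budget."""
    max_chars = budget * CHARS_PER_TOKEN
    if len(bill_json) <= max_chars:
        return bill_json
    # Drop the tail; for bills 8x over budget (1,167,379 tokens above),
    # summarizing in chunks would preserve more information than clipping.
    return bill_json[:max_chars]
```

A bill like 1960510 (156,228 estimated tokens) would be clipped to fit; one could then call `chain.invoke({"bill_json": truncate_to_token_budget(bill_json)})` instead of passing the raw string.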
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:32:53,065 [INFO] Skipping bill 1799709 - already processed (555/2605)
2025-12-01 12:32:53,066 [INFO] Skipping bill 1797422 - already processed (556/2605)
2025-12-01 12:32:53,066 [INFO] Skipping bill 1801018 - already processed (557/2605)
2025-12-01 12:32:53,066 [INFO] Skipping bill 1799688 - already processed (558/2605)
2025-12-01 12:32:53,066 [INFO] Skipping bill 1909475 - already processed (559/2605)
2025-12-01 12:32:53,066 [INFO] Skipping bill 1921138 - already processed (560/2605)
2025-12-01 12:32:53,066 [INFO] Skipping bill 1917007 - already processed (561/2605)
2025-12-01 12:32:53,066 [INFO] Skipping bill 1921879 - already processed (562/2605)
2025-12-01 12:32:53,067 [INFO] Skipping bill 1915249 - already processed (563/2605)
2025-12-01 12:32:53,067 [INFO] Skipping bill 1912345 - already processed (564/2605)
2025-12-01 12:32:53,067 [INFO] Processing 565/2605: Bill ID 1897676
2025-12-01 12:32:53,583 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:32:53,584 [ERROR] Failed to
generate report for bill 1897676: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 165130 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:32:54,593 [INFO] Skipping bill 1847772 - already processed (566/2605)
2025-12-01 12:32:54,593 [INFO] Skipping bill 1825218 - already processed (567/2605)
2025-12-01 12:32:54,593 [INFO] Skipping bill 1839463 - already processed (568/2605)
2025-12-01 12:32:54,593 [INFO] Skipping bill 1665194 - already processed (569/2605)
2025-12-01 12:32:54,594 [INFO] Skipping bill 1708118 - already processed (570/2605)
2025-12-01 12:32:54,594 [INFO] Skipping bill 1802090 - already processed (571/2605)
2025-12-01 12:32:54,594 [INFO] Skipping bill 1823725 - already processed (572/2605)
2025-12-01 12:32:54,594 [INFO] Skipping bill 1845657 - already processed (573/2605)
2025-12-01 12:32:54,594 [INFO] Skipping bill 1846612 - already processed (574/2605)
2025-12-01 12:32:54,594 [INFO] Skipping bill 1870077 - already processed (575/2605)
2025-12-01 12:32:54,594 [INFO] Skipping bill 1870897 - already processed (576/2605)
2025-12-01 12:32:54,594 [INFO] Skipping bill 1761153 - already processed (577/2605)
2025-12-01 12:32:54,595 [INFO] Skipping bill 1760883 - already
processed (578/2605)
2025-12-01 12:32:54,595 [INFO] Skipping bill 1752922 - already processed (579/2605)
2025-12-01 12:32:54,595 [INFO] Skipping bill 1873484 - already processed (580/2605)
2025-12-01 12:32:54,595 [INFO] Skipping bill 1990915 - already processed (581/2605)
2025-12-01 12:32:54,595 [INFO] Skipping bill 1969038 - already processed (582/2605)
2025-12-01 12:32:54,595 [INFO] Skipping bill 1993838 - already processed (583/2605)
2025-12-01 12:32:54,595 [INFO] Skipping bill 1958795 - already processed (584/2605)
2025-12-01 12:32:54,596 [INFO] Skipping bill 1977734 - already processed (585/2605)
2025-12-01 12:32:54,596 [INFO] Skipping bill 1937592 - already processed (586/2605)
2025-12-01 12:32:54,596 [INFO] Skipping bill 1963811 - already processed (587/2605)
2025-12-01 12:32:54,596 [INFO] Skipping bill 2029033 - already processed (588/2605)
2025-12-01 12:32:54,596 [INFO] Skipping bill 2026836 - already processed (589/2605)
2025-12-01 12:32:54,596 [INFO] Skipping bill 2027180 - already processed (590/2605)
2025-12-01 12:32:54,596 [INFO] Skipping bill 2021349 - already processed (591/2605)
2025-12-01 12:32:54,596 [INFO] Skipping bill 2030059 - already processed (592/2605)
2025-12-01 12:32:54,597 [INFO] Skipping bill 1823829 - already processed (593/2605)
2025-12-01 12:32:54,597 [INFO] Skipping bill 1824037 - already processed (594/2605)
2025-12-01 12:32:54,597 [INFO] Skipping bill 1850989 - already processed (595/2605)
2025-12-01 12:32:54,597 [INFO] Skipping bill 1826921 - already processed (596/2605)
2025-12-01 12:32:54,597 [INFO] Skipping bill 1690087 - already processed (597/2605)
2025-12-01 12:32:54,597 [INFO] Processing 598/2605: Bill ID 1693524
2025-12-01 12:32:55,385 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:32:55,387 [ERROR] Failed to generate report for bill 1693524: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens.
However, your messages resulted in 225348 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:32:56,396 [INFO] Skipping bill 1665637 - already processed (599/2605)
2025-12-01 12:32:56,397 [INFO] Skipping bill 1682635 - already processed (600/2605)
2025-12-01 12:32:56,397 [INFO] Processing 601/2605: Bill ID 1692213
2025-12-01 12:32:57,097 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:32:57,099 [ERROR] Failed to generate report for bill 1692213: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 225670 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:32:58,108 [INFO] Processing 602/2605: Bill ID 1846626
2025-12-01 12:32:58,742 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:32:58,744 [ERROR] Failed to generate report for bill 1846626: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 225565 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:32:59,757 [INFO] Processing 603/2605: Bill ID 1846675
2025-12-01 12:33:00,525 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:33:00,528 [ERROR] Failed to generate report for bill 1846675: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 225290 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:33:01,535 [INFO] Skipping bill 1653927 - already processed (604/2605)
2025-12-01 12:33:01,535 [INFO] Skipping bill 1959326 - already processed (605/2605)
2025-12-01 12:33:01,535 [INFO] Skipping bill 1948632 - already processed (606/2605)
2025-12-01 12:33:01,536 [INFO] Skipping bill 1955060 - already processed (607/2605)
2025-12-01 12:33:01,536 [INFO] Skipping bill 1946546 - already processed (608/2605)
2025-12-01 12:33:01,536 [INFO] Processing 609/2605: Bill ID 1916487
2025-12-01 12:33:02,270 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:33:02,272 [ERROR] Failed to generate report for bill 1916487: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 242611 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:33:03,281 [INFO] Skipping bill 1949165 - already processed (610/2605)
2025-12-01 12:33:03,282 [INFO] Processing 611/2605: Bill ID 1938020
2025-12-01 12:33:04,090 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:33:04,092 [ERROR] Failed to generate report for bill 1938020: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 238559 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:33:05,101 [INFO] Processing 612/2605: Bill ID 1937464
2025-12-01 12:33:05,802 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:33:05,804 [ERROR] Failed to generate report for bill 1937464: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 238890 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:33:06,814 [INFO] Processing 613/2605: Bill ID 1713253
2025-12-01 12:33:07,454 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:33:07,456 [ERROR] Failed to generate report for bill 1713253: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 176351 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:33:08,465 [INFO] Skipping bill 1804283 - already processed (614/2605)
2025-12-01 12:33:08,465 [INFO] Skipping bill 1795473 - already processed (615/2605)
2025-12-01 12:33:08,466 [INFO] Skipping bill 1855405 - already processed (616/2605)
2025-12-01 12:33:08,466 [INFO] Skipping bill 1848823 - already processed (617/2605)
2025-12-01 12:33:08,466 [INFO] Skipping bill 1842483 - already processed (618/2605)
2025-12-01 12:33:08,466 [INFO] Skipping bill 1854786 - already processed (619/2605)
2025-12-01 12:33:08,466 [INFO] Skipping bill 1795485 - already processed (620/2605)
2025-12-01 12:33:08,466 [INFO] Skipping bill 1854739 - already processed (621/2605)
2025-12-01 12:33:08,466 [INFO] Skipping bill 1799043 - already processed (622/2605)
2025-12-01 12:33:08,467 [INFO] Skipping bill 1974284 - already processed (623/2605)
2025-12-01 12:33:08,467 [INFO] Skipping bill 1974163 - already processed (624/2605)
2025-12-01 12:33:08,467 [INFO] Skipping bill 1994222 - already processed (625/2605)
2025-12-01 12:33:08,467 [INFO] Skipping bill 1970124 - already processed (626/2605)
2025-12-01 12:33:08,467 [INFO] Skipping bill 1908054 - already processed (627/2605)
2025-12-01 12:33:08,467 [INFO] Skipping bill 1904666 - already processed (628/2605)
2025-12-01 12:33:08,467 [INFO] Skipping bill 1975714 - already processed (629/2605)
2025-12-01 12:33:08,468 [INFO] Skipping bill 1974214 - already processed (630/2605)
2025-12-01 12:33:08,468 [INFO] Skipping bill 1765786 - already processed (631/2605)
2025-12-01 12:33:08,468 [INFO] Skipping bill 1751941 - already processed (632/2605)
2025-12-01 12:33:08,468 [INFO] Skipping bill 1747213 - already processed (633/2605)
2025-12-01 12:33:08,468 [INFO] Skipping bill 1872579 - already processed (634/2605)
2025-12-01 12:33:08,468 [INFO] Skipping bill 1831630 - already processed (635/2605)
2025-12-01 12:33:08,468 [INFO] Skipping bill 1869553 - already processed (636/2605)
2025-12-01 12:33:08,468 [INFO] Skipping bill 1856482 - already processed (637/2605)
2025-12-01 12:33:08,469 [INFO] Skipping bill 1877177 - already processed (638/2605)
2025-12-01 12:33:08,469 [INFO] Skipping bill 1856535 - already processed (639/2605)
2025-12-01 12:33:08,469 [INFO] Processing 640/2605: Bill ID 1856106
2025-12-01 12:33:09,021 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:33:09,022 [ERROR] Failed to generate report for bill 1856106: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 139494 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:33:09,092 [INFO] Saved 2596 reports to data/bill_reports.json
2025-12-01 12:33:09,092 [INFO] Progress: 640/2605 - Processed: 0, Skipped: 611, Errors: 29
2025-12-01 12:33:10,097 [INFO] Skipping bill 2036140 - already processed (641/2605)
2025-12-01 12:33:10,098 [INFO] Skipping bill 2013841 - already processed (642/2605)
2025-12-01 12:33:10,098 [INFO] Skipping bill 2036152 - already processed (643/2605)
2025-12-01 12:33:10,099 [INFO] Skipping bill 2035054 - already processed (644/2605)
2025-12-01 12:33:10,099 [INFO] Skipping bill 2020836 - already processed (645/2605)
2025-12-01 12:33:10,099 [INFO] Skipping bill 2034414 - already processed (646/2605)
2025-12-01 12:33:10,099 [INFO] Skipping bill 2036147 - already processed (647/2605)
2025-12-01 12:33:10,099 [INFO] Skipping bill 2017245 - already processed (648/2605)
2025-12-01 12:33:10,099 [INFO] Processing 649/2605: Bill ID 2020366
2025-12-01 12:33:13,220 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:33:13,221 [ERROR] Failed to
generate report for bill 2020366: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 138834 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:33:14,231 [INFO] Skipping bill 1754734 - already processed (650/2605)
2025-12-01 12:33:14,232 [INFO] Skipping bill 1766525 - already processed (651/2605)
2025-12-01 12:33:14,232 [INFO] Skipping bill 1993701 - already processed (652/2605)
2025-12-01 12:33:14,232 [INFO] Skipping bill 2024454 - already processed (653/2605)
2025-12-01 12:33:14,232 [INFO] Skipping bill 1989654 - already processed (654/2605)
2025-12-01 12:33:14,232 [INFO] Skipping bill 1923257 - already processed (655/2605)
2025-12-01 12:33:14,232 [INFO] Skipping bill 2012930 - already processed (656/2605)
2025-12-01 12:33:14,232 [INFO] Skipping bill 2022043 - already processed (657/2605)
2025-12-01 12:33:14,233 [INFO] Skipping bill 1977885 - already processed (658/2605)
2025-12-01 12:33:14,233 [INFO] Skipping bill 1903898 - already processed (659/2605)
2025-12-01 12:33:14,233 [INFO] Skipping bill 2022085 - already processed (660/2605)
2025-12-01 12:33:14,233 [INFO] Skipping bill 2024471 - already processed (661/2605)
2025-12-01 12:33:14,233 [INFO] Skipping bill 1962449 - already
processed (662/2605) 2025-12-01 12:33:14,233 [INFO] Skipping bill 1948585 - already processed (663/2605) 2025-12-01 12:33:14,233 [INFO] Skipping bill 2027763 - already processed (664/2605) 2025-12-01 12:33:14,233 [INFO] Skipping bill 2038183 - already processed (665/2605) 2025-12-01 12:33:14,234 [INFO] Skipping bill 2012908 - already processed (666/2605) 2025-12-01 12:33:14,234 [INFO] Skipping bill 1703457 - already processed (667/2605) 2025-12-01 12:33:14,234 [INFO] Skipping bill 1703326 - already processed (668/2605) 2025-12-01 12:33:14,234 [INFO] Skipping bill 1703583 - already processed (669/2605) 2025-12-01 12:33:14,234 [INFO] Skipping bill 1703488 - already processed (670/2605) 2025-12-01 12:33:14,234 [INFO] Skipping bill 1694229 - already processed (671/2605) 2025-12-01 12:33:14,234 [INFO] Skipping bill 1697293 - already processed (672/2605) 2025-12-01 12:33:14,234 [INFO] Skipping bill 1694179 - already processed (673/2605) 2025-12-01 12:33:14,234 [INFO] Skipping bill 1707790 - already processed (674/2605) 2025-12-01 12:33:14,235 [INFO] Skipping bill 1691409 - already processed (675/2605) 2025-12-01 12:33:14,235 [INFO] Skipping bill 1679149 - already processed (676/2605) 2025-12-01 12:33:14,235 [INFO] Skipping bill 1697468 - already processed (677/2605) 2025-12-01 12:33:14,235 [INFO] Skipping bill 1703148 - already processed (678/2605) 2025-12-01 12:33:14,235 [INFO] Skipping bill 1835739 - already processed (679/2605) 2025-12-01 12:33:14,235 [INFO] Skipping bill 1840482 - already processed (680/2605) 2025-12-01 12:33:14,235 [INFO] Skipping bill 1842215 - already processed (681/2605) 2025-12-01 12:33:14,235 [INFO] Skipping bill 1838035 - already processed (682/2605) 2025-12-01 12:33:14,235 [INFO] Skipping bill 1842106 - already processed (683/2605) 2025-12-01 12:33:14,235 [INFO] Skipping bill 1839236 - already processed (684/2605) 2025-12-01 12:33:14,235 [INFO] Skipping bill 1839142 - already processed (685/2605) 2025-12-01 12:33:14,235 [INFO] Skipping bill 
1838028 - already processed (686/2605) 2025-12-01 12:33:14,235 [INFO] Skipping bill 1837867 - already processed (687/2605) 2025-12-01 12:33:14,235 [INFO] Skipping bill 1835606 - already processed (688/2605) 2025-12-01 12:33:14,236 [INFO] Skipping bill 1825025 - already processed (689/2605) 2025-12-01 12:33:14,236 [INFO] Skipping bill 1826297 - already processed (690/2605) 2025-12-01 12:33:14,236 [INFO] Skipping bill 1847549 - already processed (691/2605) 2025-12-01 12:33:14,236 [INFO] Skipping bill 1839307 - already processed (692/2605) 2025-12-01 12:33:14,236 [INFO] Skipping bill 1842129 - already processed (693/2605) 2025-12-01 12:33:14,236 [INFO] Skipping bill 1837909 - already processed (694/2605) 2025-12-01 12:33:14,236 [INFO] Skipping bill 1797714 - already processed (695/2605) 2025-12-01 12:33:14,236 [INFO] Skipping bill 1839204 - already processed (696/2605) 2025-12-01 12:33:14,236 [INFO] Skipping bill 1835710 - already processed (697/2605) 2025-12-01 12:33:14,236 [INFO] Skipping bill 1837838 - already processed (698/2605) 2025-12-01 12:33:14,236 [INFO] Skipping bill 1837893 - already processed (699/2605) 2025-12-01 12:33:14,236 [INFO] Skipping bill 1835695 - already processed (700/2605) 2025-12-01 12:33:14,236 [INFO] Skipping bill 1837995 - already processed (701/2605) 2025-12-01 12:33:14,236 [INFO] Skipping bill 1842172 - already processed (702/2605) 2025-12-01 12:33:14,236 [INFO] Skipping bill 1817737 - already processed (703/2605) 2025-12-01 12:33:14,236 [INFO] Skipping bill 1953268 - already processed (704/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1961326 - already processed (705/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1961123 - already processed (706/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1953218 - already processed (707/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1945231 - already processed (708/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1949851 - already processed (709/2605) 2025-12-01 12:33:14,237 
[INFO] Skipping bill 1945281 - already processed (710/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1945285 - already processed (711/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1949794 - already processed (712/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1949746 - already processed (713/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1949835 - already processed (714/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1961190 - already processed (715/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1953113 - already processed (716/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1936713 - already processed (717/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1939378 - already processed (718/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1909925 - already processed (719/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1961341 - already processed (720/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1922403 - already processed (721/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1899660 - already processed (722/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1961327 - already processed (723/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1953223 - already processed (724/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1953246 - already processed (725/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1955835 - already processed (726/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1933617 - already processed (727/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1945335 - already processed (728/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1961410 - already processed (729/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1926508 - already processed (730/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1943426 - already processed (731/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1949808 - already processed (732/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1949848 - already processed (733/2605) 
2025-12-01 12:33:14,237 [INFO] Skipping bill 1947517 - already processed (734/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1945267 - already processed (735/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1961205 - already processed (736/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1953214 - already processed (737/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1943446 - already processed (738/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1973042 - already processed (739/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1961299 - already processed (740/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1933601 - already processed (741/2605) 2025-12-01 12:33:14,237 [INFO] Skipping bill 1933621 - already processed (742/2605) 2025-12-01 12:33:14,237 [INFO] Processing 743/2605: Bill ID 1919287 2025-12-01 12:33:14,588 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 12:33:14,589 [ERROR] Failed to generate report for bill 1919287: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 128427 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 128427 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 12:33:15,597 [INFO] Skipping bill 1933460 - already processed (744/2605) 2025-12-01 12:33:15,598 [INFO] Skipping bill 1933670 - already processed (745/2605) 2025-12-01 12:33:15,598 [INFO] Skipping bill 1922377 - already processed (746/2605) 2025-12-01 12:33:15,598 [INFO] Skipping bill 1735361 - already processed (747/2605) 2025-12-01 12:33:15,598 [INFO] Skipping bill 1742559 - already processed (748/2605) 2025-12-01 12:33:15,598 [INFO] Skipping bill 1775856 - already processed (749/2605) 2025-12-01 12:33:15,598 [INFO] Skipping bill 1738097 - already processed (750/2605) 2025-12-01 12:33:15,598 [INFO] Skipping bill 1794760 - already processed (751/2605) 2025-12-01 12:33:15,599 [INFO] Skipping bill 1736131 - already processed (752/2605) 2025-12-01 12:33:15,599 [INFO] Skipping bill 1885778 - already processed (753/2605) 2025-12-01 12:33:15,599 [INFO] Skipping bill 1808592 - already processed (754/2605) 2025-12-01 12:33:15,599 [INFO] Skipping bill 1878825 - already processed (755/2605) 2025-12-01 12:33:15,599 [INFO] Skipping bill 1884638 - already 
processed (756/2605) 2025-12-01 12:33:15,599 [INFO] Skipping bill 1738996 - already processed (757/2605) 2025-12-01 12:33:15,599 [INFO] Skipping bill 1878228 - already processed (758/2605) 2025-12-01 12:33:15,600 [INFO] Skipping bill 1872865 - already processed (759/2605) 2025-12-01 12:33:15,600 [INFO] Skipping bill 1881167 - already processed (760/2605) 2025-12-01 12:33:15,600 [INFO] Skipping bill 1881743 - already processed (761/2605) 2025-12-01 12:33:15,600 [INFO] Skipping bill 1852772 - already processed (762/2605) 2025-12-01 12:33:15,600 [INFO] Skipping bill 1884104 - already processed (763/2605) 2025-12-01 12:33:15,600 [INFO] Skipping bill 1738794 - already processed (764/2605) 2025-12-01 12:33:15,600 [INFO] Skipping bill 1893080 - already processed (765/2605) 2025-12-01 12:33:15,600 [INFO] Skipping bill 1881922 - already processed (766/2605) 2025-12-01 12:33:15,601 [INFO] Skipping bill 1883178 - already processed (767/2605) 2025-12-01 12:33:15,601 [INFO] Skipping bill 1881587 - already processed (768/2605) 2025-12-01 12:33:15,601 [INFO] Skipping bill 1884487 - already processed (769/2605) 2025-12-01 12:33:15,601 [INFO] Skipping bill 1859182 - already processed (770/2605) 2025-12-01 12:33:15,601 [INFO] Skipping bill 1866861 - already processed (771/2605) 2025-12-01 12:33:15,601 [INFO] Processing 772/2605: Bill ID 1891836 2025-12-01 12:33:16,147 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 12:33:16,150 [ERROR] Failed to generate report for bill 1891836: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 144997 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 144997 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 12:33:17,159 [INFO] Skipping bill 1883738 - already processed (773/2605) 2025-12-01 12:33:17,160 [INFO] Skipping bill 1682652 - already processed (774/2605) 2025-12-01 12:33:17,160 [INFO] Skipping bill 1742464 - already processed (775/2605) 2025-12-01 12:33:17,160 [INFO] Skipping bill 1728366 - already processed (776/2605) 2025-12-01 12:33:17,160 [INFO] Skipping bill 1726524 - already processed (777/2605) 2025-12-01 12:33:17,160 [INFO] Skipping bill 1737208 - already processed (778/2605) 2025-12-01 12:33:17,160 [INFO] Skipping bill 1749398 - already processed (779/2605) 2025-12-01 12:33:17,161 [INFO] Skipping bill 1738008 - already processed (780/2605) 2025-12-01 12:33:17,161 [INFO] Skipping bill 1735894 - already processed (781/2605) 2025-12-01 12:33:17,161 [INFO] Skipping bill 1841416 - already processed (782/2605) 2025-12-01 12:33:17,161 [INFO] Skipping bill 1736739 - already processed (783/2605) 2025-12-01 12:33:17,161 [INFO] Skipping bill 1737586 - already processed (784/2605) 2025-12-01 12:33:17,161 [INFO] Skipping bill 1884557 - already 
processed (785/2605) 2025-12-01 12:33:17,161 [INFO] Processing 786/2605: Bill ID 1875094 2025-12-01 12:33:18,045 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 12:33:18,060 [ERROR] Failed to generate report for bill 1875094: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 281291 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... 
**kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... **kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return 
self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 281291 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 12:33:19,071 [INFO] Processing 787/2605: Bill ID 1755026 2025-12-01 12:33:19,815 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 12:33:19,816 [ERROR] Failed to generate report for bill 1755026: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 211752 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 211752 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 12:33:20,825 [INFO] Processing 788/2605: Bill ID 1871591 2025-12-01 12:33:21,575 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 12:33:21,577 [ERROR] Failed to generate report for bill 1871591: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 247438 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 247438 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 12:33:22,583 [INFO] Processing 789/2605: Bill ID 1760451 2025-12-01 12:33:23,496 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 12:33:23,499 [ERROR] Failed to generate report for bill 1760451: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 254452 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:33:24,510 [INFO] Processing 790/2605: Bill ID 1880948
2025-12-01 12:33:25,459 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:33:25,461 [ERROR] Failed to generate report for bill 1880948: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 280764 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:33:25,517 [INFO] Saved 2596 reports to data/bill_reports.json
2025-12-01 12:33:25,518 [INFO] Progress: 790/2605 - Processed: 0, Skipped: 753, Errors: 37
2025-12-01 12:33:26,523 [INFO] Processing 791/2605: Bill ID 1775764
2025-12-01 12:33:27,494 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:33:27,502 [ERROR] Failed to generate report for bill 1775764: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 323686 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:33:28,511 [INFO] Processing 792/2605: Bill ID 1884634
2025-12-01 12:33:29,669 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:33:29,671 [ERROR] Failed to generate report for bill 1884634: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 362014 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:33:30,682 [INFO] Skipping bill 2000828 - already processed (793/2605)
2025-12-01 12:33:30,683 [INFO] Skipping bill 2001551 - already processed (794/2605)
2025-12-01 12:33:30,683 [INFO] Skipping bill 1997130 - already processed (795/2605)
2025-12-01 12:33:30,683 [INFO] Skipping bill 2046647 - already processed (796/2605)
2025-12-01 12:33:30,683 [INFO] Skipping bill 2004206 - already processed (797/2605)
2025-12-01 12:33:30,683 [INFO] Skipping bill 1998184 - already processed (798/2605)
2025-12-01 12:33:30,683 [INFO] Skipping bill 2002506 - already processed (799/2605)
2025-12-01 12:33:30,683 [INFO] Skipping bill 2002695 - already processed (800/2605)
2025-12-01 12:33:30,684 [INFO] Skipping bill 2047070 - already processed (801/2605)
2025-12-01 12:33:30,684 [INFO] Skipping bill 2002923 - already processed (802/2605)
2025-12-01 12:33:30,684 [INFO] Skipping bill 1998946 - already processed (803/2605)
2025-12-01 12:33:30,684 [INFO] Skipping bill 1997259 - already processed (804/2605)
2025-12-01 12:33:30,684 [INFO] Skipping bill 2001269 - already processed (805/2605)
2025-12-01 12:33:30,684 [INFO] Skipping bill 2000625 - already processed (806/2605)
2025-12-01 12:33:30,684 [INFO] Skipping bill 2002705 - already processed (807/2605)
2025-12-01 12:33:30,685 [INFO] Skipping bill 2046676 - already processed (808/2605)
2025-12-01 12:33:30,685 [INFO] Skipping bill 2046660 - already processed (809/2605)
2025-12-01 12:33:30,685 [INFO] Skipping bill 2003933 - already processed (810/2605)
2025-12-01 12:33:30,685 [INFO] Skipping bill 1997268 - already processed (811/2605)
2025-12-01 12:33:30,685 [INFO] Skipping bill 2019724 - already processed (812/2605)
2025-12-01 12:33:30,685 [INFO] Skipping bill 1997990 - already processed (813/2605)
2025-12-01 12:33:30,685 [INFO] Skipping bill 1998675 - already processed (814/2605)
2025-12-01 12:33:30,686 [INFO] Skipping bill 2002243 - already processed (815/2605)
2025-12-01 12:33:30,686 [INFO] Skipping bill 1997584 - already processed (816/2605)
2025-12-01 12:33:30,686 [INFO] Skipping bill 2002929 - already processed (817/2605)
2025-12-01 12:33:30,686 [INFO] Skipping bill 2001175 - already processed (818/2605)
2025-12-01 12:33:30,686 [INFO] Skipping bill 1998815 - already processed (819/2605)
2025-12-01 12:33:30,686 [INFO] Skipping bill 1998575 - already processed (820/2605)
2025-12-01 12:33:30,686 [INFO] Skipping bill 1999210 - already processed (821/2605)
2025-12-01 12:33:30,686 [INFO] Skipping bill 2001320 - already processed (822/2605)
2025-12-01 12:33:30,686 [INFO] Skipping bill 2053304 - already processed (823/2605)
2025-12-01 12:33:30,686 [INFO] Skipping bill 2001993 - already processed (824/2605)
2025-12-01 12:33:30,686 [INFO] Skipping bill 1999288 - already processed (825/2605)
2025-12-01 12:33:30,687 [INFO] Skipping bill 1998331 - already processed (826/2605)
2025-12-01 12:33:30,687 [INFO] Skipping bill 2003746 - already processed (827/2605)
2025-12-01 12:33:30,687 [INFO] Skipping bill 1927181 - already processed (828/2605)
2025-12-01 12:33:30,687 [INFO] Skipping bill 2030259 - already processed (829/2605)
2025-12-01 12:33:30,687 [INFO] Skipping bill 1997622 - already processed (830/2605)
2025-12-01 12:33:30,687 [INFO] Processing 831/2605: Bill ID 2028594
2025-12-01 12:33:31,482 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:33:31,483 [ERROR] Failed to generate report for bill 2028594: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 252856 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:33:32,492 [INFO] Processing 832/2605: Bill ID 2038620
2025-12-01 12:33:33,509 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:33:33,510 [ERROR] Failed to generate report for bill 2038620: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 311445 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:33:34,520 [INFO] Processing 833/2605: Bill ID 2024637
2025-12-01 12:33:35,301 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:33:35,304 [ERROR] Failed to generate report for bill 2024637: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 218599 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:33:36,318 [INFO] Skipping bill 1780182 - already processed (834/2605)
2025-12-01 12:33:36,319 [INFO] Skipping bill 1895692 - already processed (835/2605)
2025-12-01 12:33:36,319 [INFO] Skipping bill 1780190 - already processed (836/2605)
2025-12-01 12:33:36,319 [INFO] Skipping bill 1780196 - already processed (837/2605)
2025-12-01 12:33:36,319 [INFO] Skipping bill 1780166 - already processed (838/2605)
2025-12-01 12:33:36,319 [INFO] Skipping bill 1888099 - already processed (839/2605)
2025-12-01 12:33:36,319 [INFO] Skipping bill 1852983 - already processed (840/2605)
2025-12-01 12:33:36,319 [INFO] Skipping bill 1852813 - already processed (841/2605)
2025-12-01 12:33:36,320 [INFO] Skipping bill 2037995 - already processed (842/2605)
2025-12-01 12:33:36,320 [INFO] Skipping bill 2043787 - already processed (843/2605)
2025-12-01 12:33:36,320 [INFO] Skipping bill 2035241 - already processed (844/2605)
2025-12-01 12:33:36,320 [INFO] Skipping bill 2035278 - already processed (845/2605)
2025-12-01 12:33:36,320 [INFO] Skipping bill 2038014 - already
processed (846/2605) 2025-12-01 12:33:36,320 [INFO] Skipping bill 2009885 - already processed (847/2605) 2025-12-01 12:33:36,320 [INFO] Skipping bill 2035768 - already processed (848/2605) 2025-12-01 12:33:36,321 [INFO] Skipping bill 2025453 - already processed (849/2605) 2025-12-01 12:33:36,321 [INFO] Skipping bill 2038856 - already processed (850/2605) 2025-12-01 12:33:36,321 [INFO] Skipping bill 2009892 - already processed (851/2605) 2025-12-01 12:33:36,321 [INFO] Skipping bill 1861260 - already processed (852/2605) 2025-12-01 12:33:36,321 [INFO] Skipping bill 1856334 - already processed (853/2605) 2025-12-01 12:33:36,321 [INFO] Skipping bill 1856821 - already processed (854/2605) 2025-12-01 12:33:36,321 [INFO] Skipping bill 1864646 - already processed (855/2605) 2025-12-01 12:33:36,321 [INFO] Skipping bill 1860647 - already processed (856/2605) 2025-12-01 12:33:36,322 [INFO] Skipping bill 1707979 - already processed (857/2605) 2025-12-01 12:33:36,322 [INFO] Skipping bill 1643078 - already processed (858/2605) 2025-12-01 12:33:36,322 [INFO] Skipping bill 1651590 - already processed (859/2605) 2025-12-01 12:33:36,322 [INFO] Skipping bill 1852405 - already processed (860/2605) 2025-12-01 12:33:36,322 [INFO] Skipping bill 1852812 - already processed (861/2605) 2025-12-01 12:33:36,322 [INFO] Skipping bill 1858711 - already processed (862/2605) 2025-12-01 12:33:36,322 [INFO] Skipping bill 1853103 - already processed (863/2605) 2025-12-01 12:33:36,322 [INFO] Skipping bill 1851979 - already processed (864/2605) 2025-12-01 12:33:36,322 [INFO] Skipping bill 1859186 - already processed (865/2605) 2025-12-01 12:33:36,322 [INFO] Skipping bill 1740589 - already processed (866/2605) 2025-12-01 12:33:36,322 [INFO] Skipping bill 1741802 - already processed (867/2605) 2025-12-01 12:33:36,322 [INFO] Skipping bill 1860410 - already processed (868/2605) 2025-12-01 12:33:36,322 [INFO] Skipping bill 1957720 - already processed (869/2605) 2025-12-01 12:33:36,322 [INFO] Skipping bill 
1974786 - already processed (870/2605) 2025-12-01 12:33:36,322 [INFO] Skipping bill 1989670 - already processed (871/2605) 2025-12-01 12:33:36,322 [INFO] Skipping bill 1979597 - already processed (872/2605) 2025-12-01 12:33:36,323 [INFO] Skipping bill 1984757 - already processed (873/2605) 2025-12-01 12:33:36,323 [INFO] Skipping bill 2009204 - already processed (874/2605) 2025-12-01 12:33:36,323 [INFO] Skipping bill 2015254 - already processed (875/2605) 2025-12-01 12:33:36,323 [INFO] Skipping bill 1974962 - already processed (876/2605) 2025-12-01 12:33:36,323 [INFO] Skipping bill 2009276 - already processed (877/2605) 2025-12-01 12:33:36,323 [INFO] Skipping bill 1989103 - already processed (878/2605) 2025-12-01 12:33:36,323 [INFO] Skipping bill 1984950 - already processed (879/2605) 2025-12-01 12:33:36,323 [INFO] Skipping bill 1975975 - already processed (880/2605) 2025-12-01 12:33:36,323 [INFO] Skipping bill 2004610 - already processed (881/2605) 2025-12-01 12:33:36,323 [INFO] Skipping bill 2004938 - already processed (882/2605) 2025-12-01 12:33:36,323 [INFO] Skipping bill 1992603 - already processed (883/2605) 2025-12-01 12:33:36,323 [INFO] Skipping bill 1992640 - already processed (884/2605) 2025-12-01 12:33:36,323 [INFO] Skipping bill 1996293 - already processed (885/2605) 2025-12-01 12:33:36,323 [INFO] Skipping bill 2011831 - already processed (886/2605) 2025-12-01 12:33:36,323 [INFO] Skipping bill 2012661 - already processed (887/2605) 2025-12-01 12:33:36,323 [INFO] Skipping bill 1950967 - already processed (888/2605) 2025-12-01 12:33:36,323 [INFO] Skipping bill 1994787 - already processed (889/2605) 2025-12-01 12:33:36,323 [INFO] Skipping bill 2011159 - already processed (890/2605) 2025-12-01 12:33:36,323 [INFO] Skipping bill 2006411 - already processed (891/2605) 2025-12-01 12:33:36,323 [INFO] Skipping bill 2011256 - already processed (892/2605) 2025-12-01 12:33:36,323 [INFO] Skipping bill 2004789 - already processed (893/2605) 2025-12-01 12:33:36,323 
[INFO] Skipping bill 1981280 - already processed (894/2605) 2025-12-01 12:33:36,323 [INFO] Skipping bill 2009071 - already processed (895/2605) 2025-12-01 12:33:36,323 [INFO] Skipping bill 1967748 - already processed (896/2605) 2025-12-01 12:33:36,324 [INFO] Skipping bill 1707150 - already processed (897/2605) 2025-12-01 12:33:36,324 [INFO] Skipping bill 1669781 - already processed (898/2605) 2025-12-01 12:33:36,324 [INFO] Skipping bill 1643012 - already processed (899/2605) 2025-12-01 12:33:36,324 [INFO] Skipping bill 1848903 - already processed (900/2605) 2025-12-01 12:33:36,324 [INFO] Skipping bill 1848260 - already processed (901/2605) 2025-12-01 12:33:36,324 [INFO] Skipping bill 1820844 - already processed (902/2605) 2025-12-01 12:33:36,324 [INFO] Skipping bill 1851922 - already processed (903/2605) 2025-12-01 12:33:36,324 [INFO] Skipping bill 1850740 - already processed (904/2605) 2025-12-01 12:33:36,324 [INFO] Skipping bill 1838535 - already processed (905/2605) 2025-12-01 12:33:36,324 [INFO] Skipping bill 1851828 - already processed (906/2605) 2025-12-01 12:33:36,324 [INFO] Skipping bill 1863177 - already processed (907/2605) 2025-12-01 12:33:36,324 [INFO] Skipping bill 1852015 - already processed (908/2605) 2025-12-01 12:33:36,324 [INFO] Skipping bill 1818886 - already processed (909/2605) 2025-12-01 12:33:36,324 [INFO] Skipping bill 1852513 - already processed (910/2605) 2025-12-01 12:33:36,324 [INFO] Processing 911/2605: Bill ID 1851836 2025-12-01 12:33:37,007 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 12:33:37,009 [ERROR] Failed to generate report for bill 1851836: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 185865 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 185865 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 12:33:38,023 [INFO] Skipping bill 1933975 - already processed (912/2605) 2025-12-01 12:33:38,023 [INFO] Skipping bill 1935092 - already processed (913/2605) 2025-12-01 12:33:38,023 [INFO] Skipping bill 1937681 - already processed (914/2605) 2025-12-01 12:33:38,023 [INFO] Skipping bill 1927333 - already processed (915/2605) 2025-12-01 12:33:38,024 [INFO] Skipping bill 1936069 - already processed (916/2605) 2025-12-01 12:33:38,024 [INFO] Skipping bill 1940299 - already processed (917/2605) 2025-12-01 12:33:38,024 [INFO] Skipping bill 1911677 - already processed (918/2605) 2025-12-01 12:33:38,024 [INFO] Skipping bill 1929973 - already processed (919/2605) 2025-12-01 12:33:38,024 [INFO] Skipping bill 1910359 - already processed (920/2605) 2025-12-01 12:33:38,024 [INFO] Skipping bill 1934687 - already processed (921/2605) 2025-12-01 12:33:38,024 [INFO] Skipping bill 1930038 - already processed (922/2605) 2025-12-01 12:33:38,024 [INFO] Skipping bill 1925325 - already processed (923/2605) 2025-12-01 12:33:38,025 [INFO] Skipping bill 1933890 - already 
processed (924/2605) 2025-12-01 12:33:38,025 [INFO] Skipping bill 1934898 - already processed (925/2605) 2025-12-01 12:33:38,025 [INFO] Skipping bill 2034194 - already processed (926/2605) 2025-12-01 12:33:38,025 [INFO] Skipping bill 1972440 - already processed (927/2605) 2025-12-01 12:33:38,025 [INFO] Skipping bill 1934020 - already processed (928/2605) 2025-12-01 12:33:38,025 [INFO] Skipping bill 1912210 - already processed (929/2605) 2025-12-01 12:33:38,025 [INFO] Skipping bill 1634819 - already processed (930/2605) 2025-12-01 12:33:38,026 [INFO] Skipping bill 1634779 - already processed (931/2605) 2025-12-01 12:33:38,026 [INFO] Skipping bill 1836873 - already processed (932/2605) 2025-12-01 12:33:38,026 [INFO] Skipping bill 1834678 - already processed (933/2605) 2025-12-01 12:33:38,026 [INFO] Skipping bill 1790707 - already processed (934/2605) 2025-12-01 12:33:38,026 [INFO] Skipping bill 1852775 - already processed (935/2605) 2025-12-01 12:33:38,026 [INFO] Skipping bill 1897040 - already processed (936/2605) 2025-12-01 12:33:38,026 [INFO] Skipping bill 1898466 - already processed (937/2605) 2025-12-01 12:33:38,026 [INFO] Skipping bill 1893847 - already processed (938/2605) 2025-12-01 12:33:38,027 [INFO] Skipping bill 1983834 - already processed (939/2605) 2025-12-01 12:33:38,027 [INFO] Skipping bill 1988287 - already processed (940/2605) 2025-12-01 12:33:38,027 [INFO] Skipping bill 1894415 - already processed (941/2605) 2025-12-01 12:33:38,027 [INFO] Skipping bill 1917533 - already processed (942/2605) 2025-12-01 12:33:38,027 [INFO] Skipping bill 1900966 - already processed (943/2605) 2025-12-01 12:33:38,027 [INFO] Skipping bill 1972401 - already processed (944/2605) 2025-12-01 12:33:38,027 [INFO] Skipping bill 1988699 - already processed (945/2605) 2025-12-01 12:33:38,028 [INFO] Skipping bill 1988844 - already processed (946/2605) 2025-12-01 12:33:38,028 [INFO] Skipping bill 1894126 - already processed (947/2605) 2025-12-01 12:33:38,028 [INFO] Skipping bill 
1974757 - already processed (948/2605) 2025-12-01 12:33:38,028 [INFO] Skipping bill 1717719 - already processed (949/2605) 2025-12-01 12:33:38,028 [INFO] Skipping bill 1912107 - already processed (950/2605) 2025-12-01 12:33:38,028 [INFO] Skipping bill 1941091 - already processed (951/2605) 2025-12-01 12:33:38,028 [INFO] Skipping bill 1916250 - already processed (952/2605) 2025-12-01 12:33:38,028 [INFO] Skipping bill 1974033 - already processed (953/2605) 2025-12-01 12:33:38,029 [INFO] Skipping bill 1895954 - already processed (954/2605) 2025-12-01 12:33:38,029 [INFO] Skipping bill 1974042 - already processed (955/2605) 2025-12-01 12:33:38,029 [INFO] Skipping bill 1981849 - already processed (956/2605) 2025-12-01 12:33:38,029 [INFO] Skipping bill 1979780 - already processed (957/2605) 2025-12-01 12:33:38,029 [INFO] Skipping bill 1896111 - already processed (958/2605) 2025-12-01 12:33:38,029 [INFO] Skipping bill 1971592 - already processed (959/2605) 2025-12-01 12:33:38,029 [INFO] Skipping bill 1971640 - already processed (960/2605) 2025-12-01 12:33:38,030 [INFO] Skipping bill 1896588 - already processed (961/2605) 2025-12-01 12:33:38,030 [INFO] Skipping bill 1981663 - already processed (962/2605) 2025-12-01 12:33:38,030 [INFO] Skipping bill 1867796 - already processed (963/2605) 2025-12-01 12:33:38,030 [INFO] Skipping bill 1867828 - already processed (964/2605) 2025-12-01 12:33:38,030 [INFO] Skipping bill 1813907 - already processed (965/2605) 2025-12-01 12:33:38,030 [INFO] Skipping bill 1814493 - already processed (966/2605) 2025-12-01 12:33:38,030 [INFO] Skipping bill 1867439 - already processed (967/2605) 2025-12-01 12:33:38,030 [INFO] Skipping bill 1814241 - already processed (968/2605) 2025-12-01 12:33:38,031 [INFO] Skipping bill 1935238 - already processed (969/2605) 2025-12-01 12:33:38,031 [INFO] Skipping bill 1908945 - already processed (970/2605) 2025-12-01 12:33:38,031 [INFO] Skipping bill 1980982 - already processed (971/2605) 2025-12-01 12:33:38,031 
[INFO] Skipping bill 1934094 - already processed (972/2605) 2025-12-01 12:33:38,031 [INFO] Skipping bill 1931194 - already processed (973/2605) 2025-12-01 12:33:38,031 [INFO] Skipping bill 1915534 - already processed (974/2605) 2025-12-01 12:33:38,031 [INFO] Skipping bill 1927914 - already processed (975/2605) 2025-12-01 12:33:38,031 [INFO] Skipping bill 1710815 - already processed (976/2605) 2025-12-01 12:33:38,032 [INFO] Skipping bill 1748189 - already processed (977/2605) 2025-12-01 12:33:38,032 [INFO] Skipping bill 1746365 - already processed (978/2605) 2025-12-01 12:33:38,032 [INFO] Skipping bill 1965229 - already processed (979/2605) 2025-12-01 12:33:38,032 [INFO] Skipping bill 1999738 - already processed (980/2605) 2025-12-01 12:33:38,032 [INFO] Skipping bill 1989648 - already processed (981/2605) 2025-12-01 12:33:38,032 [INFO] Skipping bill 1946188 - already processed (982/2605) 2025-12-01 12:33:38,032 [INFO] Skipping bill 1892638 - already processed (983/2605) 2025-12-01 12:33:38,033 [INFO] Skipping bill 1944647 - already processed (984/2605) 2025-12-01 12:33:38,033 [INFO] Skipping bill 1983017 - already processed (985/2605) 2025-12-01 12:33:38,033 [INFO] Skipping bill 1954626 - already processed (986/2605) 2025-12-01 12:33:38,033 [INFO] Skipping bill 1977147 - already processed (987/2605) 2025-12-01 12:33:38,033 [INFO] Skipping bill 2013424 - already processed (988/2605) 2025-12-01 12:33:38,033 [INFO] Skipping bill 2013451 - already processed (989/2605) 2025-12-01 12:33:38,033 [INFO] Skipping bill 1953001 - already processed (990/2605) 2025-12-01 12:33:38,033 [INFO] Skipping bill 1982880 - already processed (991/2605) 2025-12-01 12:33:38,033 [INFO] Skipping bill 1989793 - already processed (992/2605) 2025-12-01 12:33:38,033 [INFO] Skipping bill 1954479 - already processed (993/2605) 2025-12-01 12:33:38,033 [INFO] Skipping bill 2031601 - already processed (994/2605) 2025-12-01 12:33:38,033 [INFO] Skipping bill 2009433 - already processed (995/2605) 
2025-12-01 12:33:38,033 [INFO] Skipping bill 1901514 - already processed (996/2605)
2025-12-01 12:33:38,033 [INFO] Skipping bill 1651925 - already processed (997/2605)
2025-12-01 12:33:38,033 [INFO] Skipping bill 1793373 - already processed (998/2605)
2025-12-01 12:33:38,033 [INFO] Skipping bill 1793039 - already processed (999/2605)
2025-12-01 12:33:38,033 [INFO] Skipping bill 1792971 - already processed (1000/2605)
2025-12-01 12:33:38,033 [INFO] Skipping bill 1793409 - already processed (1001/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 1793958 - already processed (1002/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 1793284 - already processed (1003/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 1938552 - already processed (1004/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 1922870 - already processed (1005/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 1803710 - already processed (1006/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 1889722 - already processed (1007/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 1892083 - already processed (1008/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 1889346 - already processed (1009/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 1889719 - already processed (1010/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 1889335 - already processed (1011/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 1897572 - already processed (1012/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 1887538 - already processed (1013/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 1887101 - already processed (1014/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 1888624 - already processed (1015/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 1877673 - already processed (1016/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 1897803 - already processed (1017/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 1889758 - already processed (1018/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 1897565 - already processed (1019/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 1853521 - already processed (1020/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 1864839 - already processed (1021/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 1879513 - already processed (1022/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 1878078 - already processed (1023/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 2013662 - already processed (1024/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 1897603 - already processed (1025/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 1881186 - already processed (1026/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 1983797 - already processed (1027/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 2023789 - already processed (1028/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 1878049 - already processed (1029/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 2052496 - already processed (1030/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 1807241 - already processed (1031/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 1881870 - already processed (1032/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 1881843 - already processed (1033/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 2030230 - already processed (1034/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 2022901 - already processed (1035/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 1896879 - already processed (1036/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 1889701 - already processed (1037/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 1970250 - already processed (1038/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 2037153 - already processed (1039/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 2013635 - already processed (1040/2605)
2025-12-01 12:33:38,034 [INFO] Skipping bill 1883140 - already processed (1041/2605)
2025-12-01 12:33:38,035 [INFO] Skipping bill 1853367 - already processed (1042/2605)
2025-12-01 12:33:38,035 [INFO] Skipping bill 1801284 - already processed (1043/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 1889518 - already processed (1044/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 1888073 - already processed (1045/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 2052173 - already processed (1046/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 2047520 - already processed (1047/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 1889754 - already processed (1048/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 1835303 - already processed (1049/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 1949479 - already processed (1050/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 2022816 - already processed (1051/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 1872559 - already processed (1052/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 1875857 - already processed (1053/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 1876467 - already processed (1054/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 1876586 - already processed (1055/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 2038328 - already processed (1056/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 1878887 - already processed (1057/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 1853095 - already processed (1058/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 1805407 - already processed (1059/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 2022907 - already processed (1060/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 1949574 - already processed (1061/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 1844841 - already processed (1062/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 1864295 - already processed (1063/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 1881176 - already processed (1064/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 1837365 - already processed (1065/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 
1837180 - already processed (1066/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 1887099 - already processed (1067/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 2028679 - already processed (1068/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 2030354 - already processed (1069/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 1882474 - already processed (1070/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 1964010 - already processed (1071/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 2008967 - already processed (1072/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 1881178 - already processed (1073/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 2037324 - already processed (1074/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 1806224 - already processed (1075/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 1837135 - already processed (1076/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 1805930 - already processed (1077/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 1803406 - already processed (1078/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 1883773 - already processed (1079/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 1994137 - already processed (1080/2605) 2025-12-01 12:33:38,035 [INFO] Skipping bill 1881306 - already processed (1081/2605) 2025-12-01 12:33:38,036 [INFO] Skipping bill 1889726 - already processed (1082/2605) 2025-12-01 12:33:38,036 [INFO] Skipping bill 1889593 - already processed (1083/2605) 2025-12-01 12:33:38,036 [INFO] Processing 1084/2605: Bill ID 1883494 2025-12-01 12:33:38,849 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 12:33:38,850 [ERROR] Failed to generate report for bill 1883494: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 245791 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 245791 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 12:33:39,864 [INFO] Processing 1085/2605: Bill ID 1883535 2025-12-01 12:33:40,590 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 12:33:40,591 [ERROR] Failed to generate report for bill 1883535: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 244625 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
           ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 244625 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:33:41,602 [INFO] Processing 1086/2605: Bill ID 2038569
2025-12-01 12:33:42,538 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:33:42,540 [ERROR] Failed to generate report for bill 2038569: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 248177 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:33:43,555 [INFO] Processing 1087/2605: Bill ID 2038571
2025-12-01 12:33:44,390 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:33:44,391 [ERROR] Failed to generate report for bill 2038571: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 248161 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:33:45,408 [INFO] Skipping bill 1666814 - already processed (1088/2605)
2025-12-01 12:33:45,408 [INFO] Skipping bill 1722011 - already processed (1089/2605)
2025-12-01 12:33:45,408 [INFO] Skipping bill 1724398 - already processed (1090/2605)
2025-12-01 12:33:45,408 [INFO] Skipping bill 1676083 - already processed (1091/2605)
2025-12-01 12:33:45,409 [INFO] Skipping bill 1824011 - already processed (1092/2605)
2025-12-01 12:33:45,409 [INFO] Skipping bill 1824228 - already processed (1093/2605)
2025-12-01 12:33:45,409 [INFO] Skipping bill 1824028 - already processed (1094/2605)
2025-12-01 12:33:45,409 [INFO] Skipping bill 1834441 - already processed (1095/2605)
2025-12-01 12:33:45,409 [INFO] Skipping bill 1908238 - already processed (1096/2605)
2025-12-01 12:33:45,409 [INFO] Skipping bill 1967640 - already processed (1097/2605)
2025-12-01 12:33:45,409 [INFO] Skipping bill 1935448 - already processed (1098/2605)
2025-12-01 12:33:45,410 [INFO] Skipping bill 1987611 - already processed (1099/2605)
2025-12-01 12:33:45,410 [INFO] Skipping bill 1964156 - already processed (1100/2605)
2025-12-01 12:33:45,410 [INFO] Skipping bill 1947221 - already processed (1101/2605)
2025-12-01 12:33:45,410 [INFO] Skipping bill 1943110 - already processed (1102/2605)
2025-12-01 12:33:45,410 [INFO] Skipping bill 1964415 - already processed (1103/2605)
2025-12-01 12:33:45,410 [INFO] Skipping bill 1996731 - already processed (1104/2605)
2025-12-01 12:33:45,410 [INFO] Skipping bill 1944685 - already processed (1105/2605)
2025-12-01 12:33:45,411 [INFO] Skipping bill 1936020 - already processed (1106/2605)
2025-12-01 12:33:45,411 [INFO] Skipping bill 1947285 - already processed (1107/2605)
2025-12-01 12:33:45,411 [INFO] Skipping bill 1949498 - already processed (1108/2605)
2025-12-01 12:33:45,411 [INFO] Skipping bill 1933085 - already processed (1109/2605)
2025-12-01 12:33:45,411 [INFO] Skipping bill 1881403 - already processed (1110/2605)
2025-12-01 12:33:45,411 [INFO] Skipping bill 1878440 - already processed (1111/2605)
2025-12-01 12:33:45,411 [INFO] Skipping bill 1874641 - already processed (1112/2605)
2025-12-01 12:33:45,411 [INFO] Skipping bill 1780447 - already processed (1113/2605)
2025-12-01 12:33:45,411 [INFO] Skipping bill 1829313 - already processed (1114/2605)
2025-12-01 12:33:45,411 [INFO] Skipping bill 1876168 - already processed (1115/2605)
2025-12-01 12:33:45,411 [INFO] Skipping bill 1878357 - already processed (1116/2605)
2025-12-01 12:33:45,411 [INFO] Skipping bill 1801087 - already processed (1117/2605)
2025-12-01 12:33:45,411 [INFO] Skipping bill 1878533 - already processed (1118/2605)
2025-12-01 12:33:45,412 [INFO] Skipping bill 1781971 - already processed (1119/2605)
2025-12-01 12:33:45,412 [INFO] Skipping bill 1836944 - already processed (1120/2605)
2025-12-01 12:33:45,412 [INFO] Skipping bill 1773855 - already processed (1121/2605)
2025-12-01 12:33:45,412 [INFO] Skipping bill 1774758 - already processed (1122/2605)
2025-12-01 12:33:45,412 [INFO] Skipping bill 1779189 - already processed (1123/2605)
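Every failure above is the same `context_length_exceeded` error: `create_detailed_report` passes the full serialized bill into `chain.invoke({"bill_json": bill_json})`, and some bills (566,694 tokens for bill 1690030) far exceed the model's 128,000-token window. A minimal sketch of a pre-flight guard is below. Only the 128k limit and the call shape come from the log; the `truncate_bill_json` helper, the `tiktoken` token counting, and the reserved headroom are assumptions, not code from generate_reports.py.

```python
import json

# Token counting: tiktoken if available (an assumption -- the original
# script may not use it), else a rough ~4-characters-per-token estimate.
try:
    import tiktoken
    _ENC = tiktoken.get_encoding("cl100k_base")

    def count_tokens(text: str) -> int:
        return len(_ENC.encode(text))
except ImportError:
    def count_tokens(text: str) -> int:
        return max(1, len(text) // 4)

MAX_CONTEXT = 128_000   # limit reported by the API in the errors above
RESERVED = 8_000        # assumed headroom for the prompt template and reply


def truncate_bill_json(bill: dict, limit: int = MAX_CONTEXT - RESERVED) -> str:
    """Serialize `bill`, halving its largest string field until it fits."""
    bill = dict(bill)  # shallow copy; never mutate the caller's record
    while True:
        payload = json.dumps(bill)
        if count_tokens(payload) <= limit:
            return payload
        # The full bill text is typically the dominant field; halve it.
        biggest = max(
            (k for k, v in bill.items() if isinstance(v, str) and v),
            key=lambda k: len(bill[k]),
            default=None,
        )
        if biggest is None:
            # No string field left to shrink: hard character cut.
            return payload[: limit * 4]
        bill[biggest] = bill[biggest][: len(bill[biggest]) // 2]
```

Calling this before `chain.invoke` would turn the 400 errors above into degraded-but-successful reports; for bills several times over the limit, chunked map-reduce summarization would preserve more content than simple truncation.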
2025-12-01 12:33:45,412 [INFO] Skipping bill 1780403 - already processed (1124/2605)
2025-12-01 12:33:45,412 [INFO] Skipping bill 1882902 - already processed (1125/2605)
2025-12-01 12:33:45,412 [INFO] Skipping bill 1761023 - already processed (1126/2605)
2025-12-01 12:33:45,412 [INFO] Skipping bill 1763282 - already processed (1127/2605)
2025-12-01 12:33:45,412 [INFO] Skipping bill 1756406 - already processed (1128/2605)
2025-12-01 12:33:45,412 [INFO] Skipping bill 1721336 - already processed (1129/2605)
2025-12-01 12:33:45,412 [INFO] Skipping bill 1865663 - already processed (1130/2605)
2025-12-01 12:33:45,412 [INFO] Skipping bill 1884682 - already processed (1131/2605)
2025-12-01 12:33:45,412 [INFO] Skipping bill 1879124 - already processed (1132/2605)
2025-12-01 12:33:45,412 [INFO] Skipping bill 1813023 - already processed (1133/2605)
2025-12-01 12:33:45,412 [INFO] Skipping bill 1780572 - already processed (1134/2605)
2025-12-01 12:33:45,413 [INFO] Skipping bill 1796023 - already processed (1135/2605)
2025-12-01 12:33:45,413 [INFO] Skipping bill 1796213 - already processed (1136/2605)
2025-12-01 12:33:45,413 [INFO] Skipping bill 1841005 - already processed (1137/2605)
2025-12-01 12:33:45,413 [INFO] Skipping bill 1861287 - already processed (1138/2605)
2025-12-01 12:33:45,413 [INFO] Skipping bill 1878752 - already processed (1139/2605)
2025-12-01 12:33:45,413 [INFO] Skipping bill 1813101 - already processed (1140/2605)
2025-12-01 12:33:45,413 [INFO] Skipping bill 1768635 - already processed (1141/2605)
2025-12-01 12:33:45,413 [INFO] Skipping bill 1767924 - already processed (1142/2605)
2025-12-01 12:33:45,413 [INFO] Skipping bill 1641754 - already processed (1143/2605)
2025-12-01 12:33:45,413 [INFO] Skipping bill 1882889 - already processed (1144/2605)
2025-12-01 12:33:45,413 [INFO] Skipping bill 1729291 - already processed (1145/2605)
2025-12-01 12:33:45,413 [INFO] Skipping bill 1773906 - already processed (1146/2605)
2025-12-01 12:33:45,413 [INFO] Skipping bill 1839957 - already processed (1147/2605)
2025-12-01 12:33:45,413 [INFO] Skipping bill 1843965 - already processed (1148/2605)
2025-12-01 12:33:45,413 [INFO] Skipping bill 1879710 - already processed (1149/2605)
2025-12-01 12:33:45,413 [INFO] Skipping bill 1763606 - already processed (1150/2605)
2025-12-01 12:33:45,413 [INFO] Skipping bill 1780432 - already processed (1151/2605)
2025-12-01 12:33:45,414 [INFO] Skipping bill 1812765 - already processed (1152/2605)
2025-12-01 12:33:45,414 [INFO] Skipping bill 1836858 - already processed (1153/2605)
2025-12-01 12:33:45,414 [INFO] Skipping bill 1864293 - already processed (1154/2605)
2025-12-01 12:33:45,414 [INFO] Skipping bill 1770114 - already processed (1155/2605)
2025-12-01 12:33:45,414 [INFO] Skipping bill 1733127 - already processed (1156/2605)
2025-12-01 12:33:45,414 [INFO] Skipping bill 1762026 - already processed (1157/2605)
2025-12-01 12:33:45,414 [INFO] Skipping bill 1829537 - already processed (1158/2605)
2025-12-01 12:33:45,414 [INFO] Skipping bill 1878142 - already processed (1159/2605)
2025-12-01 12:33:45,414 [INFO] Skipping bill 1880765 - already processed (1160/2605)
2025-12-01 12:33:45,414 [INFO] Skipping bill 1762041 - already processed (1161/2605)
2025-12-01 12:33:45,414 [INFO] Skipping bill 1646230 - already processed (1162/2605)
2025-12-01 12:33:45,414 [INFO] Skipping bill 1762213 - already processed (1163/2605)
2025-12-01 12:33:45,414 [INFO] Skipping bill 1779393 - already processed (1164/2605)
2025-12-01 12:33:45,414 [INFO] Skipping bill 1878544 - already processed (1165/2605)
2025-12-01 12:33:45,414 [INFO] Skipping bill 1780459 - already processed (1166/2605)
2025-12-01 12:33:45,414 [INFO] Skipping bill 1781963 - already processed (1167/2605)
2025-12-01 12:33:45,414 [INFO] Skipping bill 1758293 - already processed (1168/2605)
2025-12-01 12:33:45,414 [INFO] Skipping bill 1768495 - already processed (1169/2605)
2025-12-01 12:33:45,414 [INFO] Skipping bill 1773860 - already processed (1170/2605)
2025-12-01 12:33:45,414 [INFO] Skipping bill 1864226 - already processed (1171/2605)
2025-12-01 12:33:45,414 [INFO] Skipping bill 1878400 - already processed (1172/2605)
2025-12-01 12:33:45,414 [INFO] Skipping bill 1879652 - already processed (1173/2605)
2025-12-01 12:33:45,414 [INFO] Skipping bill 1865798 - already processed (1174/2605)
2025-12-01 12:33:45,414 [INFO] Skipping bill 1862795 - already processed (1175/2605)
2025-12-01 12:33:45,414 [INFO] Skipping bill 1710243 - already processed (1176/2605)
2025-12-01 12:33:45,414 [INFO] Skipping bill 1818495 - already processed (1177/2605)
2025-12-01 12:33:45,414 [INFO] Skipping bill 1775864 - already processed (1178/2605)
2025-12-01 12:33:45,414 [INFO] Skipping bill 1856196 - already processed (1179/2605)
2025-12-01 12:33:45,414 [INFO] Skipping bill 1791835 - already processed (1180/2605)
2025-12-01 12:33:45,414 [INFO] Skipping bill 1658709 - already processed (1181/2605)
2025-12-01 12:33:45,414 [INFO] Skipping bill 1695187 - already processed (1182/2605)
2025-12-01 12:33:45,414 [INFO] Processing 1183/2605: Bill ID 1818780
2025-12-01 12:33:45,917 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:33:45,918 [ERROR] Failed to generate report for bill 1818780: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 137401 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:33:46,931 [INFO] Processing 1184/2605: Bill ID 1818766
2025-12-01 12:33:47,553 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:33:47,555 [ERROR] Failed to generate report for bill 1818766: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 137403 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:33:48,568 [INFO] Skipping bill 1752559 - already processed (1185/2605)
2025-12-01 12:33:48,569 [INFO] Skipping bill 1882942 - already processed (1186/2605)
2025-12-01 12:33:48,569 [INFO] Skipping bill 1766908 - already processed (1187/2605)
2025-12-01 12:33:48,569 [INFO] Skipping bill 1691064 - already processed (1188/2605)
2025-12-01 12:33:48,569 [INFO] Processing 1189/2605: Bill ID 1690030
2025-12-01 12:33:50,216 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:33:50,219 [ERROR] Failed to generate report for bill 1690030: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 566694 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:33:51,235 [INFO] Processing 1190/2605: Bill ID 1690727
2025-12-01 12:33:52,675 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:33:52,677 [ERROR] Failed to generate report for bill 1690727: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 566696 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 566696 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 12:33:52,735 [INFO] Saved 2596 reports to data/bill_reports.json 2025-12-01 12:33:52,735 [INFO] Progress: 1190/2605 - Processed: 0, Skipped: 1139, Errors: 51 2025-12-01 12:33:53,740 [INFO] Processing 1191/2605: Bill ID 1875409 2025-12-01 12:33:56,977 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 12:33:56,978 [ERROR] Failed to generate report for bill 1875409: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1351641 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1351641 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 12:33:57,987 [INFO] Processing 1192/2605: Bill ID 1835820 2025-12-01 12:34:01,587 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 12:34:01,589 [ERROR] Failed to generate report for bill 1835820: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1351620 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1351620 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 12:34:02,600 [INFO] Processing 1193/2605: Bill ID 1818459 2025-12-01 12:34:05,476 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 12:34:05,479 [ERROR] Failed to generate report for bill 1818459: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1029309 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1029309 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 12:34:06,497 [INFO] Skipping bill 2009915 - already processed (1194/2605) 2025-12-01 12:34:06,497 [INFO] Skipping bill 1917775 - already processed (1195/2605) 2025-12-01 12:34:06,497 [INFO] Skipping bill 1902981 - already processed (1196/2605) 2025-12-01 12:34:06,497 [INFO] Skipping bill 1908626 - already processed (1197/2605) 2025-12-01 12:34:06,497 [INFO] Skipping bill 1903647 - already processed (1198/2605) 2025-12-01 12:34:06,497 [INFO] Skipping bill 1993863 - already processed (1199/2605) 2025-12-01 12:34:06,497 [INFO] Skipping bill 2015656 - already processed (1200/2605) 2025-12-01 12:34:06,497 [INFO] Skipping bill 1909120 - already processed (1201/2605) 2025-12-01 12:34:06,497 [INFO] Skipping bill 2032707 - already processed (1202/2605) 2025-12-01 12:34:06,497 [INFO] Skipping bill 2030838 - already processed (1203/2605) 2025-12-01 12:34:06,497 [INFO] Skipping bill 2033110 - already processed (1204/2605) 2025-12-01 12:34:06,497 [INFO] Skipping bill 1992712 - already processed (1205/2605) 2025-12-01 12:34:06,497 [INFO] Skipping bill 
2010112 - already processed (1206/2605) 2025-12-01 12:34:06,497 [INFO] Skipping bill 2035218 - already processed (1207/2605) 2025-12-01 12:34:06,497 [INFO] Skipping bill 1970759 - already processed (1208/2605) 2025-12-01 12:34:06,497 [INFO] Skipping bill 1917262 - already processed (1209/2605) 2025-12-01 12:34:06,498 [INFO] Skipping bill 2015645 - already processed (1210/2605) 2025-12-01 12:34:06,498 [INFO] Skipping bill 1941920 - already processed (1211/2605) 2025-12-01 12:34:06,498 [INFO] Skipping bill 2041695 - already processed (1212/2605) 2025-12-01 12:34:06,498 [INFO] Skipping bill 2038940 - already processed (1213/2605) 2025-12-01 12:34:06,498 [INFO] Skipping bill 2043998 - already processed (1214/2605) 2025-12-01 12:34:06,498 [INFO] Skipping bill 1903496 - already processed (1215/2605) 2025-12-01 12:34:06,498 [INFO] Skipping bill 1942114 - already processed (1216/2605) 2025-12-01 12:34:06,498 [INFO] Skipping bill 1948978 - already processed (1217/2605) 2025-12-01 12:34:06,498 [INFO] Skipping bill 2025948 - already processed (1218/2605) 2025-12-01 12:34:06,498 [INFO] Skipping bill 2030449 - already processed (1219/2605) 2025-12-01 12:34:06,498 [INFO] Skipping bill 2012463 - already processed (1220/2605) 2025-12-01 12:34:06,498 [INFO] Skipping bill 2036382 - already processed (1221/2605) 2025-12-01 12:34:06,498 [INFO] Skipping bill 1901571 - already processed (1222/2605) 2025-12-01 12:34:06,498 [INFO] Skipping bill 1902589 - already processed (1223/2605) 2025-12-01 12:34:06,498 [INFO] Skipping bill 2045075 - already processed (1224/2605) 2025-12-01 12:34:06,498 [INFO] Skipping bill 2042397 - already processed (1225/2605) 2025-12-01 12:34:06,498 [INFO] Skipping bill 2005892 - already processed (1226/2605) 2025-12-01 12:34:06,498 [INFO] Skipping bill 1995988 - already processed (1227/2605) 2025-12-01 12:34:06,498 [INFO] Skipping bill 1941987 - already processed (1228/2605) 2025-12-01 12:34:06,498 [INFO] Skipping bill 2051432 - already processed (1229/2605) 
2025-12-01 12:34:06,498 [INFO] Skipping bill 2030765 - already processed (1230/2605) 2025-12-01 12:34:06,498 [INFO] Skipping bill 1900450 - already processed (1231/2605) 2025-12-01 12:34:06,498 [INFO] Skipping bill 2032658 - already processed (1232/2605) 2025-12-01 12:34:06,498 [INFO] Skipping bill 1934862 - already processed (1233/2605) 2025-12-01 12:34:06,498 [INFO] Skipping bill 1954914 - already processed (1234/2605) 2025-12-01 12:34:06,498 [INFO] Skipping bill 1908970 - already processed (1235/2605) 2025-12-01 12:34:06,498 [INFO] Skipping bill 2046810 - already processed (1236/2605) 2025-12-01 12:34:06,498 [INFO] Skipping bill 1911503 - already processed (1237/2605) 2025-12-01 12:34:06,498 [INFO] Skipping bill 1917449 - already processed (1238/2605) 2025-12-01 12:34:06,499 [INFO] Skipping bill 2012421 - already processed (1239/2605) 2025-12-01 12:34:06,504 [INFO] Skipping bill 2036409 - already processed (1240/2605) 2025-12-01 12:34:06,504 [INFO] Skipping bill 1930912 - already processed (1241/2605) 2025-12-01 12:34:06,504 [INFO] Skipping bill 2015571 - already processed (1242/2605) 2025-12-01 12:34:06,504 [INFO] Skipping bill 1991849 - already processed (1243/2605) 2025-12-01 12:34:06,504 [INFO] Skipping bill 1909237 - already processed (1244/2605) 2025-12-01 12:34:06,504 [INFO] Skipping bill 1907396 - already processed (1245/2605) 2025-12-01 12:34:06,504 [INFO] Skipping bill 2032681 - already processed (1246/2605) 2025-12-01 12:34:06,504 [INFO] Skipping bill 2031449 - already processed (1247/2605) 2025-12-01 12:34:06,504 [INFO] Skipping bill 2036417 - already processed (1248/2605) 2025-12-01 12:34:06,504 [INFO] Skipping bill 2010242 - already processed (1249/2605) 2025-12-01 12:34:06,505 [INFO] Skipping bill 1902485 - already processed (1250/2605) 2025-12-01 12:34:06,505 [INFO] Skipping bill 2044029 - already processed (1251/2605) 2025-12-01 12:34:06,505 [INFO] Skipping bill 2039479 - already processed (1252/2605) 2025-12-01 12:34:06,505 [INFO] Skipping bill 
1993679 - already processed (1253/2605) 2025-12-01 12:34:06,505 [INFO] Skipping bill 1927014 - already processed (1254/2605) 2025-12-01 12:34:06,505 [INFO] Processing 1255/2605: Bill ID 2053531 2025-12-01 12:34:15,409 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK" 2025-12-01 12:34:15,420 [INFO] Skipping bill 2012390 - already processed (1256/2605) 2025-12-01 12:34:15,421 [INFO] Skipping bill 2051443 - already processed (1257/2605) 2025-12-01 12:34:15,421 [INFO] Skipping bill 1967476 - already processed (1258/2605) 2025-12-01 12:34:15,421 [INFO] Skipping bill 2039584 - already processed (1259/2605) 2025-12-01 12:34:15,421 [INFO] Skipping bill 1941925 - already processed (1260/2605) 2025-12-01 12:34:15,421 [INFO] Skipping bill 2039602 - already processed (1261/2605) 2025-12-01 12:34:15,421 [INFO] Skipping bill 2021091 - already processed (1262/2605) 2025-12-01 12:34:15,421 [INFO] Processing 1263/2605: Bill ID 2053730 2025-12-01 12:34:26,594 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK" 2025-12-01 12:34:26,604 [INFO] Skipping bill 1993748 - already processed (1264/2605) 2025-12-01 12:34:26,604 [INFO] Skipping bill 1907408 - already processed (1265/2605) 2025-12-01 12:34:26,604 [INFO] Skipping bill 2043429 - already processed (1266/2605) 2025-12-01 12:34:26,604 [INFO] Skipping bill 2036445 - already processed (1267/2605) 2025-12-01 12:34:26,604 [INFO] Skipping bill 1948575 - already processed (1268/2605) 2025-12-01 12:34:26,604 [INFO] Skipping bill 2020539 - already processed (1269/2605) 2025-12-01 12:34:26,605 [INFO] Skipping bill 1941981 - already processed (1270/2605) 2025-12-01 12:34:26,605 [INFO] Skipping bill 1985057 - already processed (1271/2605) 2025-12-01 12:34:26,605 [INFO] Skipping bill 2012554 - already processed (1272/2605) 2025-12-01 12:34:26,605 [INFO] Skipping bill 1900469 - already processed (1273/2605) 2025-12-01 12:34:26,605 [INFO] Skipping bill 1949091 - already 
processed (1274/2605) 2025-12-01 12:34:26,605 [INFO] Skipping bill 1903302 - already processed (1275/2605) 2025-12-01 12:34:26,605 [INFO] Skipping bill 2031820 - already processed (1276/2605) 2025-12-01 12:34:26,605 [INFO] Skipping bill 1986509 - already processed (1277/2605) 2025-12-01 12:34:26,605 [INFO] Skipping bill 1992147 - already processed (1278/2605) 2025-12-01 12:34:26,605 [INFO] Skipping bill 1908565 - already processed (1279/2605) 2025-12-01 12:34:26,605 [INFO] Skipping bill 2018195 - already processed (1280/2605) 2025-12-01 12:34:26,605 [INFO] Skipping bill 1948655 - already processed (1281/2605) 2025-12-01 12:34:26,605 [INFO] Skipping bill 1926957 - already processed (1282/2605) 2025-12-01 12:34:26,605 [INFO] Skipping bill 2007650 - already processed (1283/2605) 2025-12-01 12:34:26,605 [INFO] Skipping bill 1938062 - already processed (1284/2605) 2025-12-01 12:34:26,605 [INFO] Skipping bill 1909167 - already processed (1285/2605) 2025-12-01 12:34:26,605 [INFO] Skipping bill 1910683 - already processed (1286/2605) 2025-12-01 12:34:26,605 [INFO] Skipping bill 1918276 - already processed (1287/2605) 2025-12-01 12:34:26,606 [INFO] Skipping bill 1942634 - already processed (1288/2605) 2025-12-01 12:34:26,606 [INFO] Skipping bill 1947885 - already processed (1289/2605) 2025-12-01 12:34:26,606 [INFO] Skipping bill 2034828 - already processed (1290/2605) 2025-12-01 12:34:26,606 [INFO] Skipping bill 2035534 - already processed (1291/2605) 2025-12-01 12:34:26,606 [INFO] Skipping bill 1937370 - already processed (1292/2605) 2025-12-01 12:34:26,606 [INFO] Skipping bill 2036328 - already processed (1293/2605) 2025-12-01 12:34:26,606 [INFO] Skipping bill 1940048 - already processed (1294/2605) 2025-12-01 12:34:26,606 [INFO] Skipping bill 1990212 - already processed (1295/2605) 2025-12-01 12:34:26,606 [INFO] Skipping bill 1995017 - already processed (1296/2605) 2025-12-01 12:34:26,606 [INFO] Skipping bill 1937257 - already processed (1297/2605) 2025-12-01 
12:34:26,606 [INFO] Skipping bill 1900853 - already processed (1298/2605)
2025-12-01 12:34:26,606 [INFO] Skipping bill 1947971 - already processed (1299/2605)
2025-12-01 12:34:26,606 [INFO] Skipping bill 1920984 - already processed (1300/2605)
2025-12-01 12:34:26,606 [INFO] Skipping bill 1902725 - already processed (1301/2605)
2025-12-01 12:34:26,606 [INFO] Skipping bill 1964016 - already processed (1302/2605)
2025-12-01 12:34:26,606 [INFO] Processing 1303/2605: Bill ID 1934576
2025-12-01 12:34:27,108 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:34:27,109 [ERROR] Failed to generate report for bill 1934576: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 132147 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
    ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
    ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 132147 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:34:28,117 [INFO] Skipping bill 1898800 - already processed (1304/2605)
2025-12-01 12:34:28,118 [INFO] Skipping bill 1971511 - already processed (1305/2605)
2025-12-01 12:34:28,118 [INFO] Processing 1306/2605: Bill ID 1935197
2025-12-01 12:34:28,626 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:34:28,627 [ERROR] Failed to generate report for bill 1935197: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 142845 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:34:29,636 [INFO] Processing 1307/2605: Bill ID 1935040
2025-12-01 12:34:30,152 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:34:30,153 [ERROR] Failed to generate report for bill 1935040: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 142844 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:34:31,162 [INFO] Skipping bill 1948521 - already processed (1308/2605)
2025-12-01 12:34:31,163 [INFO] Skipping bill 1977652 - already processed (1309/2605)
2025-12-01 12:34:31,163 [INFO] Processing 1310/2605: Bill ID 1934805
2025-12-01 12:34:31,594 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:34:31,595 [ERROR] Failed to generate report for bill 1934805: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 132143 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:34:31,641 [INFO] Saved 2598 reports to data/bill_reports.json
2025-12-01 12:34:31,642 [INFO] Progress: 1310/2605 - Processed: 2, Skipped: 1250, Errors: 58
2025-12-01 12:34:32,647 [INFO] Skipping bill 1934970 - already processed (1311/2605)
2025-12-01 12:34:32,647 [INFO] Skipping bill 1934701 - already processed (1312/2605)
2025-12-01 12:34:32,647 [INFO] Skipping bill 1942260 - already processed (1313/2605)
2025-12-01 12:34:32,647 [INFO] Skipping bill 1917391 - already processed (1314/2605)
2025-12-01 12:34:32,648 [INFO] Processing 1315/2605: Bill ID 1935190
2025-12-01 12:34:35,687 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:34:35,692 [ERROR] Failed to generate report for bill 1935190: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1143342 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:34:36,703 [INFO] Processing 1316/2605: Bill ID 1934636
2025-12-01 12:34:38,560 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:34:38,562 [ERROR] Failed to generate report for bill 1934636: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 671567 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:34:39,569 [INFO] Processing 1317/2605: Bill ID 1935223
2025-12-01 12:34:41,230 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:34:41,232 [ERROR] Failed to generate report for bill 1935223: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 671570 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:34:42,238 [INFO] Processing 1318/2605: Bill ID 1934824
2025-12-01 12:34:44,968 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:34:44,970 [ERROR] Failed to generate report for bill 1934824: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1143344 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:34:45,981 [INFO] Processing 1319/2605: Bill ID 2052596
2025-12-01 12:34:49,990 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:34:49,991 [ERROR] Failed to generate report for bill 2052596: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1446920 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  ...<remaining langchain_core / langchain_openai / openai frames identical to the first traceback above>...
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1446920 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:34:51,000 [INFO] Skipping bill 1879932 - already processed (1320/2605)
2025-12-01 12:34:51,002 [INFO] Skipping bill 1875738 - already processed (1321/2605)
2025-12-01 12:34:51,003 [INFO] Skipping bill 1875815 - already processed (1322/2605)
2025-12-01 12:34:51,003 [INFO] Skipping bill 1701253 - already processed (1323/2605)
2025-12-01 12:34:51,003 [INFO] Skipping bill 1875615 - already processed (1324/2605)
2025-12-01 12:34:51,003 [INFO] Skipping bill 1754315 - already processed (1325/2605)
2025-12-01 12:34:51,004 [INFO] Skipping bill 1751005 - already processed (1326/2605)
2025-12-01 12:34:51,004 [INFO] Skipping bill 1875642 - already processed (1327/2605)
2025-12-01 12:34:51,004 [INFO] Skipping bill 1753811 - already processed (1328/2605)
2025-12-01 12:34:51,004 [INFO] Skipping bill 1752050 - already processed (1329/2605)
2025-12-01 12:34:51,005 [INFO] Skipping bill 1704591 - already processed (1330/2605)
2025-12-01 12:34:51,005 [INFO] Skipping bill 1748551 - already processed (1331/2605)
2025-12-01 12:34:51,005 [INFO] Skipping bill 1725321 - already processed (1332/2605)
2025-12-01 12:34:51,005 [INFO] Skipping bill 1725195 - already processed (1333/2605)
2025-12-01 12:34:51,005 [INFO] Skipping bill 2014434 - already processed (1334/2605)
2025-12-01 12:34:51,005 [INFO] Skipping bill 2014277 - already processed (1335/2605)
2025-12-01 12:34:51,005 [INFO] Skipping bill 2000124 - already processed (1336/2605)
2025-12-01 12:34:51,005 [INFO] Skipping bill 2022736 - already processed (1337/2605)
2025-12-01 12:34:51,005 [INFO] Skipping bill 2022881 - already processed (1338/2605)
2025-12-01 12:34:51,005 [INFO] Skipping bill 2014322 - already processed (1339/2605)
2025-12-01 12:34:51,005 [INFO] Skipping bill 2014068 - already processed (1340/2605)
2025-12-01 12:34:51,005 [INFO] Skipping bill 2005730 - already processed (1341/2605)
2025-12-01 12:34:51,005 [INFO] Skipping bill 2014594 - already processed (1342/2605)
2025-12-01 12:34:51,006 [INFO] Skipping bill 2013131 - already processed (1343/2605)
2025-12-01 12:34:51,006 [INFO] Skipping bill 2022220 - already processed (1344/2605)
2025-12-01 12:34:51,006 [INFO] Skipping bill 2008986 - already processed (1345/2605)
2025-12-01 12:34:51,006 [INFO] Skipping bill 2013796 - already processed (1346/2605)
2025-12-01 12:34:51,006 [INFO] Skipping bill 2014312 - already processed (1347/2605)
2025-12-01 12:34:51,006 [INFO] Skipping bill 2013903 - already processed (1348/2605)
2025-12-01 12:34:51,006 [INFO] Skipping bill 2013936 - already processed (1349/2605)
2025-12-01 12:34:51,006 [INFO] Skipping bill 2013868 - already processed (1350/2605)
2025-12-01 12:34:51,006 [INFO] Skipping bill 2014024 - already processed (1351/2605)
2025-12-01 12:34:51,006 [INFO] Skipping bill 2014377 - already processed (1352/2605)
2025-12-01 12:34:51,006 [INFO] Skipping bill 2017695 - already processed (1353/2605)
2025-12-01 12:34:51,006 [INFO] Skipping bill 2018632 - already processed (1354/2605)
2025-12-01 12:34:51,007 [INFO] Skipping bill 2022666 - already processed (1355/2605)
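Every failure in this log is the same 400: the serialized bill exceeds the model's 128,000-token context, sometimes by more than an order of magnitude. A pre-flight size check before `chain.invoke` would turn these API round-trips into a local decision. A minimal stdlib-only sketch; the names `fits_context` and `clip_bill_json`, the response budget, and the 4-chars-per-token heuristic are all illustrative assumptions (a real tokenizer such as tiktoken would be more accurate), not code from `generate_reports.py`:

```python
import json

MAX_CONTEXT = 128_000    # model limit reported by the API in the errors above
RESPONSE_BUDGET = 4_000  # room reserved for the completion (assumed value)
CHARS_PER_TOKEN = 4      # rough heuristic; a real tokenizer is more accurate

def fits_context(bill: dict) -> bool:
    """Cheap pre-flight check: would this bill's JSON roughly fit the prompt budget?"""
    approx_tokens = len(json.dumps(bill)) // CHARS_PER_TOKEN
    return approx_tokens <= MAX_CONTEXT - RESPONSE_BUDGET

def clip_bill_json(bill: dict) -> str:
    """Serialize a bill and crudely clip it to the budget.

    Dropping bulky fields (e.g. full bill text) before clipping would
    degrade more gracefully than a hard character cut.
    """
    budget_chars = (MAX_CONTEXT - RESPONSE_BUDGET) * CHARS_PER_TOKEN
    return json.dumps(bill)[:budget_chars]
```

With a check like this, an oversized bill (the worst case above weighs in around 2.1M tokens) could be clipped, chunked, or routed to a map-reduce summarization pass instead of failing the request.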
2025-12-01 12:34:51,007 [INFO] Skipping bill 2022828 - already processed (1356/2605)
2025-12-01 12:34:51,007 [INFO] Skipping bill 2015551 - already processed (1357/2605)
2025-12-01 12:34:51,007 [INFO] Skipping bill 2009244 - already processed (1358/2605)
2025-12-01 12:34:51,007 [INFO] Skipping bill 1969116 - already processed (1359/2605)
2025-12-01 12:34:51,007 [INFO] Skipping bill 2009761 - already processed (1360/2605)
2025-12-01 12:34:51,007 [INFO] Processing 1361/2605: Bill ID 2012916
2025-12-01 12:34:51,416 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:34:51,417 [ERROR] Failed to generate report for bill 2012916: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 131894 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  ...<remaining langchain_core / langchain_openai / openai frames identical to the first traceback above>...
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 131894 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:34:52,424 [INFO] Skipping bill 1996111 - already processed (1362/2605)
2025-12-01 12:34:52,424 [INFO] Skipping bill 1656324 - already processed (1363/2605)
2025-12-01 12:34:52,425 [INFO] Skipping bill 1640560 - already processed (1364/2605)
2025-12-01 12:34:52,425 [INFO] Skipping bill 1644790 - already processed (1365/2605)
2025-12-01 12:34:52,425 [INFO] Skipping bill 1908973 - already processed (1366/2605)
2025-12-01 12:34:52,425 [INFO] Skipping bill 1930471 - already processed (1367/2605)
2025-12-01 12:34:52,425 [INFO] Skipping bill 1916131 - already processed (1368/2605)
2025-12-01 12:34:52,425 [INFO] Skipping bill 1916897 - already processed (1369/2605)
2025-12-01 12:34:52,425 [INFO] Skipping bill 1930219 - already processed (1370/2605)
2025-12-01 12:34:52,425 [INFO] Skipping bill 1916725 - already processed (1371/2605)
2025-12-01 12:34:52,425 [INFO] Skipping bill 1916697 - already processed (1372/2605)
2025-12-01 12:34:52,425 [INFO] Skipping bill 1921549
- already processed (1373/2605)
2025-12-01 12:34:52,425 [INFO] Skipping bill 1916032 - already processed (1374/2605)
2025-12-01 12:34:52,425 [INFO] Skipping bill 1915939 - already processed (1375/2605)
2025-12-01 12:34:52,426 [INFO] Skipping bill 1899315 - already processed (1376/2605)
2025-12-01 12:34:52,426 [INFO] Skipping bill 1930747 - already processed (1377/2605)
2025-12-01 12:34:52,426 [INFO] Skipping bill 1898936 - already processed (1378/2605)
2025-12-01 12:34:52,426 [INFO] Skipping bill 1828241 - already processed (1379/2605)
2025-12-01 12:34:52,427 [INFO] Skipping bill 1784887 - already processed (1380/2605)
2025-12-01 12:34:52,427 [INFO] Processing 1381/2605: Bill ID 1710984
2025-12-01 12:34:57,811 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:34:57,812 [ERROR] Failed to generate report for bill 1710984: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 2157293 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  ...<remaining langchain_core / langchain_openai / openai frames identical to the first traceback above>...
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 2157293 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:34:58,822 [INFO] Processing 1382/2605: Bill ID 1710996
2025-12-01 12:35:01,322 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:35:01,327 [ERROR] Failed to generate report for bill 1710996: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1053567 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  ...<remaining langchain_core / langchain_openai / openai frames identical to the first traceback above>...
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1053567 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:35:02,338 [INFO] Processing 1383/2605: Bill ID 1659671
2025-12-01 12:35:05,061 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:35:05,064 [ERROR] Failed to generate report for bill 1659671: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1053812 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  ...<remaining langchain_core / langchain_openai / openai frames identical to the first traceback above>...
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1053812 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:35:06,076 [INFO] Skipping bill 2046561 - already processed (1384/2605)
2025-12-01 12:35:06,076 [INFO] Skipping bill 2018937 - already processed (1385/2605)
2025-12-01 12:35:06,076 [INFO] Skipping bill 2046538 - already processed (1386/2605)
2025-12-01 12:35:06,077 [INFO] Skipping bill 2038933 - already processed (1387/2605)
2025-12-01 12:35:06,077 [INFO] Skipping bill 2019064 - already processed (1388/2605)
2025-12-01 12:35:06,077 [INFO] Skipping bill 2051853 - already processed (1389/2605)
2025-12-01 12:35:06,078 [INFO] Skipping bill 1973495 - already processed (1390/2605)
2025-12-01 12:35:06,079 [INFO] Skipping bill 2044900 - already processed (1391/2605)
2025-12-01 12:35:06,079 [INFO] Skipping bill 2036911 - already processed (1392/2605)
2025-12-01 12:35:06,079 [INFO] Skipping bill 1956347 - already processed (1393/2605)
2025-12-01 12:35:06,079 [INFO] Skipping bill 2015680 - already processed (1394/2605)
2025-12-01 12:35:06,079 [INFO] Skipping bill 2035837 - already processed (1395/2605)
2025-12-01 12:35:06,079 [INFO] Skipping bill
2052361 - already processed (1396/2605)
2025-12-01 12:35:06,079 [INFO] Skipping bill 2053186 - already processed (1397/2605)
2025-12-01 12:35:06,079 [INFO] Skipping bill 1956501 - already processed (1398/2605)
2025-12-01 12:35:06,079 [INFO] Processing 1399/2605: Bill ID 1966320
2025-12-01 12:35:10,379 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:35:10,382 [ERROR] Failed to generate report for bill 1966320: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1949605 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  ...<remaining langchain_core / langchain_openai / openai frames identical to the first traceback above>...
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1949605 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:35:11,388 [INFO] Processing 1400/2605: Bill ID 2044413
2025-12-01 12:35:12,243 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:35:12,245 [ERROR] Failed to generate report for bill 2044413: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 281182 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 281182 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 12:35:12,292 [INFO] Saved 2598 reports to data/bill_reports.json 2025-12-01 12:35:12,292 [INFO] Progress: 1400/2605 - Processed: 2, Skipped: 1329, Errors: 69 2025-12-01 12:35:13,297 [INFO] Processing 1401/2605: Bill ID 2031116 2025-12-01 12:35:14,151 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 12:35:14,153 [ERROR] Failed to generate report for bill 2031116: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 344621 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 344621 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 12:35:15,162 [INFO] Skipping bill 1820171 - already processed (1402/2605) 2025-12-01 12:35:15,162 [INFO] Skipping bill 1820684 - already processed (1403/2605) 2025-12-01 12:35:15,163 [INFO] Skipping bill 1820075 - already processed (1404/2605) 2025-12-01 12:35:15,163 [INFO] Skipping bill 1820478 - already processed (1405/2605) 2025-12-01 12:35:15,163 [INFO] Skipping bill 1820697 - already processed (1406/2605) 2025-12-01 12:35:15,164 [INFO] Skipping bill 1821348 - already processed (1407/2605) 2025-12-01 12:35:15,164 [INFO] Skipping bill 1819421 - already processed (1408/2605) 2025-12-01 12:35:15,164 [INFO] Skipping bill 1820795 - already processed (1409/2605) 2025-12-01 12:35:15,164 [INFO] Skipping bill 1814318 - already processed (1410/2605) 2025-12-01 12:35:15,164 [INFO] Skipping bill 1814441 - already processed (1411/2605) 2025-12-01 12:35:15,165 [INFO] Skipping bill 1791289 - already processed (1412/2605) 2025-12-01 12:35:15,165 [INFO] Skipping bill 1789468 - already processed (1413/2605) 2025-12-01 12:35:15,165 [INFO] Skipping bill 
1924199 - already processed (1414/2605) 2025-12-01 12:35:15,165 [INFO] Skipping bill 1920208 - already processed (1415/2605) 2025-12-01 12:35:15,165 [INFO] Skipping bill 1920320 - already processed (1416/2605) 2025-12-01 12:35:15,165 [INFO] Skipping bill 1923586 - already processed (1417/2605) 2025-12-01 12:35:15,165 [INFO] Skipping bill 1918327 - already processed (1418/2605) 2025-12-01 12:35:15,165 [INFO] Skipping bill 1922702 - already processed (1419/2605) 2025-12-01 12:35:15,165 [INFO] Skipping bill 1923122 - already processed (1420/2605) 2025-12-01 12:35:15,165 [INFO] Skipping bill 1924269 - already processed (1421/2605) 2025-12-01 12:35:15,165 [INFO] Skipping bill 1925220 - already processed (1422/2605) 2025-12-01 12:35:15,165 [INFO] Skipping bill 1924640 - already processed (1423/2605) 2025-12-01 12:35:15,166 [INFO] Skipping bill 1924912 - already processed (1424/2605) 2025-12-01 12:35:15,166 [INFO] Skipping bill 1900252 - already processed (1425/2605) 2025-12-01 12:35:15,166 [INFO] Skipping bill 2018241 - already processed (1426/2605) 2025-12-01 12:35:15,166 [INFO] Skipping bill 1920876 - already processed (1427/2605) 2025-12-01 12:35:15,166 [INFO] Skipping bill 1920720 - already processed (1428/2605) 2025-12-01 12:35:15,166 [INFO] Skipping bill 1925546 - already processed (1429/2605) 2025-12-01 12:35:15,166 [INFO] Skipping bill 1903378 - already processed (1430/2605) 2025-12-01 12:35:15,166 [INFO] Skipping bill 1921990 - already processed (1431/2605) 2025-12-01 12:35:15,166 [INFO] Skipping bill 1922805 - already processed (1432/2605) 2025-12-01 12:35:15,166 [INFO] Skipping bill 1922842 - already processed (1433/2605) 2025-12-01 12:35:15,166 [INFO] Skipping bill 1836006 - already processed (1434/2605) 2025-12-01 12:35:15,166 [INFO] Skipping bill 1836109 - already processed (1435/2605) 2025-12-01 12:35:15,166 [INFO] Skipping bill 1843504 - already processed (1436/2605) 2025-12-01 12:35:15,166 [INFO] Skipping bill 1973003 - already processed (1437/2605) 
2025-12-01 12:35:15,166 [INFO] Skipping bill 2009609 - already processed (1438/2605)
2025-12-01 12:35:15,166 [INFO] Skipping bill 1986214 - already processed (1439/2605)
2025-12-01 12:35:15,167 [INFO] Skipping bill 1912749 - already processed (1440/2605)
2025-12-01 12:35:15,167 [INFO] Skipping bill 1914095 - already processed (1441/2605)
2025-12-01 12:35:15,167 [INFO] Skipping bill 1914598 - already processed (1442/2605)
2025-12-01 12:35:15,167 [INFO] Skipping bill 1913104 - already processed (1443/2605)
2025-12-01 12:35:15,167 [INFO] Skipping bill 1914569 - already processed (1444/2605)
2025-12-01 12:35:15,167 [INFO] Skipping bill 1930373 - already processed (1445/2605)
2025-12-01 12:35:15,167 [INFO] Skipping bill 1982090 - already processed (1446/2605)
2025-12-01 12:35:15,167 [INFO] Skipping bill 1914274 - already processed (1447/2605)
2025-12-01 12:35:15,167 [INFO] Skipping bill 1982120 - already processed (1448/2605)
2025-12-01 12:35:15,167 [INFO] Skipping bill 1773806 - already processed (1449/2605)
2025-12-01 12:35:15,167 [INFO] Skipping bill 1880673 - already processed (1450/2605)
2025-12-01 12:35:15,167 [INFO] Skipping bill 1724997 - already processed (1451/2605)
2025-12-01 12:35:15,167 [INFO] Skipping bill 1775230 - already processed (1452/2605)
2025-12-01 12:35:15,167 [INFO] Skipping bill 1889846 - already processed (1453/2605)
2025-12-01 12:35:15,167 [INFO] Skipping bill 1773451 - already processed (1454/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 1759469 - already processed (1455/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 1777407 - already processed (1456/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 1880554 - already processed (1457/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 1854268 - already processed (1458/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 1771135 - already processed (1459/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 1830478 - already processed (1460/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 1780085 - already processed (1461/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 1858003 - already processed (1462/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 1880735 - already processed (1463/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 1882950 - already processed (1464/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 1878925 - already processed (1465/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 1878252 - already processed (1466/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 1884263 - already processed (1467/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 1873862 - already processed (1468/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 1882265 - already processed (1469/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 1771247 - already processed (1470/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 1836612 - already processed (1471/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 1820748 - already processed (1472/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 1886418 - already processed (1473/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 1769931 - already processed (1474/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 1740020 - already processed (1475/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 1878961 - already processed (1476/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 1768592 - already processed (1477/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 2045757 - already processed (1478/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 2030536 - already processed (1479/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 2047301 - already processed (1480/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 2039357 - already processed (1481/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 2034685 - already processed (1482/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 2037642 - already processed (1483/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 2022168 - already processed (1484/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 2052644 - already processed (1485/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 2051282 - already processed (1486/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 1937863 - already processed (1487/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 2043639 - already processed (1488/2605)
2025-12-01 12:35:15,168 [INFO] Skipping bill 2012593 - already processed (1489/2605)
2025-12-01 12:35:15,169 [INFO] Skipping bill 1991206 - already processed (1490/2605)
2025-12-01 12:35:15,169 [INFO] Skipping bill 1947924 - already processed (1491/2605)
2025-12-01 12:35:15,169 [INFO] Skipping bill 2012408 - already processed (1492/2605)
2025-12-01 12:35:15,169 [INFO] Skipping bill 2021116 - already processed (1493/2605)
2025-12-01 12:35:15,169 [INFO] Skipping bill 1973751 - already processed (1494/2605)
2025-12-01 12:35:15,169 [INFO] Skipping bill 2045246 - already processed (1495/2605)
2025-12-01 12:35:15,169 [INFO] Skipping bill 1910852 - already processed (1496/2605)
2025-12-01 12:35:15,169 [INFO] Skipping bill 1956391 - already processed (1497/2605)
2025-12-01 12:35:15,169 [INFO] Skipping bill 2023404 - already processed (1498/2605)
2025-12-01 12:35:15,169 [INFO] Skipping bill 2035307 - already processed (1499/2605)
2025-12-01 12:35:15,169 [INFO] Skipping bill 1944456 - already processed (1500/2605)
2025-12-01 12:35:15,169 [INFO] Skipping bill 2041064 - already processed (1501/2605)
2025-12-01 12:35:15,169 [INFO] Skipping bill 2039278 - already processed (1502/2605)
2025-12-01 12:35:15,169 [INFO] Skipping bill 2041823 - already processed (1503/2605)
2025-12-01 12:35:15,169 [INFO] Skipping bill 1946034 - already processed (1504/2605)
2025-12-01 12:35:15,169 [INFO] Skipping bill 2038442 - already processed (1505/2605)
2025-12-01 12:35:15,169 [INFO] Skipping bill 1905925 - already processed (1506/2605)
2025-12-01 12:35:15,169 [INFO] Processing 1507/2605: Bill ID 2041076
2025-12-01 12:35:15,675 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:35:15,678 [ERROR] Failed to generate report for bill 2041076: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 136745 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:35:16,687 [INFO] Processing 1508/2605: Bill ID 2037948
2025-12-01 12:35:17,167 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:35:17,168 [ERROR] Failed to generate report for bill 2037948: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 136856 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:35:18,177 [INFO] Skipping bill 1757100 - already processed (1509/2605)
2025-12-01 12:35:18,177 [INFO] Skipping bill 1766918 - already processed (1510/2605)
2025-12-01 12:35:18,177 [INFO] Skipping bill 1691606 - already processed (1511/2605)
2025-12-01 12:35:18,177 [INFO] Skipping bill 1757087 - already processed (1512/2605)
2025-12-01 12:35:18,177 [INFO] Skipping bill 1691984 - already processed (1513/2605)
2025-12-01 12:35:18,178 [INFO] Skipping bill 1724146 - already processed (1514/2605)
2025-12-01 12:35:18,178 [INFO] Skipping bill 1811367 - already processed (1515/2605)
2025-12-01 12:35:18,178 [INFO] Skipping bill 1864559 - already processed (1516/2605)
2025-12-01 12:35:18,178 [INFO] Skipping bill 1833383 - already processed (1517/2605)
2025-12-01 12:35:18,178 [INFO] Skipping bill 1839979 - already processed (1518/2605)
2025-12-01 12:35:18,178 [INFO] Skipping bill 1863636 - already processed (1519/2605)
2025-12-01 12:35:18,178 [INFO] Skipping bill 1866932 - already processed (1520/2605)
2025-12-01 12:35:18,178 [INFO] Skipping bill 1829566 - already processed (1521/2605)
2025-12-01 12:35:18,178 [INFO] Skipping bill 1858179 - already processed (1522/2605)
2025-12-01 12:35:18,178 [INFO] Skipping bill 1857154 - already processed (1523/2605)
2025-12-01 12:35:18,178 [INFO] Skipping bill 1866872 - already processed (1524/2605)
2025-12-01 12:35:18,178 [INFO] Skipping bill 1844272 - already processed (1525/2605)
2025-12-01 12:35:18,178 [INFO] Skipping bill 1875576 - already processed (1526/2605)
2025-12-01 12:35:18,178 [INFO] Skipping bill 1875933 - already processed (1527/2605)
2025-12-01 12:35:18,179 [INFO] Skipping bill 1844730 - already processed (1528/2605)
2025-12-01 12:35:18,179 [INFO] Skipping bill 1858971 - already processed (1529/2605)
2025-12-01 12:35:18,179 [INFO] Skipping bill 1870027 - already processed (1530/2605)
2025-12-01 12:35:18,179 [INFO] Skipping bill 1994761 - already processed (1531/2605)
2025-12-01 12:35:18,179 [INFO] Skipping bill 1935080 - already processed (1532/2605)
2025-12-01 12:35:18,179 [INFO] Skipping bill 1945535 - already processed (1533/2605)
2025-12-01 12:35:18,179 [INFO] Skipping bill 1979504 - already processed (1534/2605)
2025-12-01 12:35:18,179 [INFO] Skipping bill 1937835 - already processed (1535/2605)
2025-12-01 12:35:18,179 [INFO] Skipping bill 1918971 - already processed (1536/2605)
2025-12-01 12:35:18,179 [INFO] Skipping bill 1986390 - already processed (1537/2605)
2025-12-01 12:35:18,179 [INFO] Skipping bill 1945988 - already processed (1538/2605)
2025-12-01 12:35:18,179 [INFO] Skipping bill 1940828 - already processed (1539/2605)
2025-12-01 12:35:18,179 [INFO] Skipping bill 1986602 - already processed (1540/2605)
2025-12-01 12:35:18,179 [INFO] Skipping bill 1988979 - already processed (1541/2605)
2025-12-01 12:35:18,180 [INFO] Skipping bill 2008057 - already processed (1542/2605)
2025-12-01 12:35:18,180 [INFO] Skipping bill 1986556 - already processed (1543/2605)
2025-12-01 12:35:18,180 [INFO] Skipping bill 1986569 - already processed (1544/2605)
2025-12-01 12:35:18,180 [INFO] Skipping bill 1988788 - already processed (1545/2605)
2025-12-01 12:35:18,180 [INFO] Skipping bill 2028551 - already processed (1546/2605)
2025-12-01 12:35:18,180 [INFO] Skipping bill 1937524 - already processed (1547/2605)
2025-12-01 12:35:18,180 [INFO] Skipping bill 1966994 - already processed (1548/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 2030023 - already processed (1549/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 1988713 - already processed (1550/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 1988914 - already processed (1551/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 2030055 - already processed (1552/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 1666116 - already processed (1553/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 1792231 - already processed (1554/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 1802681 - already processed (1555/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 1921522 - already processed (1556/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 1999928 - already processed (1557/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 2022730 - already processed (1558/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 2024009 - already processed (1559/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 1895318 - already processed (1560/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 1944028 - already processed (1561/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 1954350 - already processed (1562/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 1954733 - already processed (1563/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 2029172 - already processed (1564/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 1944096 - already processed (1565/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 1895182 - already processed (1566/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 1919972 - already processed (1567/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 1895637 - already processed (1568/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 1819620 - already processed (1569/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 1811138 - already processed (1570/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 1948251 - already processed (1571/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 1901594 - already processed (1572/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 1833554 - already processed (1573/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 1833050 - already processed (1574/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 1830912 - already processed (1575/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 1834207 - already processed (1576/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 1795187 - already processed (1577/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 1828458 - already processed (1578/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 1808304 - already processed (1579/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 1834240 - already processed (1580/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 1831671 - already processed (1581/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 1832378 - already processed (1582/2605)
2025-12-01 12:35:18,181 [INFO] Skipping bill 1828742 - already processed (1583/2605)
2025-12-01 12:35:18,182 [INFO] Skipping bill 1833429 - already processed (1584/2605)
2025-12-01 12:35:18,182 [INFO] Skipping bill 1828784 - already processed (1585/2605)
2025-12-01 12:35:18,182 [INFO] Skipping bill 1825620 - already processed (1586/2605)
2025-12-01 12:35:18,182 [INFO] Skipping bill 1799785 - already processed (1587/2605)
2025-12-01 12:35:18,182 [INFO] Skipping bill 1832466 - already processed (1588/2605)
2025-12-01 12:35:18,182 [INFO] Skipping bill 1831669 - already processed (1589/2605)
2025-12-01 12:35:18,182 [INFO] Skipping bill 1832147 - already processed (1590/2605)
2025-12-01 12:35:18,182 [INFO] Skipping bill 1831971 - already processed (1591/2605)
2025-12-01 12:35:18,182 [INFO] Skipping bill 1832437 - already processed (1592/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1828244 - already processed (1593/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1833731 - already processed (1594/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1833264 - already processed (1595/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1833393 - already processed (1596/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1825869 - already processed (1597/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1825916 - already processed (1598/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1873399 - already processed (1599/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1826595 - already processed (1600/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1832185 - already processed (1601/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1832434 - already processed (1602/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1831535 - already processed (1603/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1834179 - already processed (1604/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1834106 - already processed (1605/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1946381 - already processed (1606/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1953992 - already processed (1607/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1948149 - already processed (1608/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1959470 - already processed (1609/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1946783 - already processed (1610/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1955110 - already processed (1611/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1959302 - already processed (1612/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1959458 - already processed (1613/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1960722 - already processed (1614/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 
1951003 - already processed (1615/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1954702 - already processed (1616/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1954311 - already processed (1617/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1959312 - already processed (1618/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1959377 - already processed (1619/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1954015 - already processed (1620/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1954357 - already processed (1621/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1944274 - already processed (1622/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1944487 - already processed (1623/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1959723 - already processed (1624/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1960832 - already processed (1625/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1971015 - already processed (1626/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1971366 - already processed (1627/2605) 2025-12-01 12:35:18,182 [INFO] Skipping bill 1733375 - already processed (1628/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 1700527 - already processed (1629/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 1719413 - already processed (1630/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 1694457 - already processed (1631/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 1744060 - already processed (1632/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 1727826 - already processed (1633/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 1743424 - already processed (1634/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 1732248 - already processed (1635/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 1731629 - already processed (1636/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 1769317 - already processed (1637/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 1747471 - already processed (1638/2605) 
2025-12-01 12:35:18,183 [INFO] Skipping bill 1747557 - already processed (1639/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 1710763 - already processed (1640/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 1782999 - already processed (1641/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 1781207 - already processed (1642/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 1726065 - already processed (1643/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 1898826 - already processed (1644/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 1992725 - already processed (1645/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 1988473 - already processed (1646/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 1970030 - already processed (1647/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 2007109 - already processed (1648/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 1891805 - already processed (1649/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 1949957 - already processed (1650/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 1990181 - already processed (1651/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 1991711 - already processed (1652/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 1897779 - already processed (1653/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 2006851 - already processed (1654/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 1975361 - already processed (1655/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 1987235 - already processed (1656/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 2007736 - already processed (1657/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 2000200 - already processed (1658/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 1923991 - already processed (1659/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 1892858 - already processed (1660/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 2000248 - already processed (1661/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 
1971072 - already processed (1662/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 2008077 - already processed (1663/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 1907668 - already processed (1664/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 1962916 - already processed (1665/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 2005286 - already processed (1666/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 2005181 - already processed (1667/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 1891063 - already processed (1668/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 1900186 - already processed (1669/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 1994657 - already processed (1670/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 2008307 - already processed (1671/2605) 2025-12-01 12:35:18,183 [INFO] Skipping bill 1991260 - already processed (1672/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 2006384 - already processed (1673/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 2002051 - already processed (1674/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 1973236 - already processed (1675/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 2007316 - already processed (1676/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 1890894 - already processed (1677/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 2000178 - already processed (1678/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 1982970 - already processed (1679/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 2006497 - already processed (1680/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 1890775 - already processed (1681/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 1892224 - already processed (1682/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 1954141 - already processed (1683/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 2006579 - already processed (1684/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 2006128 - already processed (1685/2605) 
2025-12-01 12:35:18,184 [INFO] Skipping bill 2024097 - already processed (1686/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 2034878 - already processed (1687/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 1891396 - already processed (1688/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 2040103 - already processed (1689/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 2041986 - already processed (1690/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 1987712 - already processed (1691/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 2005998 - already processed (1692/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 2008318 - already processed (1693/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 1892843 - already processed (1694/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 1946392 - already processed (1695/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 1971169 - already processed (1696/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 1890786 - already processed (1697/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 1891256 - already processed (1698/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 1942882 - already processed (1699/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 2031981 - already processed (1700/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 2033602 - already processed (1701/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 2034279 - already processed (1702/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 1974704 - already processed (1703/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 1950849 - already processed (1704/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 1975022 - already processed (1705/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 1981850 - already processed (1706/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 1890492 - already processed (1707/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 2020803 - already processed (1708/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 
2005343 - already processed (1709/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 1890466 - already processed (1710/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 1975612 - already processed (1711/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 1994176 - already processed (1712/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 1990550 - already processed (1713/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 1891411 - already processed (1714/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 1983542 - already processed (1715/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 1999872 - already processed (1716/2605) 2025-12-01 12:35:18,184 [INFO] Skipping bill 2007449 - already processed (1717/2605) 2025-12-01 12:35:18,185 [INFO] Skipping bill 2039972 - already processed (1718/2605) 2025-12-01 12:35:18,185 [INFO] Skipping bill 1892428 - already processed (1719/2605) 2025-12-01 12:35:18,185 [INFO] Skipping bill 1891501 - already processed (1720/2605) 2025-12-01 12:35:18,185 [INFO] Skipping bill 2007840 - already processed (1721/2605) 2025-12-01 12:35:18,185 [INFO] Skipping bill 1976041 - already processed (1722/2605) 2025-12-01 12:35:18,185 [INFO] Skipping bill 1992763 - already processed (1723/2605) 2025-12-01 12:35:18,185 [INFO] Skipping bill 1993770 - already processed (1724/2605) 2025-12-01 12:35:18,185 [INFO] Skipping bill 2007872 - already processed (1725/2605) 2025-12-01 12:35:18,185 [INFO] Skipping bill 1936766 - already processed (1726/2605) 2025-12-01 12:35:18,185 [INFO] Skipping bill 1676049 - already processed (1727/2605) 2025-12-01 12:35:18,185 [INFO] Processing 1728/2605: Bill ID 1704512 2025-12-01 12:35:18,681 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 12:35:18,682 [ERROR] Failed to generate report for bill 1704512: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 178116 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 178116 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 12:35:19,692 [INFO] Skipping bill 1828750 - already processed (1729/2605) 2025-12-01 12:35:19,693 [INFO] Skipping bill 1823594 - already processed (1730/2605) 2025-12-01 12:35:19,693 [INFO] Skipping bill 1820331 - already processed (1731/2605) 2025-12-01 12:35:19,693 [INFO] Skipping bill 1810219 - already processed (1732/2605) 2025-12-01 12:35:19,693 [INFO] Skipping bill 1813477 - already processed (1733/2605) 2025-12-01 12:35:19,693 [INFO] Skipping bill 1858814 - already processed (1734/2605) 2025-12-01 12:35:19,694 [INFO] Skipping bill 1882805 - already processed (1735/2605) 2025-12-01 12:35:19,694 [INFO] Skipping bill 1811586 - already processed (1736/2605) 2025-12-01 12:35:19,694 [INFO] Skipping bill 1794392 - already processed (1737/2605) 2025-12-01 12:35:19,694 [INFO] Processing 1738/2605: Bill ID 1844899 2025-12-01 12:35:20,205 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 12:35:20,206 [ERROR] Failed to generate report for bill 1844899: Error code: 400 - {'error': {'message': 
"This model's maximum context length is 128000 tokens. However, your messages resulted in 150202 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 150202 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 12:35:21,216 [INFO] Skipping bill 1954171 - already processed (1739/2605) 2025-12-01 12:35:21,217 [INFO] Skipping bill 1911041 - already processed (1740/2605) 2025-12-01 12:35:21,217 [INFO] Skipping bill 1963098 - already processed (1741/2605) 2025-12-01 12:35:21,217 [INFO] Skipping bill 1943827 - already processed (1742/2605) 2025-12-01 12:35:21,217 [INFO] Skipping bill 1968353 - already processed (1743/2605) 2025-12-01 12:35:21,217 [INFO] Skipping bill 1981617 - already processed (1744/2605) 2025-12-01 12:35:21,217 [INFO] Skipping bill 1995499 - already processed (1745/2605) 2025-12-01 12:35:21,217 [INFO] Skipping bill 1954569 - already processed (1746/2605) 2025-12-01 12:35:21,217 [INFO] Skipping bill 1950395 - already processed (1747/2605) 2025-12-01 12:35:21,217 [INFO] Skipping bill 1989323 - already processed (1748/2605) 2025-12-01 12:35:21,217 [INFO] Skipping bill 1904576 - already processed (1749/2605) 2025-12-01 12:35:21,217 [INFO] Skipping bill 1968434 - already processed (1750/2605) 2025-12-01 12:35:21,217 [INFO] Processing 
1751/2605: Bill ID 2046115 2025-12-01 12:35:22,280 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 12:35:22,281 [ERROR] Failed to generate report for bill 2046115: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 321718 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... 
**kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... **kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return 
self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 321718 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 12:35:23,290 [INFO] Skipping bill 1912099 - already processed (1752/2605) 2025-12-01 12:35:23,296 [INFO] Skipping bill 1946923 - already processed (1753/2605) 2025-12-01 12:35:23,296 [INFO] Processing 1754/2605: Bill ID 2046119 2025-12-01 12:35:24,121 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 12:35:24,124 [ERROR] Failed to generate report for bill 2046119: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 259421 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
                                      ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 259421 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:35:25,134 [INFO] Processing 1755/2605: Bill ID 1897901
2025-12-01 12:35:26,380 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:35:26,382 [ERROR] Failed to generate report for bill 1897901: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 499565 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
                                      ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 499565 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:35:27,393 [INFO] Processing 1756/2605: Bill ID 1948482
2025-12-01 12:35:28,318 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:35:28,321 [ERROR] Failed to generate report for bill 1948482: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 283315 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
                                      ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 283315 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:35:29,331 [INFO] Skipping bill 1800317 - already processed (1757/2605)
2025-12-01 12:35:29,331 [INFO] Skipping bill 1800156 - already processed (1758/2605)
2025-12-01 12:35:29,331 [INFO] Skipping bill 1854552 - already processed (1759/2605)
2025-12-01 12:35:29,331 [INFO] Skipping bill 1680053 - already processed (1760/2605)
2025-12-01 12:35:29,331 [INFO] Skipping bill 1682772 - already processed (1761/2605)
2025-12-01 12:35:29,332 [INFO] Skipping bill 1737434 - already processed (1762/2605)
2025-12-01 12:35:29,332 [INFO] Skipping bill 1981655 - already processed (1763/2605)
2025-12-01 12:35:29,332 [INFO] Skipping bill 1982851 - already processed (1764/2605)
2025-12-01 12:35:29,332 [INFO] Skipping bill 1934587 - already processed (1765/2605)
2025-12-01 12:35:29,332 [INFO] Skipping bill 1981303 - already processed (1766/2605)
2025-12-01 12:35:29,332 [INFO] Skipping bill 1983676 - already processed (1767/2605)
2025-12-01 12:35:29,332 [INFO] Skipping bill 1969845 - already processed (1768/2605)
2025-12-01 12:35:29,333 [INFO] Skipping bill 1983355 - already processed (1769/2605)
2025-12-01 12:35:29,333 [INFO] Skipping bill 2009795 - already processed (1770/2605)
2025-12-01 12:35:29,333 [INFO] Skipping bill 1973485 - already processed (1771/2605)
2025-12-01 12:35:29,333 [INFO] Skipping bill 1967494 - already processed (1772/2605)
2025-12-01 12:35:29,333 [INFO] Skipping bill 1973283 - already processed (1773/2605)
2025-12-01 12:35:29,333 [INFO] Skipping bill 1639846 - already processed (1774/2605)
2025-12-01 12:35:29,333 [INFO] Skipping bill 1646426 - already processed (1775/2605)
2025-12-01 12:35:29,334 [INFO] Skipping bill 1673591 - already processed (1776/2605)
2025-12-01 12:35:29,334 [INFO] Skipping bill 1639749 - already processed (1777/2605)
2025-12-01 12:35:29,334 [INFO] Skipping bill 1655379 - already processed (1778/2605)
2025-12-01 12:35:29,334 [INFO] Skipping bill 1630766 - already processed (1779/2605)
2025-12-01 12:35:29,334 [INFO] Skipping bill 1630878 - already processed (1780/2605)
2025-12-01 12:35:29,334 [INFO] Skipping bill 1630898 - already processed (1781/2605)
2025-12-01 12:35:29,334 [INFO] Skipping bill 1645265 - already processed (1782/2605)
2025-12-01 12:35:29,334 [INFO] Skipping bill 1650459 - already processed (1783/2605)
2025-12-01 12:35:29,334 [INFO] Skipping bill 1645172 - already processed (1784/2605)
2025-12-01 12:35:29,334 [INFO] Skipping bill 1630804 - already processed (1785/2605)
2025-12-01 12:35:29,334 [INFO] Skipping bill 1630761 - already processed (1786/2605)
2025-12-01 12:35:29,338 [INFO] Skipping bill 1652712 - already processed (1787/2605)
2025-12-01 12:35:29,338 [INFO] Skipping bill 1633968 - already processed (1788/2605)
2025-12-01 12:35:29,338 [INFO] Skipping bill 1644865 - already processed (1789/2605)
2025-12-01 12:35:29,338 [INFO] Skipping bill 1645061 - already processed (1790/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1809843 - already processed (1791/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1811981 - already processed (1792/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1812040 - already processed (1793/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1798563 - already processed (1794/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1807894 - already processed (1795/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1798580 - already processed (1796/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1800951 - already processed (1797/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1808295 - already processed (1798/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1799462 - already processed (1799/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1808024 - already processed (1800/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1807991 - already processed (1801/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1812376 - already processed (1802/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1822475 - already processed (1803/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1811644 - already processed (1804/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1794980 - already processed (1805/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1808264 - already processed (1806/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1801793 - already processed (1807/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1799221 - already processed (1808/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1822208 - already processed (1809/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1800673 - already processed (1810/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1809026 - already processed (1811/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1812182 - already processed (1812/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1886330 - already processed (1813/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1904645 - already processed (1814/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1911036 - already processed (1815/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1904674 - already processed (1816/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1901323 - already processed (1817/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1904347 - already processed (1818/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1925485 - already processed (1819/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1886222 - already processed (1820/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1905613 - already processed (1821/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1912330 - already processed (1822/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1914968 - already processed (1823/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1925408 - already processed (1824/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1886065 - already processed (1825/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1905445 - already processed (1826/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1905965 - already processed (1827/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1886188 - already processed (1828/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1905894 - already processed (1829/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1912145 - already processed (1830/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1927784 - already processed (1831/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1941702 - already processed (1832/2605)
2025-12-01 12:35:29,339 [INFO] Skipping bill 1929947 - already processed (1833/2605)
2025-12-01 12:35:29,340 [INFO] Skipping bill 1905942 - already processed (1834/2605)
2025-12-01 12:35:29,340 [INFO] Skipping bill 1912012 - already processed (1835/2605)
2025-12-01 12:35:29,340 [INFO] Skipping bill 1905698 - already processed (1836/2605)
2025-12-01 12:35:29,340 [INFO] Skipping bill 1886051 - already processed (1837/2605)
2025-12-01 12:35:29,340 [INFO] Skipping bill 1932239 - already processed (1838/2605)
2025-12-01 12:35:29,340 [INFO] Skipping bill 1932502 - already processed (1839/2605)
2025-12-01 12:35:29,340 [INFO] Skipping bill 1885937 - already processed (1840/2605)
2025-12-01 12:35:29,340 [INFO] Skipping bill 1900803 - already processed (1841/2605)
2025-12-01 12:35:29,340 [INFO] Skipping bill 1905712 - already processed (1842/2605)
2025-12-01 12:35:29,340 [INFO] Skipping bill 1905995 - already processed (1843/2605)
2025-12-01 12:35:29,340 [INFO] Skipping bill 1902641 - already processed (1844/2605)
2025-12-01 12:35:29,340 [INFO] Skipping bill 1905891 - already processed (1845/2605)
2025-12-01 12:35:29,340 [INFO] Skipping bill 1905860 - already processed (1846/2605)
2025-12-01 12:35:29,340 [INFO] Skipping bill 1908254 - already processed (1847/2605)
2025-12-01 12:35:29,340 [INFO] Skipping bill 1905920 - already processed (1848/2605)
2025-12-01 12:35:29,340 [INFO] Skipping bill 1886241 - already processed (1849/2605)
2025-12-01 12:35:29,340 [INFO] Skipping bill 1886007 - already processed (1850/2605)
2025-12-01 12:35:29,340 [INFO] Skipping bill 1896347 - already processed (1851/2605)
2025-12-01 12:35:29,340 [INFO] Skipping bill 1905982 - already processed (1852/2605)
2025-12-01 12:35:29,340 [INFO] Skipping bill 1898426 - already processed (1853/2605)
2025-12-01 12:35:29,340 [INFO] Skipping bill 1791614 - already processed (1854/2605)
2025-12-01 12:35:29,340 [INFO] Skipping bill 1792210 - already processed (1855/2605)
2025-12-01 12:35:29,340 [INFO] Skipping bill 1825997 - already processed (1856/2605)
2025-12-01 12:35:29,340 [INFO] Skipping bill 1792205 - already processed (1857/2605)
2025-12-01 12:35:29,340 [INFO] Skipping bill 1801141 - already processed (1858/2605)
2025-12-01 12:35:29,340 [INFO] Skipping bill 1796759 - already processed (1859/2605)
2025-12-01 12:35:29,340 [INFO] Skipping bill 1794124 - already processed (1860/2605)
2025-12-01 12:35:29,340 [INFO] Skipping bill 1680711 - already processed (1861/2605)
2025-12-01 12:35:29,340 [INFO] Skipping bill 1686234 - already processed (1862/2605)
2025-12-01 12:35:29,340 [INFO] Skipping bill 1813390 - already processed (1863/2605)
2025-12-01 12:35:29,340 [INFO] Skipping bill 1797745 - already processed (1864/2605)
2025-12-01 12:35:29,340 [INFO] Skipping bill 1810331 - already processed (1865/2605)
2025-12-01 12:35:29,340 [INFO] Skipping bill 1813358 - already processed (1866/2605)
2025-12-01 12:35:29,340 [INFO] Skipping bill 1657734 - already processed (1867/2605)
2025-12-01 12:35:29,340 [INFO] Processing 1868/2605: Bill ID 1644054
2025-12-01 12:35:30,678 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:35:30,680 [ERROR] Failed to generate report for bill 1644054: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 410788 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
                                      ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 410788 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:35:31,688 [INFO] Processing 1869/2605: Bill ID 1645282
2025-12-01 12:35:33,027 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:35:33,028 [ERROR] Failed to generate report for bill 1645282: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 410770 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
                                      ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 410770 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:35:34,038 [INFO] Processing 1870/2605: Bill ID 1644063
2025-12-01 12:35:34,665 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:35:34,666 [ERROR] Failed to generate report for bill 1644063: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 224071 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
                                      ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 224071 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:35:34,713 [INFO] Saved 2598 reports to data/bill_reports.json
2025-12-01 12:35:34,713 [INFO] Progress: 1870/2605 - Processed: 2, Skipped: 1787, Errors: 81
2025-12-01 12:35:35,719 [INFO] Processing 1871/2605: Bill ID 1645384
2025-12-01 12:35:36,306 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:35:36,307 [ERROR] Failed to generate report for bill 1645384: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 224065 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:35:37,313 [INFO] Processing 1872/2605: Bill ID 1645468
2025-12-01 12:35:38,047 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:35:38,049 [ERROR] Failed to generate report for bill 1645468: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 242533 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:35:39,060 [INFO] Processing 1873/2605: Bill ID 1796787
2025-12-01 12:35:40,196 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:35:40,198 [ERROR] Failed to generate report for bill 1796787: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 436514 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:35:41,206 [INFO] Processing 1874/2605: Bill ID 1643905
2025-12-01 12:35:42,144 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:35:42,147 [ERROR] Failed to generate report for bill 1643905: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 242552 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:35:43,152 [INFO] Processing 1875/2605: Bill ID 1796722
2025-12-01 12:35:44,396 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:35:44,398 [ERROR] Failed to generate report for bill 1796722: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 436532 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:35:45,407 [INFO] Skipping bill 1952329 - already processed (1876/2605)
2025-12-01 12:35:45,407 [INFO] Skipping bill 1964254 - already processed (1877/2605)
2025-12-01 12:35:45,407 [INFO] Skipping bill 1904212 - already processed (1878/2605)
2025-12-01 12:35:45,407 [INFO] Skipping bill 1903879 - already processed (1879/2605)
2025-12-01 12:35:45,407 [INFO] Skipping bill 1930459 - already processed (1880/2605)
2025-12-01 12:35:45,408 [INFO] Skipping bill 1938736 - already processed (1881/2605)
2025-12-01 12:35:45,408 [INFO] Skipping bill 1941657 - already processed (1882/2605)
2025-12-01 12:35:45,408 [INFO] Skipping bill 1932498 - already processed (1883/2605)
2025-12-01 12:35:45,408 [INFO] Skipping bill 1898840 - already processed (1884/2605)
2025-12-01 12:35:45,408 [INFO] Skipping bill 1903962 - already processed (1885/2605)
2025-12-01 12:35:45,408 [INFO] Skipping bill 1943677 - already processed (1886/2605)
2025-12-01 12:35:45,409 [INFO] Skipping bill 1911202 - already processed (1887/2605)
2025-12-01 12:35:45,409 [INFO] Skipping bill 1898343 - already processed (1888/2605)
2025-12-01 12:35:45,409 [INFO] Skipping bill 1930701 - already processed (1889/2605)
2025-12-01 12:35:45,409 [INFO] Skipping bill 1911699 - already processed (1890/2605)
2025-12-01 12:35:45,409 [INFO] Skipping bill 1985707 - already processed (1891/2605)
2025-12-01 12:35:45,409 [INFO] Skipping bill 2025140 - already processed (1892/2605)
2025-12-01 12:35:45,409 [INFO] Processing 1893/2605: Bill ID 1916784
2025-12-01 12:35:46,136 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:35:46,137 [ERROR] Failed to generate report for bill 1916784: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 217357 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:35:47,147 [INFO] Processing 1894/2605: Bill ID 1908012
2025-12-01 12:35:48,388 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:35:48,389 [ERROR] Failed to generate report for bill 1908012: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 458968 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:35:49,397 [INFO] Processing 1895/2605: Bill ID 1907961
2025-12-01 12:35:50,847 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:35:50,849 [ERROR] Failed to generate report for bill 1907961: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 458948 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 458948 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 12:35:51,859 [INFO] Processing 1896/2605: Bill ID 1907826 2025-12-01 12:35:52,894 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 12:35:52,896 [ERROR] Failed to generate report for bill 1907826: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 284007 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 284007 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 12:35:53,906 [INFO] Processing 1897/2605: Bill ID 2023840 2025-12-01 12:35:55,864 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 12:35:55,866 [ERROR] Failed to generate report for bill 2023840: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 709732 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 709732 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 12:35:56,876 [INFO] Processing 1898/2605: Bill ID 1907778 2025-12-01 12:35:57,810 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 12:35:57,812 [ERROR] Failed to generate report for bill 1907778: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 284021 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 284021 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 12:35:58,822 [INFO] Skipping bill 1691917 - already processed (1899/2605) 2025-12-01 12:35:58,823 [INFO] Skipping bill 1695960 - already processed (1900/2605) 2025-12-01 12:35:58,823 [INFO] Skipping bill 1850601 - already processed (1901/2605) 2025-12-01 12:35:58,823 [INFO] Skipping bill 1838098 - already processed (1902/2605) 2025-12-01 12:35:58,823 [INFO] Skipping bill 1842521 - already processed (1903/2605) 2025-12-01 12:35:58,823 [INFO] Skipping bill 1809518 - already processed (1904/2605) 2025-12-01 12:35:58,823 [INFO] Skipping bill 1839623 - already processed (1905/2605) 2025-12-01 12:35:58,823 [INFO] Skipping bill 1836854 - already processed (1906/2605) 2025-12-01 12:35:58,824 [INFO] Skipping bill 1828203 - already processed (1907/2605) 2025-12-01 12:35:58,824 [INFO] Skipping bill 1823415 - already processed (1908/2605) 2025-12-01 12:35:58,824 [INFO] Processing 1909/2605: Bill ID 1809702 2025-12-01 12:35:59,753 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 12:35:59,755 [ERROR] 
Failed to generate report for bill 1809702: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 287475 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 287475 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 12:36:00,763 [INFO] Processing 1910/2605: Bill ID 1812739 2025-12-01 12:36:01,906 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 12:36:01,908 [ERROR] Failed to generate report for bill 1812739: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 287482 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 287482 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 12:36:01,960 [INFO] Saved 2598 reports to data/bill_reports.json 2025-12-01 12:36:01,960 [INFO] Progress: 1910/2605 - Processed: 2, Skipped: 1814, Errors: 94 2025-12-01 12:36:02,965 [INFO] Skipping bill 1993190 - already processed (1911/2605) 2025-12-01 12:36:02,967 [INFO] Skipping bill 2009723 - already processed (1912/2605) 2025-12-01 12:36:02,967 [INFO] Skipping bill 1970932 - already processed (1913/2605) 2025-12-01 12:36:02,967 [INFO] Skipping bill 1990795 - already processed (1914/2605) 2025-12-01 12:36:02,967 [INFO] Skipping bill 1966877 - already processed (1915/2605) 2025-12-01 12:36:02,967 [INFO] Skipping bill 1972008 - already processed (1916/2605) 2025-12-01 12:36:02,967 [INFO] Skipping bill 1994548 - already processed (1917/2605) 2025-12-01 12:36:02,967 [INFO] Skipping bill 1991745 - already processed (1918/2605) 2025-12-01 12:36:02,968 [INFO] Skipping bill 2010818 - already processed (1919/2605) 2025-12-01 12:36:02,968 [INFO] Skipping bill 2003316 - already processed (1920/2605) 2025-12-01 12:36:02,968 [INFO] Skipping bill 2021830 
- already processed (1921/2605)
2025-12-01 12:36:02,968 [INFO] Skipping bill 2009667 - already processed (1922/2605)
2025-12-01 12:36:02,968 [INFO] Skipping bill 2011559 - already processed (1923/2605)
2025-12-01 12:36:02,968 [INFO] Skipping bill 1981081 - already processed (1924/2605)
2025-12-01 12:36:02,968 [INFO] Skipping bill 1990559 - already processed (1925/2605)
2025-12-01 12:36:02,968 [INFO] Skipping bill 1968858 - already processed (1926/2605)
2025-12-01 12:36:02,968 [INFO] Skipping bill 1841344 - already processed (1927/2605)
2025-12-01 12:36:02,968 [INFO] Skipping bill 1837111 - already processed (1928/2605)
2025-12-01 12:36:02,968 [INFO] Skipping bill 1783445 - already processed (1929/2605)
2025-12-01 12:36:02,968 [INFO] Skipping bill 1854251 - already processed (1930/2605)
2025-12-01 12:36:02,968 [INFO] Skipping bill 1867071 - already processed (1931/2605)
2025-12-01 12:36:02,969 [INFO] Skipping bill 1782940 - already processed (1932/2605)
2025-12-01 12:36:02,969 [INFO] Skipping bill 1780646 - already processed (1933/2605)
2025-12-01 12:36:02,969 [INFO] Skipping bill 1781005 - already processed (1934/2605)
2025-12-01 12:36:02,969 [INFO] Processing 1935/2605: Bill ID 1709614
2025-12-01 12:36:05,286 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:36:05,288 [ERROR] Failed to generate report for bill 1709614: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 980737 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(messages, stop=stop, run_manager=run_manager, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 980737 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:36:06,298 [INFO] Processing 1936/2605: Bill ID 1709655
2025-12-01 12:36:08,647 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:36:08,648 [ERROR] Failed to generate report for bill 1709655: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 982574 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:36:09,655 [INFO] Skipping bill 2034598 - already processed (1937/2605)
2025-12-01 12:36:09,655 [INFO] Skipping bill 2034722 - already processed (1938/2605)
2025-12-01 12:36:09,655 [INFO] Skipping bill 2038518 - already processed (1939/2605)
2025-12-01 12:36:09,655 [INFO] Skipping bill 2039752 - already processed (1940/2605)
2025-12-01 12:36:09,655 [INFO] Skipping bill 2044087 - already processed (1941/2605)
2025-12-01 12:36:09,656 [INFO] Skipping bill 2042614 - already processed (1942/2605)
2025-12-01 12:36:09,656 [INFO] Skipping bill 2045155 - already processed (1943/2605)
2025-12-01 12:36:09,656 [INFO] Skipping bill 2045662 - already processed (1944/2605)
2025-12-01 12:36:09,656 [INFO] Processing 1945/2605: Bill ID 1974122
2025-12-01 12:36:12,351 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:36:12,353 [ERROR] Failed to generate report for bill 1974122: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1009931 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:36:13,364 [INFO] Processing 1946/2605: Bill ID 1974279
2025-12-01 12:36:16,244 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:36:16,247 [ERROR] Failed to generate report for bill 1974279: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1009921 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:36:17,255 [INFO] Skipping bill 2047792 - already processed (1947/2605)
2025-12-01 12:36:17,256 [INFO] Skipping bill 1842729 - already processed (1948/2605)
2025-12-01 12:36:17,256 [INFO] Skipping bill 1842887 - already processed (1949/2605)
2025-12-01 12:36:17,256 [INFO] Skipping bill 1939111 - already processed (1950/2605)
2025-12-01 12:36:17,257 [INFO] Skipping bill 1895001 - already processed (1951/2605)
2025-12-01 12:36:17,257 [INFO] Skipping bill 1945993 - already processed (1952/2605)
2025-12-01 12:36:17,257 [INFO] Skipping bill 1945813 - already processed (1953/2605)
2025-12-01 12:36:17,257 [INFO] Skipping bill 1774433 - already processed (1954/2605)
2025-12-01 12:36:17,257 [INFO] Skipping bill 1884990 - already processed (1955/2605)
2025-12-01 12:36:17,257 [INFO] Skipping bill 1882572 - already processed (1956/2605)
2025-12-01 12:36:17,258 [INFO] Skipping bill 1784131 - already processed (1957/2605)
2025-12-01 12:36:17,258 [INFO] Skipping bill 1873726 - already processed (1958/2605)
2025-12-01 12:36:17,258 [INFO] Skipping bill 1882205 - already processed (1959/2605)
2025-12-01 12:36:17,258 [INFO] Skipping bill 1860116 - already processed (1960/2605)
2025-12-01 12:36:17,258 [INFO] Skipping bill 1835790 - already processed (1961/2605)
2025-12-01 12:36:17,258 [INFO] Skipping bill 1835624 - already processed (1962/2605)
2025-12-01 12:36:17,258 [INFO] Skipping bill 1876647 - already processed (1963/2605)
2025-12-01 12:36:17,258 [INFO] Skipping bill 1887447 - already processed (1964/2605)
2025-12-01 12:36:17,258 [INFO] Skipping bill 1898165 - already processed (1965/2605)
2025-12-01 12:36:17,258 [INFO] Skipping bill 1780760 - already processed (1966/2605)
2025-12-01 12:36:17,259 [INFO] Skipping bill 1887744 - already processed (1967/2605)
2025-12-01 12:36:17,259 [INFO] Skipping bill 1782128 - already processed (1968/2605)
2025-12-01 12:36:17,259 [INFO] Skipping bill 1887739 - already processed (1969/2605)
2025-12-01 12:36:17,259 [INFO] Skipping bill 1885322 - already processed (1970/2605)
2025-12-01 12:36:17,259 [INFO] Skipping bill 1887646 - already processed (1971/2605)
2025-12-01 12:36:17,259 [INFO] Skipping bill 1897119 - already processed (1972/2605)
2025-12-01 12:36:17,259 [INFO] Skipping bill 1782539 - already processed (1973/2605)
2025-12-01 12:36:17,259 [INFO] Skipping bill 1880117 - already processed (1974/2605)
2025-12-01 12:36:17,259 [INFO] Skipping bill 1810734 - already processed (1975/2605)
2025-12-01 12:36:17,259 [INFO] Skipping bill 1887671 - already processed (1976/2605)
2025-12-01 12:36:17,259 [INFO] Skipping bill 1883053 - already processed (1977/2605)
2025-12-01 12:36:17,259 [INFO] Skipping bill 1861062 - already processed (1978/2605)
2025-12-01 12:36:17,259 [INFO] Skipping bill 1775461 - already processed (1979/2605)
2025-12-01 12:36:17,260 [INFO] Skipping bill 1792331 - already processed (1980/2605)
2025-12-01 12:36:17,260 [INFO] Skipping bill 1765384 - already processed (1981/2605)
2025-12-01 12:36:17,260 [INFO] Skipping bill 1863023 - already processed (1982/2605)
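The `context_length_exceeded` failures above all have the same shape: a bill's full JSON (550k–1M estimated tokens) is passed into a model with a 128,000-token context window. A pre-flight size guard before `chain.invoke` would avoid burning a request on a guaranteed 400. This is a sketch only — `PROMPT_BUDGET`, `estimate_tokens`, and `truncate_to_budget` are hypothetical names, and the 4-characters-per-token ratio is a rough heuristic (an exact count would need a tokenizer such as tiktoken), not what `generate_reports.py` actually does:

```python
# Hypothetical pre-flight guard against context_length_exceeded errors.
# Assumption: ~4 characters per token, a common rough estimate for English
# text; real code should count tokens with the model's actual tokenizer.
MAX_CONTEXT_TOKENS = 128_000
PROMPT_BUDGET = 100_000  # headroom for the prompt template and the reply


def estimate_tokens(text: str) -> int:
    """Crude token estimate: ~4 characters per token."""
    return len(text) // 4


def truncate_to_budget(bill_json: str, budget: int = PROMPT_BUDGET) -> str:
    """Trim bill_json so its estimated token count fits within budget."""
    if estimate_tokens(bill_json) <= budget:
        return bill_json
    # Keep the head of the document; 4 chars/token heuristic again.
    return bill_json[: budget * 4]


# Example: a ~1M-token payload (like the failing bills above) is cut to budget.
huge = "x" * 4_000_000                 # ~1,000,000 estimated tokens
safe = truncate_to_budget(huge)
print(estimate_tokens(safe) <= PROMPT_BUDGET)  # True
```

Blind head-truncation loses the tail of the bill text, so a real fix might instead summarize or chunk the document; the guard only ensures the request can succeed at all.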
2025-12-01 12:36:17,260 [INFO] Skipping bill 1883034 - already processed (1983/2605)
2025-12-01 12:36:17,260 [INFO] Skipping bill 1886748 - already processed (1984/2605)
2025-12-01 12:36:17,260 [INFO] Skipping bill 1886756 - already processed (1985/2605)
2025-12-01 12:36:17,260 [INFO] Skipping bill 1885278 - already processed (1986/2605)
2025-12-01 12:36:17,260 [INFO] Skipping bill 1784087 - already processed (1987/2605)
2025-12-01 12:36:17,260 [INFO] Skipping bill 1886439 - already processed (1988/2605)
2025-12-01 12:36:17,260 [INFO] Skipping bill 1877586 - already processed (1989/2605)
2025-12-01 12:36:17,260 [INFO] Skipping bill 1888775 - already processed (1990/2605)
2025-12-01 12:36:17,260 [INFO] Skipping bill 1773844 - already processed (1991/2605)
2025-12-01 12:36:17,260 [INFO] Skipping bill 1857956 - already processed (1992/2605)
2025-12-01 12:36:17,260 [INFO] Skipping bill 1775721 - already processed (1993/2605)
2025-12-01 12:36:17,260 [INFO] Skipping bill 1861016 - already processed (1994/2605)
2025-12-01 12:36:17,260 [INFO] Skipping bill 1884504 - already processed (1995/2605)
2025-12-01 12:36:17,260 [INFO] Skipping bill 1892975 - already processed (1996/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1886714 - already processed (1997/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1877214 - already processed (1998/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1779520 - already processed (1999/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1882161 - already processed (2000/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1793734 - already processed (2001/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1885501 - already processed (2002/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1887169 - already processed (2003/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1877680 - already processed (2004/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1887282 - already processed (2005/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1774766 - already processed (2006/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1774961 - already processed (2007/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1866654 - already processed (2008/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1779127 - already processed (2009/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1882224 - already processed (2010/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1892198 - already processed (2011/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1759862 - already processed (2012/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1888377 - already processed (2013/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1894701 - already processed (2014/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1864751 - already processed (2015/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1772453 - already processed (2016/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1885309 - already processed (2017/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1886447 - already processed (2018/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1848736 - already processed (2019/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1884301 - already processed (2020/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1881976 - already processed (2021/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1885426 - already processed (2022/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1775334 - already processed (2023/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1884442 - already processed (2024/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1881980 - already processed (2025/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1893238 - already processed (2026/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1865594 - already processed (2027/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1872732 - already processed (2028/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1885341 - already processed (2029/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1764018 - already processed (2030/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1887315 - already processed (2031/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1751404 - already processed (2032/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1888249 - already processed (2033/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1885249 - already processed (2034/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1881398 - already processed (2035/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1866637 - already processed (2036/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1770194 - already processed (2037/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1775580 - already processed (2038/2605)
2025-12-01 12:36:17,261 [INFO] Skipping bill 1784705 - already processed (2039/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1831382 - already processed (2040/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1885274 - already processed (2041/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1892393 - already processed (2042/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1877691 - already processed (2043/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1776083 - already processed (2044/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1760978 - already processed (2045/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1764682 - already processed (2046/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1880344 - already processed (2047/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1886698 - already processed (2048/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1876488 - already processed (2049/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1765330 - already processed (2050/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1887359 - already processed (2051/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1771744 - already processed (2052/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1831359 - already processed (2053/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1774102 - already processed (2054/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1774479 - already processed (2055/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1794846 - already processed (2056/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1894867 - already processed (2057/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1774859 - already processed (2058/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1884522 - already processed (2059/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1866979 - already processed (2060/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1886705 - already processed (2061/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1898170 - already processed (2062/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1885330 - already processed (2063/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1792286 - already processed (2064/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1892877 - already processed (2065/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1884177 - already processed (2066/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1774713 - already processed (2067/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1774626 - already processed (2068/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1884513 - already processed (2069/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1887362 - already processed (2070/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1893236 - already processed (2071/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1883668 - already processed (2072/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1831371 - already processed (2073/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1885671 - already processed (2074/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1885535 - already processed (2075/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1888766 - already processed (2076/2605)
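The skip/process/error pattern in these entries implies a resume loop: bills already present in the report store are skipped with a running counter, each new bill is attempted, and per-bill failures are logged with a traceback and counted rather than aborting the run. A hypothetical reconstruction — `make_report`, the dict shapes, and the stats keys are assumptions, not the actual `create_reports_with_resume`:

```python
# Hypothetical sketch of a resume loop matching the log's behavior.
import logging


def create_reports_with_resume(bills, reports, make_report):
    """Process bills, skipping ones already in `reports`; never abort the run.

    bills: list of dicts with an "id" key (assumed shape).
    reports: dict mapping bill id -> report; doubles as the resume state.
    make_report: callable(bill) -> report; may raise (e.g. BadRequestError).
    """
    stats = {"processed": 0, "skipped": 0, "errors": 0}
    total = len(bills)
    for i, bill in enumerate(bills, 1):
        bill_id = bill["id"]
        if bill_id in reports:
            logging.info("Skipping bill %s - already processed (%d/%d)",
                         bill_id, i, total)
            stats["skipped"] += 1
            continue
        logging.info("Processing %d/%d: Bill ID %s", i, total, bill_id)
        try:
            reports[bill_id] = make_report(bill)
            stats["processed"] += 1
        except Exception:
            # logging.exception records the full traceback, as seen above.
            logging.exception("Failed to generate report for bill %s", bill_id)
            stats["errors"] += 1
    return stats
```

Because failures leave the bill out of `reports`, a later run retries it automatically — which is why the same oversized bills keep erroring on every pass until their payload size is addressed.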
2025-12-01 12:36:17,262 [INFO] Skipping bill 1892506 - already processed (2077/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1892532 - already processed (2078/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1878820 - already processed (2079/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1884926 - already processed (2080/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1895881 - already processed (2081/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1778284 - already processed (2082/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1770920 - already processed (2083/2605)
2025-12-01 12:36:17,262 [INFO] Skipping bill 1650801 - already processed (2084/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1883378 - already processed (2085/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1683970 - already processed (2086/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1772792 - already processed (2087/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1759623 - already processed (2088/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1760525 - already processed (2089/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1862531 - already processed (2090/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1767461 - already processed (2091/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1776485 - already processed (2092/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1871231 - already processed (2093/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1887711 - already processed (2094/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1893243 - already processed (2095/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1701254 - already processed (2096/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1897456 - already processed (2097/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1775615 - already processed (2098/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1794843 - already processed (2099/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1810720 - already processed (2100/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1894308 - already processed (2101/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1894683 - already processed (2102/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1842456 - already processed (2103/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1885281 - already processed (2104/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1759897 - already processed (2105/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1860079 - already processed (2106/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1746098 - already processed (2107/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1897489 - already processed (2108/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1887287 - already processed (2109/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1885252 - already processed (2110/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1892936 - already processed (2111/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1732925 - already processed (2112/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1746069 - already processed (2113/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1774408 - already processed (2114/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1772182 - already processed (2115/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1884422 - already processed (2116/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1687118 - already processed (2117/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1784726 - already processed (2118/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1762912 - already processed (2119/2605)
2025-12-01 12:36:17,263 [INFO] Skipping bill 1898405 - already processed (2120/2605)
2025-12-01 12:36:17,263 [INFO] Processing 2121/2605: Bill ID 1884189
2025-12-01 12:36:18,700 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:36:18,702 [ERROR] Failed to generate report for bill 1884189: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 553725 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:36:19,713 [INFO] Skipping bill 1899847 - already processed (2122/2605)
2025-12-01 12:36:19,714 [INFO] Skipping bill 1732984 - already processed (2123/2605)
2025-12-01 12:36:19,714 [INFO] Skipping bill 1746089 - already processed (2124/2605)
2025-12-01 12:36:19,714 [INFO] Skipping bill 1766726 - already processed (2125/2605)
2025-12-01 12:36:19,714 [INFO] Skipping bill 1769804 - already processed (2126/2605)
2025-12-01 12:36:19,714 [INFO] Skipping bill 1897097 - already processed (2127/2605)
2025-12-01 12:36:19,715 [INFO] Processing 2128/2605: Bill ID 1774177
2025-12-01 12:36:21,128 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:36:21,137 [ERROR] Failed to generate report for bill 1774177: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 563143 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 563143 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:36:22,149 [INFO] Skipping bill 1757049 - already processed (2129/2605)
2025-12-01 12:36:22,149 [INFO] Skipping bill 1784298 - already processed (2130/2605)
2025-12-01 12:36:22,150 [INFO] Skipping bill 1785108 - already processed (2131/2605)
2025-12-01 12:36:22,150 [INFO] Skipping bill 1772128 - already processed (2132/2605)
2025-12-01 12:36:22,150 [INFO] Skipping bill 1879910 - already processed (2133/2605)
2025-12-01 12:36:22,150 [INFO] Skipping bill 1777717 - already processed (2134/2605)
2025-12-01 12:36:22,150 [INFO] Skipping bill 1843401 - already processed (2135/2605)
2025-12-01 12:36:22,150 [INFO] Skipping bill 1774203 - already processed (2136/2605)
2025-12-01 12:36:22,150 [INFO] Skipping bill 1892268 - already processed (2137/2605)
2025-12-01 12:36:22,150 [INFO] Skipping bill 1774216 - already processed (2138/2605)
2025-12-01 12:36:22,151 [INFO] Skipping bill 1868870 - already processed (2139/2605)
2025-12-01 12:36:22,151 [INFO] Skipping bill 1770792 - already processed (2140/2605)
2025-12-01 12:36:22,151 [INFO] Skipping bill
1894823 - already processed (2141/2605)
2025-12-01 12:36:22,151 [INFO] Skipping bill 1885629 - already processed (2142/2605)
2025-12-01 12:36:22,151 [INFO] Skipping bill 1866980 - already processed (2143/2605)
2025-12-01 12:36:22,151 [INFO] Skipping bill 1826236 - already processed (2144/2605)
2025-12-01 12:36:22,151 [INFO] Skipping bill 1860115 - already processed (2145/2605)
2025-12-01 12:36:22,151 [INFO] Skipping bill 1767424 - already processed (2146/2605)
2025-12-01 12:36:22,151 [INFO] Skipping bill 1877069 - already processed (2147/2605)
2025-12-01 12:36:22,151 [INFO] Skipping bill 1865576 - already processed (2148/2605)
2025-12-01 12:36:22,151 [INFO] Skipping bill 1771076 - already processed (2149/2605)
2025-12-01 12:36:22,151 [INFO] Skipping bill 1755580 - already processed (2150/2605)
2025-12-01 12:36:22,151 [INFO] Skipping bill 1885029 - already processed (2151/2605)
2025-12-01 12:36:22,151 [INFO] Skipping bill 1770955 - already processed (2152/2605)
2025-12-01 12:36:22,151 [INFO] Skipping bill 1772617 - already processed (2153/2605)
2025-12-01 12:36:22,152 [INFO] Skipping bill 1760193 - already processed (2154/2605)
2025-12-01 12:36:22,152 [INFO] Skipping bill 1871212 - already processed (2155/2605)
2025-12-01 12:36:22,152 [INFO] Skipping bill 1887934 - already processed (2156/2605)
2025-12-01 12:36:22,152 [INFO] Skipping bill 1879177 - already processed (2157/2605)
2025-12-01 12:36:22,152 [INFO] Skipping bill 1897536 - already processed (2158/2605)
2025-12-01 12:36:22,152 [INFO] Skipping bill 1854133 - already processed (2159/2605)
2025-12-01 12:36:22,152 [INFO] Skipping bill 1761508 - already processed (2160/2605)
2025-12-01 12:36:22,152 [INFO] Skipping bill 1777284 - already processed (2161/2605)
2025-12-01 12:36:22,152 [INFO] Skipping bill 1774079 - already processed (2162/2605)
2025-12-01 12:36:22,152 [INFO] Skipping bill 1896271 - already processed (2163/2605)
2025-12-01 12:36:22,152 [INFO] Skipping bill 1897312 - already processed (2164/2605)
2025-12-01 12:36:22,152 [INFO] Skipping bill 1774750 - already processed (2165/2605)
2025-12-01 12:36:22,152 [INFO] Skipping bill 1873661 - already processed (2166/2605)
2025-12-01 12:36:22,152 [INFO] Skipping bill 1782516 - already processed (2167/2605)
2025-12-01 12:36:22,152 [INFO] Skipping bill 1782446 - already processed (2168/2605)
2025-12-01 12:36:22,153 [INFO] Skipping bill 1866649 - already processed (2169/2605)
2025-12-01 12:36:22,153 [INFO] Skipping bill 1866664 - already processed (2170/2605)
2025-12-01 12:36:22,153 [INFO] Skipping bill 1707867 - already processed (2171/2605)
2025-12-01 12:36:22,153 [INFO] Skipping bill 1872167 - already processed (2172/2605)
2025-12-01 12:36:22,153 [INFO] Skipping bill 1759875 - already processed (2173/2605)
2025-12-01 12:36:22,153 [INFO] Skipping bill 1789214 - already processed (2174/2605)
2025-12-01 12:36:22,153 [INFO] Skipping bill 1872153 - already processed (2175/2605)
2025-12-01 12:36:22,153 [INFO] Skipping bill 1760229 - already processed (2176/2605)
2025-12-01 12:36:22,153 [INFO] Skipping bill 1774942 - already processed (2177/2605)
2025-12-01 12:36:22,153 [INFO] Skipping bill 1694059 - already processed (2178/2605)
2025-12-01 12:36:22,153 [INFO] Skipping bill 1829219 - already processed (2179/2605)
2025-12-01 12:36:22,153 [INFO] Skipping bill 1679271 - already processed (2180/2605)
2025-12-01 12:36:22,153 [INFO] Skipping bill 1883365 - already processed (2181/2605)
2025-12-01 12:36:22,153 [INFO] Skipping bill 1780777 - already processed (2182/2605)
2025-12-01 12:36:22,153 [INFO] Skipping bill 1707919 - already processed (2183/2605)
2025-12-01 12:36:22,153 [INFO] Skipping bill 1860113 - already processed (2184/2605)
2025-12-01 12:36:22,154 [INFO] Skipping bill 1781933 - already processed (2185/2605)
2025-12-01 12:36:22,154 [INFO] Skipping bill 1751388 - already processed (2186/2605)
2025-12-01 12:36:22,154 [INFO] Skipping bill 1754500 - already processed (2187/2605)
2025-12-01 12:36:22,154 [INFO] Skipping bill
1772123 - already processed (2188/2605)
2025-12-01 12:36:22,154 [INFO] Skipping bill 1892924 - already processed (2189/2605)
2025-12-01 12:36:22,154 [INFO] Skipping bill 1778422 - already processed (2190/2605)
2025-12-01 12:36:22,154 [INFO] Skipping bill 1897294 - already processed (2191/2605)
2025-12-01 12:36:22,154 [INFO] Skipping bill 1769557 - already processed (2192/2605)
2025-12-01 12:36:22,154 [INFO] Skipping bill 1747003 - already processed (2193/2605)
2025-12-01 12:36:22,154 [INFO] Skipping bill 1775420 - already processed (2194/2605)
2025-12-01 12:36:22,154 [INFO] Skipping bill 1885460 - already processed (2195/2605)
2025-12-01 12:36:22,154 [INFO] Skipping bill 1778494 - already processed (2196/2605)
2025-12-01 12:36:22,154 [INFO] Skipping bill 1778507 - already processed (2197/2605)
2025-12-01 12:36:22,154 [INFO] Skipping bill 1746072 - already processed (2198/2605)
2025-12-01 12:36:22,154 [INFO] Skipping bill 1747808 - already processed (2199/2605)
2025-12-01 12:36:22,154 [INFO] Skipping bill 1764055 - already processed (2200/2605)
2025-12-01 12:36:22,154 [INFO] Skipping bill 1765960 - already processed (2201/2605)
2025-12-01 12:36:22,154 [INFO] Skipping bill 1766587 - already processed (2202/2605)
2025-12-01 12:36:22,154 [INFO] Skipping bill 1766736 - already processed (2203/2605)
2025-12-01 12:36:22,154 [INFO] Skipping bill 1771518 - already processed (2204/2605)
2025-12-01 12:36:22,154 [INFO] Skipping bill 1772577 - already processed (2205/2605)
2025-12-01 12:36:22,154 [INFO] Skipping bill 1772933 - already processed (2206/2605)
2025-12-01 12:36:22,154 [INFO] Skipping bill 1773303 - already processed (2207/2605)
2025-12-01 12:36:22,154 [INFO] Skipping bill 1775354 - already processed (2208/2605)
2025-12-01 12:36:22,154 [INFO] Skipping bill 1777649 - already processed (2209/2605)
2025-12-01 12:36:22,154 [INFO] Skipping bill 1783786 - already processed (2210/2605)
2025-12-01 12:36:22,154 [INFO] Skipping bill 1783927 - already processed (2211/2605)
2025-12-01 12:36:22,154 [INFO] Skipping bill 1791735 - already processed (2212/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1791984 - already processed (2213/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1860914 - already processed (2214/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1874964 - already processed (2215/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1876702 - already processed (2216/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1878298 - already processed (2217/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1878970 - already processed (2218/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1878883 - already processed (2219/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1880262 - already processed (2220/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1880301 - already processed (2221/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1880312 - already processed (2222/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1882770 - already processed (2223/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1889897 - already processed (2224/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1892711 - already processed (2225/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1897258 - already processed (2226/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1881528 - already processed (2227/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1782893 - already processed (2228/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1834554 - already processed (2229/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1774082 - already processed (2230/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1783631 - already processed (2231/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1879351 - already processed (2232/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1707921 - already processed (2233/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1872751 - already processed (2234/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill
1848738 - already processed (2235/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1882577 - already processed (2236/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1880072 - already processed (2237/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1880345 - already processed (2238/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1892804 - already processed (2239/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1860940 - already processed (2240/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1766003 - already processed (2241/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1775441 - already processed (2242/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1758619 - already processed (2243/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1894461 - already processed (2244/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1778171 - already processed (2245/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1778004 - already processed (2246/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1832839 - already processed (2247/2605)
2025-12-01 12:36:22,155 [INFO] Skipping bill 1774844 - already processed (2248/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1751449 - already processed (2249/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1751346 - already processed (2250/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1759080 - already processed (2251/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1882756 - already processed (2252/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1882766 - already processed (2253/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1887196 - already processed (2254/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1889949 - already processed (2255/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1887718 - already processed (2256/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1896232 - already processed (2257/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1783562 - already processed (2258/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1681772 - already processed (2259/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1871711 - already processed (2260/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1874986 - already processed (2261/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1772204 - already processed (2262/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1884912 - already processed (2263/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1888175 - already processed (2264/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1832721 - already processed (2265/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1887649 - already processed (2266/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1887704 - already processed (2267/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1881672 - already processed (2268/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1777454 - already processed (2269/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1882397 - already processed (2270/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1766671 - already processed (2271/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1775036 - already processed (2272/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1694305 - already processed (2273/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1863407 - already processed (2274/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1746051 - already processed (2275/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1882537 - already processed (2276/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1873551 - already processed (2277/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1762960 - already processed (2278/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1887303 - already processed (2279/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1887118 - already processed (2280/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1775679 - already processed (2281/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill
1882373 - already processed (2282/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1862520 - already processed (2283/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1886817 - already processed (2284/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1750558 - already processed (2285/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1750336 - already processed (2286/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1694173 - already processed (2287/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1864746 - already processed (2288/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1887915 - already processed (2289/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1774093 - already processed (2290/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1650659 - already processed (2291/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1694050 - already processed (2292/2605)
2025-12-01 12:36:22,156 [INFO] Skipping bill 1771092 - already processed (2293/2605)
2025-12-01 12:36:22,157 [INFO] Skipping bill 1876599 - already processed (2294/2605)
2025-12-01 12:36:22,157 [INFO] Skipping bill 1835788 - already processed (2295/2605)
2025-12-01 12:36:22,157 [INFO] Skipping bill 1782691 - already processed (2296/2605)
2025-12-01 12:36:22,157 [INFO] Skipping bill 1876668 - already processed (2297/2605)
2025-12-01 12:36:22,157 [INFO] Skipping bill 1729737 - already processed (2298/2605)
2025-12-01 12:36:22,157 [INFO] Skipping bill 1766627 - already processed (2299/2605)
2025-12-01 12:36:22,157 [INFO] Skipping bill 1885388 - already processed (2300/2605)
2025-12-01 12:36:22,157 [INFO] Skipping bill 1887130 - already processed (2301/2605)
2025-12-01 12:36:22,157 [INFO] Skipping bill 1775597 - already processed (2302/2605)
2025-12-01 12:36:22,157 [INFO] Skipping bill 1793999 - already processed (2303/2605)
2025-12-01 12:36:22,157 [INFO] Skipping bill 1789198 - already processed (2304/2605)
2025-12-01 12:36:22,157 [INFO] Skipping bill 1888330 - already processed (2305/2605)
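Every failure in this run is the same `context_length_exceeded` error raised from `create_detailed_report` (generate_reports.py, line 109) when the serialized bill exceeds the model's 128,000-token context window. A minimal sketch of a pre-flight size guard that could be added before `chain.invoke` is shown below; `truncate_bill_json`, `MAX_INPUT_TOKENS`, and the 4-characters-per-token heuristic are all illustrative assumptions, not part of the original script.

```python
# Hypothetical guard for create_detailed_report: shrink the serialized bill
# below the context window instead of letting the API reject it with a 400.
import json

MAX_INPUT_TOKENS = 100_000  # assumed headroom under the 128k-token context limit
CHARS_PER_TOKEN = 4         # rough heuristic; a real tokenizer (e.g. tiktoken) is more accurate

def truncate_bill_json(bill: dict, max_tokens: int = MAX_INPUT_TOKENS) -> str:
    """Serialize a bill dict, trimming its longest string fields until the
    result fits the character budget implied by max_tokens."""
    budget = max_tokens * CHARS_PER_TOKEN
    text = json.dumps(bill)
    if len(text) <= budget:
        return text
    trimmed = dict(bill)
    # Cut the largest string values first -- usually the full bill texts --
    # capping each trimmed field at a quarter of the budget.
    for key in sorted(trimmed, key=lambda k: len(str(trimmed[k])), reverse=True):
        if isinstance(trimmed[key], str):
            trimmed[key] = trimmed[key][: budget // 4] + "...[truncated]"
        text = json.dumps(trimmed)
        if len(text) <= budget:
            break
    return text
```

With a guard like this in place, oversized bills such as 1884189 (553,725 tokens) would be trimmed, or at least detected and skipped with a clear log message, before the request is sent.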
2025-12-01 12:36:22,157 [INFO] Skipping bill 1882746 - already processed (2306/2605)
2025-12-01 12:36:22,157 [INFO] Skipping bill 1694182 - already processed (2307/2605)
2025-12-01 12:36:22,157 [INFO] Skipping bill 1860920 - already processed (2308/2605)
2025-12-01 12:36:22,157 [INFO] Skipping bill 1774448 - already processed (2309/2605)
2025-12-01 12:36:22,157 [INFO] Skipping bill 1774405 - already processed (2310/2605)
2025-12-01 12:36:22,157 [INFO] Skipping bill 1876990 - already processed (2311/2605)
2025-12-01 12:36:22,157 [INFO] Skipping bill 1876679 - already processed (2312/2605)
2025-12-01 12:36:22,157 [INFO] Skipping bill 1881973 - already processed (2313/2605)
2025-12-01 12:36:22,157 [INFO] Skipping bill 1717622 - already processed (2314/2605)
2025-12-01 12:36:22,157 [INFO] Skipping bill 1885510 - already processed (2315/2605)
2025-12-01 12:36:22,157 [INFO] Skipping bill 1871269 - already processed (2316/2605)
2025-12-01 12:36:22,157 [INFO] Skipping bill 1774266 - already processed (2317/2605)
2025-12-01 12:36:22,157 [INFO] Skipping bill 1785924 - already processed (2318/2605)
2025-12-01 12:36:22,157 [INFO] Skipping bill 1779428 - already processed (2319/2605)
2025-12-01 12:36:22,157 [INFO] Skipping bill 1775195 - already processed (2320/2605)
2025-12-01 12:36:22,157 [INFO] Skipping bill 1775134 - already processed (2321/2605)
2025-12-01 12:36:22,157 [INFO] Skipping bill 1743524 - already processed (2322/2605)
2025-12-01 12:36:22,157 [INFO] Skipping bill 1757473 - already processed (2323/2605)
2025-12-01 12:36:22,157 [INFO] Processing 2324/2605: Bill ID 1857970
2025-12-01 12:36:22,963 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:36:22,965 [ERROR] Failed to generate report for bill 1857970: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 267230 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 267230 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:36:23,973 [INFO] Skipping bill 1883678 - already processed (2325/2605)
2025-12-01 12:36:23,974 [INFO] Processing 2326/2605: Bill ID 1897245
2025-12-01 12:36:25,562 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:36:25,564 [ERROR] Failed to generate report for bill 1897245: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 614802 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 614802 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:36:26,574 [INFO] Skipping bill 1894517 - already processed (2327/2605)
2025-12-01 12:36:26,574 [INFO] Processing 2328/2605: Bill ID 1898241
2025-12-01 12:36:27,607 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:36:27,615 [ERROR] Failed to generate report for bill 1898241: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 355244 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:36:28,621 [INFO] Processing 2329/2605: Bill ID 1879854
2025-12-01 12:36:29,659 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:36:29,660 [ERROR] Failed to generate report for bill 1879854: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 380288 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:36:30,670 [INFO] Skipping bill 1888278 - already processed (2330/2605)
2025-12-01 12:36:30,670 [INFO] Skipping bill 1879169 - already processed (2331/2605)
2025-12-01 12:36:30,670 [INFO] Skipping bill 1860989 - already processed (2332/2605)
2025-12-01 12:36:30,670 [INFO] Skipping bill 1758024 - already processed (2333/2605)
2025-12-01 12:36:30,670 [INFO] Skipping bill 1863932 - already processed (2334/2605)
2025-12-01 12:36:30,671 [INFO] Processing 2335/2605: Bill ID 1771174
2025-12-01 12:36:31,500 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:36:31,500 [ERROR] Failed to generate report for bill 1771174: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 305590 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:36:32,507 [INFO] Skipping bill 1772600 - already processed (2336/2605)
2025-12-01 12:36:32,507 [INFO] Skipping bill 1760911 - already processed (2337/2605)
2025-12-01 12:36:32,507 [INFO] Skipping bill 1789291 - already processed (2338/2605)
2025-12-01 12:36:32,507 [INFO] Skipping bill 1764694 - already processed (2339/2605)
2025-12-01 12:36:32,507 [INFO] Skipping bill 1764770 - already processed (2340/2605)
2025-12-01 12:36:32,507 [INFO] Skipping bill 1884949 - already processed (2341/2605)
2025-12-01 12:36:32,507 [INFO] Processing 2342/2605: Bill ID 1897528
2025-12-01 12:36:32,970 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:36:32,971 [ERROR] Failed to generate report for bill 1897528: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 136190 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:36:33,980 [INFO] Processing 2343/2605: Bill ID 1898192
2025-12-01 12:36:34,469 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:36:34,471 [ERROR] Failed to generate report for bill 1898192: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 134736 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:36:35,479 [INFO] Skipping bill 1774988 - already processed (2344/2605)
2025-12-01 12:36:35,480 [INFO] Processing 2345/2605: Bill ID 1892419
2025-12-01 12:36:36,415 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 507 Insufficient Storage"
2025-12-01 12:36:36,416 [INFO] Retrying request to /chat/completions in 0.493545 seconds
2025-12-01 12:36:38,155 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:36:38,157 [ERROR] Failed to generate report for bill 1892419: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 553296 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:36:39,165 [INFO] Processing 2346/2605: Bill ID 1884946
2025-12-01 12:36:40,818 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:36:40,821 [ERROR] Failed to generate report for bill 1884946: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 691025 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:36:41,829 [INFO] Processing 2347/2605: Bill ID 1885067
2025-12-01 12:36:43,561 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:36:43,563 [ERROR] Failed to generate report for bill 1885067: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 693396 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:36:44,571 [INFO] Skipping bill 1879669 - already processed (2348/2605)
2025-12-01 12:36:44,571 [INFO] Processing 2349/2605: Bill ID 1897089
2025-12-01 12:36:45,323 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:36:45,324 [ERROR] Failed to generate report for bill 1897089: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 228560 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:36:46,333 [INFO] Skipping bill 2041135 - already processed (2350/2605)
2025-12-01 12:36:46,334 [INFO] Skipping bill 2037217 - already processed (2351/2605)
2025-12-01 12:36:46,334 [INFO] Skipping bill 2022578 - already processed (2352/2605)
2025-12-01 12:36:46,334 [INFO] Skipping bill 2045360 - already processed (2353/2605)
2025-12-01 12:36:46,334 [INFO] Skipping bill 2044380 - already processed (2354/2605)
2025-12-01 12:36:46,334 [INFO] Skipping bill 1987991 - already processed (2355/2605)
2025-12-01 12:36:46,335 [INFO] Skipping bill 2040591 - already processed (2356/2605)
2025-12-01 12:36:46,335 [INFO] Skipping bill 2044133 - already processed (2357/2605)
2025-12-01 12:36:46,335 [INFO] Skipping bill 2040128 - already processed (2358/2605)
2025-12-01 12:36:46,335 [INFO] Skipping bill 2022459 - already processed (2359/2605)
2025-12-01 12:36:46,335 [INFO] Skipping bill 2046890 - already processed (2360/2605)
2025-12-01 12:36:46,335 [INFO] Skipping bill 1948171 - already processed (2361/2605)
2025-12-01 12:36:46,335 [INFO] Skipping bill
2047758 - already processed (2362/2605)
2025-12-01 12:36:46,336 [INFO] Skipping bill 2029224 - already processed (2363/2605)
2025-12-01 12:36:46,336 [INFO] Skipping bill 2044676 - already processed (2364/2605)
2025-12-01 12:36:46,336 [INFO] Skipping bill 2041169 - already processed (2365/2605)
2025-12-01 12:36:46,336 [INFO] Skipping bill 2043072 - already processed (2366/2605)
2025-12-01 12:36:46,336 [INFO] Skipping bill 2015628 - already processed (2367/2605)
2025-12-01 12:36:46,336 [INFO] Skipping bill 2029917 - already processed (2368/2605)
2025-12-01 12:36:46,336 [INFO] Skipping bill 2029601 - already processed (2369/2605)
2025-12-01 12:36:46,336 [INFO] Skipping bill 1988067 - already processed (2370/2605)
2025-12-01 12:36:46,336 [INFO] Skipping bill 1964814 - already processed (2371/2605)
2025-12-01 12:36:46,336 [INFO] Skipping bill 2043727 - already processed (2372/2605)
2025-12-01 12:36:46,336 [INFO] Skipping bill 1988016 - already processed (2373/2605)
2025-12-01 12:36:46,336 [INFO] Skipping bill 2037684 - already processed (2374/2605)
2025-12-01 12:36:46,336 [INFO] Skipping bill 2029576 - already processed (2375/2605)
2025-12-01 12:36:46,337 [INFO] Skipping bill 2008640 - already processed (2376/2605)
2025-12-01 12:36:46,337 [INFO] Skipping bill 2042761 - already processed (2377/2605)
2025-12-01 12:36:46,337 [INFO] Skipping bill 2043628 - already processed (2378/2605)
2025-12-01 12:36:46,337 [INFO] Skipping bill 2039925 - already processed (2379/2605)
2025-12-01 12:36:46,337 [INFO] Skipping bill 1990438 - already processed (2380/2605)
2025-12-01 12:36:46,337 [INFO] Skipping bill 2014950 - already processed (2381/2605)
2025-12-01 12:36:46,337 [INFO] Skipping bill 2046871 - already processed (2382/2605)
2025-12-01 12:36:46,337 [INFO] Skipping bill 2008541 - already processed (2383/2605)
2025-12-01 12:36:46,337 [INFO] Skipping bill 2019807 - already processed (2384/2605)
2025-12-01 12:36:46,337 [INFO] Skipping bill 2032195 - already processed (2385/2605)
2025-12-01 12:36:46,337 [INFO] Skipping bill 2032174 - already processed (2386/2605)
2025-12-01 12:36:46,337 [INFO] Skipping bill 2053144 - already processed (2387/2605)
2025-12-01 12:36:46,337 [INFO] Skipping bill 2045181 - already processed (2388/2605)
2025-12-01 12:36:46,338 [INFO] Skipping bill 2035367 - already processed (2389/2605)
2025-12-01 12:36:46,338 [INFO] Skipping bill 2022504 - already processed (2390/2605)
2025-12-01 12:36:46,338 [INFO] Skipping bill 2051717 - already processed (2391/2605)
2025-12-01 12:36:46,338 [INFO] Skipping bill 2040216 - already processed (2392/2605)
2025-12-01 12:36:46,338 [INFO] Skipping bill 2038243 - already processed (2393/2605)
2025-12-01 12:36:46,338 [INFO] Skipping bill 2038240 - already processed (2394/2605)
2025-12-01 12:36:46,338 [INFO] Skipping bill 1958579 - already processed (2395/2605)
2025-12-01 12:36:46,338 [INFO] Skipping bill 2041151 - already processed (2396/2605)
2025-12-01 12:36:46,338 [INFO] Skipping bill 2040068 - already processed (2397/2605)
2025-12-01 12:36:46,338 [INFO] Skipping bill 2051901 - already processed (2398/2605)
2025-12-01 12:36:46,338 [INFO] Skipping bill 2035878 - already processed (2399/2605)
2025-12-01 12:36:46,338 [INFO] Skipping bill 2043698 - already processed (2400/2605)
2025-12-01 12:36:46,338 [INFO] Skipping bill 2043764 - already processed (2401/2605)
2025-12-01 12:36:46,338 [INFO] Skipping bill 2047702 - already processed (2402/2605)
2025-12-01 12:36:46,338 [INFO] Skipping bill 2034541 - already processed (2403/2605)
2025-12-01 12:36:46,339 [INFO] Skipping bill 2036108 - already processed (2404/2605)
2025-12-01 12:36:46,339 [INFO] Skipping bill 2052002 - already processed (2405/2605)
2025-12-01 12:36:46,339 [INFO] Skipping bill 2036914 - already processed (2406/2605)
2025-12-01 12:36:46,339 [INFO] Skipping bill 2032053 - already processed (2407/2605)
2025-12-01 12:36:46,339 [INFO] Skipping bill 2032068 - already processed (2408/2605)
2025-12-01 12:36:46,339 [INFO] Skipping bill 2045357 - already processed (2409/2605)
2025-12-01 12:36:46,339 [INFO] Skipping bill 2043047 - already processed (2410/2605)
2025-12-01 12:36:46,339 [INFO] Skipping bill 2040306 - already processed (2411/2605)
2025-12-01 12:36:46,339 [INFO] Skipping bill 1916986 - already processed (2412/2605)
2025-12-01 12:36:46,339 [INFO] Skipping bill 2039821 - already processed (2413/2605)
2025-12-01 12:36:46,339 [INFO] Skipping bill 2047752 - already processed (2414/2605)
2025-12-01 12:36:46,339 [INFO] Skipping bill 2046891 - already processed (2415/2605)
2025-12-01 12:36:46,339 [INFO] Skipping bill 2040880 - already processed (2416/2605)
2025-12-01 12:36:46,339 [INFO] Skipping bill 2040851 - already processed (2417/2605)
2025-12-01 12:36:46,339 [INFO] Skipping bill 2043722 - already processed (2418/2605)
2025-12-01 12:36:46,339 [INFO] Skipping bill 1987950 - already processed (2419/2605)
2025-12-01 12:36:46,339 [INFO] Skipping bill 2040439 - already processed (2420/2605)
2025-12-01 12:36:46,339 [INFO] Skipping bill 1901865 - already processed (2421/2605)
2025-12-01 12:36:46,339 [INFO] Skipping bill 1905283 - already processed (2422/2605)
2025-12-01 12:36:46,340 [INFO] Skipping bill 2042107 - already processed (2423/2605)
2025-12-01 12:36:46,340 [INFO] Skipping bill 1986270 - already processed (2424/2605)
2025-12-01 12:36:46,340 [INFO] Skipping bill 2044713 - already processed (2425/2605)
2025-12-01 12:36:46,340 [INFO] Skipping bill 2041468 - already processed (2426/2605)
2025-12-01 12:36:46,340 [INFO] Skipping bill 1983900 - already processed (2427/2605)
2025-12-01 12:36:46,340 [INFO] Skipping bill 2020217 - already processed (2428/2605)
2025-12-01 12:36:46,340 [INFO] Skipping bill 2038216 - already processed (2429/2605)
2025-12-01 12:36:46,340 [INFO] Skipping bill 2043604 - already processed (2430/2605)
2025-12-01 12:36:46,340 [INFO] Skipping bill 2045365 - already processed (2431/2605)
2025-12-01 12:36:46,340 [INFO] Skipping bill 2043961 - already processed (2432/2605)
2025-12-01 12:36:46,340 [INFO] Skipping bill 2044138 - already processed (2433/2605)
2025-12-01 12:36:46,340 [INFO] Skipping bill 2040354 - already processed (2434/2605)
2025-12-01 12:36:46,340 [INFO] Processing 2435/2605: Bill ID 2053157
2025-12-01 12:36:56,279 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2025-12-01 12:36:56,283 [INFO] Skipping bill 1984221 - already processed (2436/2605)
2025-12-01 12:36:56,283 [INFO] Skipping bill 2033224 - already processed (2437/2605)
2025-12-01 12:36:56,283 [INFO] Skipping bill 2033186 - already processed (2438/2605)
2025-12-01 12:36:56,283 [INFO] Skipping bill 1970505 - already processed (2439/2605)
2025-12-01 12:36:56,283 [INFO] Skipping bill 2036132 - already processed (2440/2605)
2025-12-01 12:36:56,283 [INFO] Skipping bill 2033542 - already processed (2441/2605)
2025-12-01 12:36:56,283 [INFO] Skipping bill 2027361 - already processed (2442/2605)
2025-12-01 12:36:56,283 [INFO] Skipping bill 2040866 - already processed (2443/2605)
2025-12-01 12:36:56,283 [INFO] Skipping bill 2043357 - already processed (2444/2605)
2025-12-01 12:36:56,284 [INFO] Skipping bill 2041757 - already processed (2445/2605)
2025-12-01 12:36:56,284 [INFO] Skipping bill 2042653 - already processed (2446/2605)
2025-12-01 12:36:56,284 [INFO] Skipping bill 2043161 - already processed (2447/2605)
2025-12-01 12:36:56,284 [INFO] Skipping bill 2052989 - already processed (2448/2605)
2025-12-01 12:36:56,284 [INFO] Skipping bill 1965963 - already processed (2449/2605)
2025-12-01 12:36:56,284 [INFO] Skipping bill 2045735 - already processed (2450/2605)
2025-12-01 12:36:56,284 [INFO] Skipping bill 1999388 - already processed (2451/2605)
2025-12-01 12:36:56,284 [INFO] Skipping bill 2051352 - already processed (2452/2605)
2025-12-01 12:36:56,284 [INFO] Processing 2453/2605: Bill ID 2039530
2025-12-01 12:36:57,820 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:36:57,822 [ERROR] Failed to generate report for bill 2039530: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 640978 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:36:58,829 [INFO] Skipping bill 2051886 - already processed (2454/2605)
2025-12-01 12:36:58,829 [INFO] Processing 2455/2605: Bill ID 2043562
2025-12-01 12:37:11,334 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2025-12-01 12:37:11,338 [INFO] Skipping bill 1970493 - already processed (2456/2605)
2025-12-01 12:37:11,338 [INFO] Skipping bill 2037978 - already processed (2457/2605)
2025-12-01 12:37:11,338 [INFO] Skipping bill 2040318 - already processed (2458/2605)
2025-12-01 12:37:11,338 [INFO] Skipping bill 2041104 - already processed (2459/2605)
2025-12-01 12:37:11,338 [INFO] Skipping bill 2043947 - already processed (2460/2605)
2025-12-01 12:37:11,338 [INFO] Skipping bill 2038111 - already processed (2461/2605)
2025-12-01 12:37:11,338 [INFO] Skipping bill 1982722 - already processed (2462/2605)
2025-12-01 12:37:11,339 [INFO] Skipping bill 2043896 - already processed (2463/2605)
2025-12-01 12:37:11,339 [INFO] Skipping bill 2012870 - already processed (2464/2605)
2025-12-01 12:37:11,339 [INFO] Skipping
bill 2007066 - already processed (2465/2605) 2025-12-01 12:37:11,339 [INFO] Skipping bill 1968860 - already processed (2466/2605) 2025-12-01 12:37:11,339 [INFO] Skipping bill 2029307 - already processed (2467/2605) 2025-12-01 12:37:11,339 [INFO] Skipping bill 2041255 - already processed (2468/2605) 2025-12-01 12:37:11,339 [INFO] Skipping bill 2033191 - already processed (2469/2605) 2025-12-01 12:37:11,339 [INFO] Skipping bill 2043715 - already processed (2470/2605) 2025-12-01 12:37:11,339 [INFO] Skipping bill 2036439 - already processed (2471/2605) 2025-12-01 12:37:11,339 [INFO] Skipping bill 1968282 - already processed (2472/2605) 2025-12-01 12:37:11,339 [INFO] Skipping bill 2039688 - already processed (2473/2605) 2025-12-01 12:37:11,339 [INFO] Skipping bill 2038212 - already processed (2474/2605) 2025-12-01 12:37:11,339 [INFO] Skipping bill 1987966 - already processed (2475/2605) 2025-12-01 12:37:11,339 [INFO] Skipping bill 2031847 - already processed (2476/2605) 2025-12-01 12:37:11,339 [INFO] Skipping bill 1970497 - already processed (2477/2605) 2025-12-01 12:37:11,339 [INFO] Skipping bill 1963353 - already processed (2478/2605) 2025-12-01 12:37:11,339 [INFO] Skipping bill 2046183 - already processed (2479/2605) 2025-12-01 12:37:11,339 [INFO] Skipping bill 2005587 - already processed (2480/2605) 2025-12-01 12:37:11,339 [INFO] Skipping bill 2039178 - already processed (2481/2605) 2025-12-01 12:37:11,339 [INFO] Skipping bill 2041269 - already processed (2482/2605) 2025-12-01 12:37:11,339 [INFO] Skipping bill 2043688 - already processed (2483/2605) 2025-12-01 12:37:11,339 [INFO] Skipping bill 1927158 - already processed (2484/2605) 2025-12-01 12:37:11,340 [INFO] Skipping bill 1987972 - already processed (2485/2605) 2025-12-01 12:37:11,340 [INFO] Skipping bill 2035895 - already processed (2486/2605) 2025-12-01 12:37:11,340 [INFO] Skipping bill 2037256 - already processed (2487/2605) 2025-12-01 12:37:11,340 [INFO] Skipping bill 2043043 - already processed (2488/2605) 
2025-12-01 12:37:11,340 [INFO] Skipping bill 2031888 - already processed (2489/2605) 2025-12-01 12:37:11,340 [INFO] Skipping bill 2043344 - already processed (2490/2605) 2025-12-01 12:37:11,340 [INFO] Skipping bill 2043890 - already processed (2491/2605) 2025-12-01 12:37:11,340 [INFO] Skipping bill 1936780 - already processed (2492/2605) 2025-12-01 12:37:11,340 [INFO] Skipping bill 2023141 - already processed (2493/2605) 2025-12-01 12:37:11,340 [INFO] Skipping bill 2022467 - already processed (2494/2605) 2025-12-01 12:37:11,340 [INFO] Skipping bill 2022582 - already processed (2495/2605) 2025-12-01 12:37:11,340 [INFO] Skipping bill 1970488 - already processed (2496/2605) 2025-12-01 12:37:11,340 [INFO] Skipping bill 1988006 - already processed (2497/2605) 2025-12-01 12:37:11,340 [INFO] Skipping bill 1933954 - already processed (2498/2605) 2025-12-01 12:37:11,340 [INFO] Skipping bill 1955921 - already processed (2499/2605) 2025-12-01 12:37:11,340 [INFO] Skipping bill 1963338 - already processed (2500/2605) 2025-12-01 12:37:11,340 [INFO] Skipping bill 2015697 - already processed (2501/2605) 2025-12-01 12:37:11,340 [INFO] Skipping bill 2020008 - already processed (2502/2605) 2025-12-01 12:37:11,340 [INFO] Skipping bill 2021940 - already processed (2503/2605) 2025-12-01 12:37:11,340 [INFO] Skipping bill 2022593 - already processed (2504/2605) 2025-12-01 12:37:11,340 [INFO] Skipping bill 2026569 - already processed (2505/2605) 2025-12-01 12:37:11,340 [INFO] Skipping bill 2027464 - already processed (2506/2605) 2025-12-01 12:37:11,340 [INFO] Skipping bill 2018800 - already processed (2507/2605) 2025-12-01 12:37:11,340 [INFO] Skipping bill 2028784 - already processed (2508/2605) 2025-12-01 12:37:11,341 [INFO] Skipping bill 2029580 - already processed (2509/2605) 2025-12-01 12:37:11,341 [INFO] Skipping bill 2031938 - already processed (2510/2605) 2025-12-01 12:37:11,341 [INFO] Skipping bill 2032128 - already processed (2511/2605) 2025-12-01 12:37:11,341 [INFO] Skipping bill 
1947775 - already processed (2512/2605) 2025-12-01 12:37:11,341 [INFO] Skipping bill 2035420 - already processed (2513/2605) 2025-12-01 12:37:11,341 [INFO] Skipping bill 2037229 - already processed (2514/2605) 2025-12-01 12:37:11,341 [INFO] Skipping bill 2039570 - already processed (2515/2605) 2025-12-01 12:37:11,341 [INFO] Skipping bill 2042103 - already processed (2516/2605) 2025-12-01 12:37:11,341 [INFO] Skipping bill 2043758 - already processed (2517/2605) 2025-12-01 12:37:11,341 [INFO] Skipping bill 2046719 - already processed (2518/2605) 2025-12-01 12:37:11,341 [INFO] Skipping bill 2052024 - already processed (2519/2605) 2025-12-01 12:37:11,341 [INFO] Skipping bill 2052050 - already processed (2520/2605) 2025-12-01 12:37:11,341 [INFO] Skipping bill 1979616 - already processed (2521/2605) 2025-12-01 12:37:11,341 [INFO] Processing 2522/2605: Bill ID 2053486 2025-12-01 12:37:22,466 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK" 2025-12-01 12:37:22,476 [INFO] Skipping bill 2019782 - already processed (2523/2605) 2025-12-01 12:37:22,477 [INFO] Skipping bill 2017847 - already processed (2524/2605) 2025-12-01 12:37:22,477 [INFO] Skipping bill 2018869 - already processed (2525/2605) 2025-12-01 12:37:22,477 [INFO] Skipping bill 2040352 - already processed (2526/2605) 2025-12-01 12:37:22,478 [INFO] Skipping bill 2029980 - already processed (2527/2605) 2025-12-01 12:37:22,479 [INFO] Skipping bill 2018578 - already processed (2528/2605) 2025-12-01 12:37:22,479 [INFO] Skipping bill 2043696 - already processed (2529/2605) 2025-12-01 12:37:22,480 [INFO] Skipping bill 2008600 - already processed (2530/2605) 2025-12-01 12:37:22,481 [INFO] Skipping bill 2037247 - already processed (2531/2605) 2025-12-01 12:37:22,481 [INFO] Skipping bill 2037249 - already processed (2532/2605) 2025-12-01 12:37:22,481 [INFO] Skipping bill 2035609 - already processed (2533/2605) 2025-12-01 12:37:22,481 [INFO] Skipping bill 2038921 - already processed 
(2534/2605) 2025-12-01 12:37:22,481 [INFO] Skipping bill 2053374 - already processed (2535/2605) 2025-12-01 12:37:22,481 [INFO] Skipping bill 2021715 - already processed (2536/2605) 2025-12-01 12:37:22,481 [INFO] Skipping bill 2021641 - already processed (2537/2605) 2025-12-01 12:37:22,481 [INFO] Skipping bill 1901818 - already processed (2538/2605) 2025-12-01 12:37:22,481 [INFO] Skipping bill 2023062 - already processed (2539/2605) 2025-12-01 12:37:22,481 [INFO] Skipping bill 2044841 - already processed (2540/2605) 2025-12-01 12:37:22,481 [INFO] Skipping bill 2043173 - already processed (2541/2605) 2025-12-01 12:37:22,481 [INFO] Skipping bill 1948187 - already processed (2542/2605) 2025-12-01 12:37:22,481 [INFO] Skipping bill 2038257 - already processed (2543/2605) 2025-12-01 12:37:22,481 [INFO] Skipping bill 2053381 - already processed (2544/2605) 2025-12-01 12:37:22,481 [INFO] Processing 2545/2605: Bill ID 2053499 2025-12-01 12:38:11,039 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK" 2025-12-01 12:38:11,043 [INFO] Processing 2546/2605: Bill ID 2053841 2025-12-01 12:38:20,149 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK" 2025-12-01 12:38:20,153 [INFO] Processing 2547/2605: Bill ID 2054336 2025-12-01 12:38:28,054 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK" 2025-12-01 12:38:28,061 [INFO] Processing 2548/2605: Bill ID 2054344 2025-12-01 12:38:38,274 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK" 2025-12-01 12:38:38,280 [INFO] Skipping bill 2037277 - already processed (2549/2605) 2025-12-01 12:38:38,280 [INFO] Skipping bill 1941772 - already processed (2550/2605) 2025-12-01 12:38:38,280 [INFO] Skipping bill 2043199 - already processed (2551/2605) 2025-12-01 12:38:38,280 [INFO] Skipping bill 2041162 - already processed (2552/2605) 2025-12-01 12:38:38,280 [INFO] Skipping bill 2038970 - already processed 
(2553/2605)
2025-12-01 12:38:38,280 [INFO] Skipping bill 2039918 - already processed (2554/2605)
2025-12-01 12:38:38,280 [INFO] Skipping bill 2032140 - already processed (2555/2605)
2025-12-01 12:38:38,280 [INFO] Skipping bill 2029941 - already processed (2556/2605)
2025-12-01 12:38:38,280 [INFO] Skipping bill 2038420 - already processed (2557/2605)
2025-12-01 12:38:38,281 [INFO] Skipping bill 1943770 - already processed (2558/2605)
2025-12-01 12:38:38,281 [INFO] Skipping bill 1979653 - already processed (2559/2605)
2025-12-01 12:38:38,281 [INFO] Skipping bill 1970677 - already processed (2560/2605)
2025-12-01 12:38:38,281 [INFO] Skipping bill 1988332 - already processed (2561/2605)
2025-12-01 12:38:38,281 [INFO] Skipping bill 1939613 - already processed (2562/2605)
2025-12-01 12:38:38,281 [INFO] Skipping bill 2043104 - already processed (2563/2605)
2025-12-01 12:38:38,281 [INFO] Skipping bill 2000425 - already processed (2564/2605)
2025-12-01 12:38:38,281 [INFO] Skipping bill 2028805 - already processed (2565/2605)
2025-12-01 12:38:38,281 [INFO] Skipping bill 2023111 - already processed (2566/2605)
2025-12-01 12:38:38,281 [INFO] Processing 2567/2605: Bill ID 2032901
2025-12-01 12:38:39,398 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:38:39,400 [ERROR] Failed to generate report for bill 2032901: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 455298 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 455298 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:38:40,410 [INFO] Skipping bill 2051603 - already processed (2568/2605)
2025-12-01 12:38:40,411 [INFO] Skipping bill 2036437 - already processed (2569/2605)
2025-12-01 12:38:40,411 [INFO] Skipping bill 2036475 - already processed (2570/2605)
2025-12-01 12:38:40,411 [INFO] Skipping bill 2032059 - already processed (2571/2605)
2025-12-01 12:38:40,411 [INFO] Skipping bill 2007053 - already processed (2572/2605)
2025-12-01 12:38:40,411 [INFO] Skipping bill 2000456 - already processed (2573/2605)
2025-12-01 12:38:40,411 [INFO] Skipping bill 1958611 - already processed (2574/2605)
2025-12-01 12:38:40,411 [INFO] Skipping bill 2016811 - already processed (2575/2605)
2025-12-01 12:38:40,411 [INFO] Skipping bill 1926891 - already processed (2576/2605)
2025-12-01 12:38:40,411 [INFO] Skipping bill 1943799 - already processed (2577/2605)
2025-12-01 12:38:40,411 [INFO] Skipping bill 2039061 - already processed (2578/2605)
2025-12-01 12:38:40,411 [INFO] Skipping bill 1961580 - already processed (2579/2605)
2025-12-01 12:38:40,411 [INFO] Skipping bill
1927000 - already processed (2580/2605)
2025-12-01 12:38:40,411 [INFO] Skipping bill 2023233 - already processed (2581/2605)
2025-12-01 12:38:40,411 [INFO] Skipping bill 1947802 - already processed (2582/2605)
2025-12-01 12:38:40,412 [INFO] Skipping bill 2022615 - already processed (2583/2605)
2025-12-01 12:38:40,412 [INFO] Skipping bill 2022439 - already processed (2584/2605)
2025-12-01 12:38:40,412 [INFO] Skipping bill 2033390 - already processed (2585/2605)
2025-12-01 12:38:40,412 [INFO] Skipping bill 2026636 - already processed (2586/2605)
2025-12-01 12:38:40,412 [INFO] Skipping bill 2047438 - already processed (2587/2605)
2025-12-01 12:38:40,412 [INFO] Skipping bill 2036925 - already processed (2588/2605)
2025-12-01 12:38:40,412 [INFO] Skipping bill 1963365 - already processed (2589/2605)
2025-12-01 12:38:40,412 [INFO] Skipping bill 2043448 - already processed (2590/2605)
2025-12-01 12:38:40,412 [INFO] Skipping bill 1994349 - already processed (2591/2605)
2025-12-01 12:38:40,412 [INFO] Skipping bill 2023224 - already processed (2592/2605)
2025-12-01 12:38:40,412 [INFO] Skipping bill 2028140 - already processed (2593/2605)
2025-12-01 12:38:40,412 [INFO] Skipping bill 2032003 - already processed (2594/2605)
2025-12-01 12:38:40,412 [INFO] Skipping bill 2039157 - already processed (2595/2605)
2025-12-01 12:38:40,412 [INFO] Skipping bill 2044179 - already processed (2596/2605)
2025-12-01 12:38:40,413 [INFO] Skipping bill 2035673 - already processed (2597/2605)
2025-12-01 12:38:40,413 [INFO] Skipping bill 2044473 - already processed (2598/2605)
2025-12-01 12:38:40,413 [INFO] Processing 2599/2605: Bill ID 1990400
2025-12-01 12:38:41,172 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:38:41,174 [ERROR] Failed to generate report for bill 1990400: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 256134 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 256134 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:38:42,183 [INFO] Skipping bill 2027724 - already processed (2600/2605)
2025-12-01 12:38:42,183 [INFO] Processing 2601/2605: Bill ID 2028171
2025-12-01 12:38:42,613 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:38:42,615 [ERROR] Failed to generate report for bill 2028171: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 134543 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 134543 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:38:43,624 [INFO] Processing 2602/2605: Bill ID 1966444
2025-12-01 12:38:44,213 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:38:44,214 [ERROR] Failed to generate report for bill 1966444: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 171945 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 171945 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:38:45,224 [INFO] Processing 2603/2605: Bill ID 2038906
2025-12-01 12:38:45,747 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:38:45,749 [ERROR] Failed to generate report for bill 2038906: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 192175 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 192175 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:38:46,760 [INFO] Processing 2604/2605: Bill ID 1994544
2025-12-01 12:38:47,386 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 12:38:47,387 [ERROR] Failed to generate report for bill 1994544: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 188475 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 188475 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 12:38:48,396 [INFO] Skipping bill 2041289 - already processed (2605/2605)
2025-12-01 12:38:48,440 [INFO] Saved 2605 reports to data/bill_reports.json
2025-12-01 12:38:48,440 [INFO] Report generation complete!
2025-12-01 12:38:48,440 [INFO] Total bills: 2605
2025-12-01 12:38:48,440 [INFO] Successfully processed: 9
2025-12-01 12:38:48,440 [INFO] Skipped (already done): 2478
2025-12-01 12:38:48,440 [INFO] Errors: 118
2025-12-01 13:12:10,853 [INFO] Loaded 2605 existing reports from data/bill_reports.json
2025-12-01 13:12:10,856 [INFO] Starting report generation for 2605 bills
2025-12-01 13:12:10,856 [INFO] Skipping bill 1769530 - already processed (1/2605)
2025-12-01 13:12:10,856 [INFO] Skipping bill 1765118 - already processed (2/2605)
2025-12-01 13:12:10,856 [INFO] Skipping bill 1745017 - already processed (3/2605)
2025-12-01 13:12:10,856 [INFO] Skipping bill 1745230 - already processed (4/2605)
2025-12-01 13:12:10,856 [INFO] Skipping bill 1847915 - already processed (5/2605)
2025-12-01 13:12:10,856 [INFO] Skipping bill 1847210 - already processed (6/2605)
2025-12-01 13:12:10,856 [INFO] Skipping bill 1847980 - already processed (7/2605)
2025-12-01 13:12:10,856 [INFO] Skipping bill 1840627 - already processed (8/2605)
2025-12-01 13:12:10,856 [INFO] Skipping bill 1840340 - already processed (9/2605)
2025-12-01 13:12:10,857 [INFO] Skipping bill 2019785 - already processed (10/2605)
2025-12-01 13:12:10,857 [INFO] Skipping bill 1983607 - already processed (11/2605)
2025-12-01 13:12:10,857 [INFO] Skipping bill 2019702 - already processed (12/2605)
2025-12-01 13:12:10,857 [INFO] Skipping bill 1987220 - already processed (13/2605)
2025-12-01 13:12:10,857 [INFO] Skipping bill 2022389 - already processed (14/2605)
2025-12-01 13:12:10,857 [INFO] Skipping bill 1959465 - already processed (15/2605)
2025-12-01 13:12:10,857 [INFO] Skipping bill 2023982 - already processed (16/2605)
2025-12-01 13:12:10,857 [INFO] Skipping bill 2019732 - already processed (17/2605)
2025-12-01 13:12:10,857 [INFO] Skipping bill 1969654 - already processed (18/2605)
2025-12-01 13:12:10,857 [INFO] Skipping bill 1956622 - already processed (19/2605)
2025-12-01 13:12:10,857 [INFO] Skipping bill 1957166 -
already processed (20/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1869518 - already processed (21/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1813560 - already processed (22/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1836190 - already processed (23/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1851112 - already processed (24/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1745943 - already processed (25/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1737840 - already processed (26/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1814309 - already processed (27/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1851143 - already processed (28/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1984991 - already processed (29/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1912439 - already processed (30/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1912476 - already processed (31/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1940708 - already processed (32/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1935103 - already processed (33/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1685926 - already processed (34/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1657717 - already processed (35/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1683096 - already processed (36/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1828964 - already processed (37/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1830782 - already processed (38/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1829010 - already processed (39/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1810349 - already processed (40/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1810356 - already processed (41/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1804209 - already processed (42/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1830673 - already processed (43/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1923768 - already 
processed (44/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1935042 - already processed (45/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1948089 - already processed (46/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1917064 - already processed (47/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1964274 - already processed (48/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1949161 - already processed (49/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1938396 - already processed (50/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1955446 - already processed (51/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1946736 - already processed (52/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 2037727 - already processed (53/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1730253 - already processed (54/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1721706 - already processed (55/2605) 2025-12-01 13:12:10,857 [INFO] Skipping bill 1975090 - already processed (56/2605) 2025-12-01 13:12:10,858 [INFO] Skipping bill 1946146 - already processed (57/2605) 2025-12-01 13:12:10,858 [INFO] Skipping bill 2018186 - already processed (58/2605) 2025-12-01 13:12:10,858 [INFO] Skipping bill 2011735 - already processed (59/2605) 2025-12-01 13:12:10,858 [INFO] Skipping bill 1897622 - already processed (60/2605) 2025-12-01 13:12:10,858 [INFO] Skipping bill 1973543 - already processed (61/2605) 2025-12-01 13:12:10,858 [INFO] Skipping bill 2009462 - already processed (62/2605) 2025-12-01 13:12:10,858 [INFO] Skipping bill 2011658 - already processed (63/2605) 2025-12-01 13:12:10,858 [INFO] Skipping bill 1944017 - already processed (64/2605) 2025-12-01 13:12:10,858 [INFO] Skipping bill 1892641 - already processed (65/2605) 2025-12-01 13:12:10,858 [INFO] Skipping bill 2010078 - already processed (66/2605) 2025-12-01 13:12:10,858 [INFO] Skipping bill 1915632 - already processed (67/2605) 2025-12-01 13:12:10,858 [INFO] Skipping bill 1996393 - already 
processed (68/2605) 2025-12-01 13:12:10,858 [INFO] Processing 69/2605: Bill ID 1972479 2025-12-01 13:12:13,327 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:12:13,331 [ERROR] Failed to generate report for bill 1972479: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 512372 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... 
**kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... **kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return 
self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 512372 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:12:14,349 [INFO] Skipping bill 1848589 - already processed (70/2605) 2025-12-01 13:12:14,350 [INFO] Skipping bill 1796695 - already processed (71/2605) 2025-12-01 13:12:14,350 [INFO] Skipping bill 1834299 - already processed (72/2605) 2025-12-01 13:12:14,350 [INFO] Skipping bill 1840453 - already processed (73/2605) 2025-12-01 13:12:14,350 [INFO] Skipping bill 1847401 - already processed (74/2605) 2025-12-01 13:12:14,350 [INFO] Skipping bill 1849339 - already processed (75/2605) 2025-12-01 13:12:14,350 [INFO] Skipping bill 1845122 - already processed (76/2605) 2025-12-01 13:12:14,350 [INFO] Skipping bill 1796692 - already processed (77/2605) 2025-12-01 13:12:14,350 [INFO] Skipping bill 1846289 - already processed (78/2605) 2025-12-01 13:12:14,350 [INFO] Skipping bill 1813231 - already processed (79/2605) 2025-12-01 13:12:14,351 [INFO] Skipping bill 1848433 - already processed (80/2605) 2025-12-01 13:12:14,351 [INFO] Skipping bill 1796691 - already processed 
(81/2605) 2025-12-01 13:12:14,354 [INFO] Skipping bill 1848536 - already processed (82/2605) 2025-12-01 13:12:14,354 [INFO] Skipping bill 1819737 - already processed (83/2605) 2025-12-01 13:12:14,354 [INFO] Skipping bill 1829037 - already processed (84/2605) 2025-12-01 13:12:14,354 [INFO] Skipping bill 1712200 - already processed (85/2605) 2025-12-01 13:12:14,354 [INFO] Skipping bill 1848424 - already processed (86/2605) 2025-12-01 13:12:14,354 [INFO] Skipping bill 1814918 - already processed (87/2605) 2025-12-01 13:12:14,355 [INFO] Skipping bill 1686429 - already processed (88/2605) 2025-12-01 13:12:14,355 [INFO] Skipping bill 1848359 - already processed (89/2605) 2025-12-01 13:12:14,355 [INFO] Skipping bill 1697069 - already processed (90/2605) 2025-12-01 13:12:14,355 [INFO] Skipping bill 1848453 - already processed (91/2605) 2025-12-01 13:12:14,355 [INFO] Skipping bill 1849513 - already processed (92/2605) 2025-12-01 13:12:14,355 [INFO] Skipping bill 1848521 - already processed (93/2605) 2025-12-01 13:12:14,355 [INFO] Skipping bill 1848425 - already processed (94/2605) 2025-12-01 13:12:14,355 [INFO] Skipping bill 1702816 - already processed (95/2605) 2025-12-01 13:12:14,355 [INFO] Skipping bill 1849367 - already processed (96/2605) 2025-12-01 13:12:14,355 [INFO] Skipping bill 1849520 - already processed (97/2605) 2025-12-01 13:12:14,355 [INFO] Skipping bill 1848530 - already processed (98/2605) 2025-12-01 13:12:14,355 [INFO] Skipping bill 1712027 - already processed (99/2605) 2025-12-01 13:12:14,355 [INFO] Skipping bill 1849659 - already processed (100/2605) 2025-12-01 13:12:14,355 [INFO] Skipping bill 1848478 - already processed (101/2605) 2025-12-01 13:12:14,355 [INFO] Skipping bill 1848387 - already processed (102/2605) 2025-12-01 13:12:14,355 [INFO] Skipping bill 1845137 - already processed (103/2605) 2025-12-01 13:12:14,355 [INFO] Skipping bill 1812205 - already processed (104/2605) 2025-12-01 13:12:14,355 [INFO] Skipping bill 1798416 - already processed 
(105/2605) 2025-12-01 13:12:14,355 [INFO] Skipping bill 1847351 - already processed (106/2605) 2025-12-01 13:12:14,356 [INFO] Skipping bill 1693943 - already processed (107/2605) 2025-12-01 13:12:14,356 [INFO] Skipping bill 1686454 - already processed (108/2605) 2025-12-01 13:12:14,356 [INFO] Skipping bill 1847404 - already processed (109/2605) 2025-12-01 13:12:14,356 [INFO] Skipping bill 1683775 - already processed (110/2605) 2025-12-01 13:12:14,356 [INFO] Skipping bill 1835452 - already processed (111/2605) 2025-12-01 13:12:14,356 [INFO] Skipping bill 1709727 - already processed (112/2605) 2025-12-01 13:12:14,356 [INFO] Skipping bill 1849724 - already processed (113/2605) 2025-12-01 13:12:14,356 [INFO] Skipping bill 1761500 - already processed (114/2605) 2025-12-01 13:12:14,356 [INFO] Skipping bill 1697048 - already processed (115/2605) 2025-12-01 13:12:14,356 [INFO] Skipping bill 1860070 - already processed (116/2605) 2025-12-01 13:12:14,356 [INFO] Skipping bill 1771300 - already processed (117/2605) 2025-12-01 13:12:14,356 [INFO] Skipping bill 1709708 - already processed (118/2605) 2025-12-01 13:12:14,356 [INFO] Skipping bill 1848529 - already processed (119/2605) 2025-12-01 13:12:14,356 [INFO] Skipping bill 1845179 - already processed (120/2605) 2025-12-01 13:12:14,356 [INFO] Skipping bill 1849404 - already processed (121/2605) 2025-12-01 13:12:14,356 [INFO] Skipping bill 1714444 - already processed (122/2605) 2025-12-01 13:12:14,356 [INFO] Skipping bill 1824468 - already processed (123/2605) 2025-12-01 13:12:14,356 [INFO] Skipping bill 1882346 - already processed (124/2605) 2025-12-01 13:12:14,356 [INFO] Skipping bill 1885654 - already processed (125/2605) 2025-12-01 13:12:14,356 [INFO] Skipping bill 1849359 - already processed (126/2605) 2025-12-01 13:12:14,357 [INFO] Skipping bill 1840414 - already processed (127/2605) 2025-12-01 13:12:14,357 [INFO] Skipping bill 1846229 - already processed (128/2605) 2025-12-01 13:12:14,357 [INFO] Skipping bill 1707510 - 
already processed (129/2605) 2025-12-01 13:12:14,357 [INFO] Skipping bill 1845188 - already processed (130/2605) 2025-12-01 13:12:14,357 [INFO] Skipping bill 1848524 - already processed (131/2605) 2025-12-01 13:12:14,357 [INFO] Skipping bill 1847496 - already processed (132/2605) 2025-12-01 13:12:14,357 [INFO] Skipping bill 1883008 - already processed (133/2605) 2025-12-01 13:12:14,357 [INFO] Skipping bill 1649620 - already processed (134/2605) 2025-12-01 13:12:14,357 [INFO] Skipping bill 1667841 - already processed (135/2605) 2025-12-01 13:12:14,357 [INFO] Skipping bill 1848476 - already processed (136/2605) 2025-12-01 13:12:14,357 [INFO] Skipping bill 1649670 - already processed (137/2605) 2025-12-01 13:12:14,357 [INFO] Skipping bill 1667891 - already processed (138/2605) 2025-12-01 13:12:14,357 [INFO] Skipping bill 1649612 - already processed (139/2605) 2025-12-01 13:12:14,357 [INFO] Skipping bill 1649615 - already processed (140/2605) 2025-12-01 13:12:14,357 [INFO] Skipping bill 1667833 - already processed (141/2605) 2025-12-01 13:12:14,357 [INFO] Skipping bill 1667836 - already processed (142/2605) 2025-12-01 13:12:14,357 [INFO] Skipping bill 1649618 - already processed (143/2605) 2025-12-01 13:12:14,357 [INFO] Skipping bill 1667839 - already processed (144/2605) 2025-12-01 13:12:14,357 [INFO] Skipping bill 1649630 - already processed (145/2605) 2025-12-01 13:12:14,357 [INFO] Skipping bill 1649619 - already processed (146/2605) 2025-12-01 13:12:14,358 [INFO] Skipping bill 1667851 - already processed (147/2605) 2025-12-01 13:12:14,358 [INFO] Skipping bill 1667840 - already processed (148/2605) 2025-12-01 13:12:14,358 [INFO] Processing 149/2605: Bill ID 1865211 2025-12-01 13:12:15,196 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:12:15,198 [ERROR] Failed to generate report for bill 1865211: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. 
However, your messages resulted in 241283 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 241283 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:12:16,206 [INFO] Skipping bill 1667837 - already processed (150/2605) 2025-12-01 13:12:16,207 [INFO] Skipping bill 1667892 - already processed (151/2605) 2025-12-01 13:12:16,207 [INFO] Skipping bill 1649616 - already processed (152/2605) 2025-12-01 13:12:16,208 [INFO] Skipping bill 1649671 - already processed (153/2605) 2025-12-01 13:12:16,208 [INFO] Processing 154/2605: Bill ID 1726105 2025-12-01 13:12:17,274 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:12:17,276 [ERROR] Failed to generate report for bill 1726105: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 343953 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 343953 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:12:18,288 [INFO] Skipping bill 1978757 - already processed (155/2605) 2025-12-01 13:12:18,290 [INFO] Skipping bill 1980543 - already processed (156/2605) 2025-12-01 13:12:18,290 [INFO] Skipping bill 1893423 - already processed (157/2605) 2025-12-01 13:12:18,290 [INFO] Skipping bill 1964699 - already processed (158/2605) 2025-12-01 13:12:18,290 [INFO] Skipping bill 1978599 - already processed (159/2605) 2025-12-01 13:12:18,290 [INFO] Skipping bill 1980563 - already processed (160/2605) 2025-12-01 13:12:18,290 [INFO] Skipping bill 1976585 - already processed (161/2605) 2025-12-01 13:12:18,291 [INFO] Skipping bill 1904800 - already processed (162/2605) 2025-12-01 13:12:18,291 [INFO] Skipping bill 1974530 - already processed (163/2605) 2025-12-01 13:12:18,291 [INFO] Skipping bill 1964676 - already processed (164/2605) 2025-12-01 13:12:18,291 [INFO] Skipping bill 1955758 - already processed (165/2605) 2025-12-01 13:12:18,291 [INFO] Skipping bill 1941749 - already processed (166/2605) 2025-12-01 13:12:18,291 [INFO] Skipping bill 1976440 - already 
processed (167/2605) 2025-12-01 13:12:18,291 [INFO] Skipping bill 1978812 - already processed (168/2605) 2025-12-01 13:12:18,291 [INFO] Skipping bill 1978731 - already processed (169/2605) 2025-12-01 13:12:18,292 [INFO] Skipping bill 1949687 - already processed (170/2605) 2025-12-01 13:12:18,292 [INFO] Skipping bill 1980302 - already processed (171/2605) 2025-12-01 13:12:18,292 [INFO] Skipping bill 2032041 - already processed (172/2605) 2025-12-01 13:12:18,292 [INFO] Skipping bill 1978672 - already processed (173/2605) 2025-12-01 13:12:18,292 [INFO] Skipping bill 1955756 - already processed (174/2605) 2025-12-01 13:12:18,292 [INFO] Skipping bill 1970455 - already processed (175/2605) 2025-12-01 13:12:18,292 [INFO] Skipping bill 1978694 - already processed (176/2605) 2025-12-01 13:12:18,292 [INFO] Skipping bill 1976550 - already processed (177/2605) 2025-12-01 13:12:18,292 [INFO] Skipping bill 1908207 - already processed (178/2605) 2025-12-01 13:12:18,292 [INFO] Skipping bill 1971712 - already processed (179/2605) 2025-12-01 13:12:18,293 [INFO] Skipping bill 1919273 - already processed (180/2605) 2025-12-01 13:12:18,293 [INFO] Skipping bill 1893452 - already processed (181/2605) 2025-12-01 13:12:18,293 [INFO] Skipping bill 1971760 - already processed (182/2605) 2025-12-01 13:12:18,293 [INFO] Skipping bill 1978553 - already processed (183/2605) 2025-12-01 13:12:18,293 [INFO] Skipping bill 1980501 - already processed (184/2605) 2025-12-01 13:12:18,293 [INFO] Skipping bill 1980139 - already processed (185/2605) 2025-12-01 13:12:18,293 [INFO] Skipping bill 1908210 - already processed (186/2605) 2025-12-01 13:12:18,293 [INFO] Skipping bill 1980228 - already processed (187/2605) 2025-12-01 13:12:18,293 [INFO] Skipping bill 1947445 - already processed (188/2605) 2025-12-01 13:12:18,293 [INFO] Skipping bill 1971753 - already processed (189/2605) 2025-12-01 13:12:18,293 [INFO] Skipping bill 1943407 - already processed (190/2605) 2025-12-01 13:12:18,293 [INFO] Skipping bill 
1896630 - already processed (191/2605) 2025-12-01 13:12:18,293 [INFO] Skipping bill 1953097 - already processed (192/2605) 2025-12-01 13:12:18,293 [INFO] Skipping bill 1961095 - already processed (193/2605) 2025-12-01 13:12:18,293 [INFO] Skipping bill 1953091 - already processed (194/2605) 2025-12-01 13:12:18,293 [INFO] Skipping bill 1953081 - already processed (195/2605) 2025-12-01 13:12:18,293 [INFO] Skipping bill 1978871 - already processed (196/2605) 2025-12-01 13:12:18,294 [INFO] Skipping bill 1990396 - already processed (197/2605) 2025-12-01 13:12:18,294 [INFO] Processing 198/2605: Bill ID 1980067 2025-12-01 13:12:19,262 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:12:19,265 [ERROR] Failed to generate report for bill 1980067: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 270166 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ 
[self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... **kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File 
"/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 270166 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:12:20,279 [INFO] Skipping bill 1970450 - already processed (199/2605)
2025-12-01 13:12:20,280 [INFO] Skipping bill 1904793 - already processed (200/2605)
2025-12-01 13:12:20,280 [INFO] Skipping bill 1964689 - already processed (201/2605)
2025-12-01 13:12:20,280 [INFO] Skipping bill 1933300 - already processed (202/2605)
2025-12-01 13:12:20,280 [INFO] Skipping bill 2036404 - already processed (203/2605)
2025-12-01 13:12:20,280 [INFO] Skipping bill 1949685 - already processed (204/2605)
2025-12-01 13:12:20,281 [INFO] Skipping bill 1976474 - already processed (205/2605)
2025-12-01 13:12:20,281 [INFO] Skipping bill 1898373 - already processed (206/2605)
2025-12-01 13:12:20,281 [INFO] Skipping bill 2042443 - already processed (207/2605)
2025-12-01 13:12:20,281 [INFO] Skipping bill 2005483 - already processed (208/2605)
2025-12-01 13:12:20,281 [INFO] Skipping bill 1968261 - already processed (209/2605)
2025-12-01 13:12:20,281 [INFO] Skipping bill 1980234 - already processed (210/2605)
2025-12-01 13:12:20,281 [INFO] Skipping bill 1978559 - already processed (211/2605)
2025-12-01 13:12:20,282 [INFO] Skipping bill 1974545 - already processed (212/2605)
2025-12-01 13:12:20,282 [INFO] Skipping bill 1908089 - already processed (213/2605)
2025-12-01 13:12:20,282 [INFO] Skipping bill 1939198 - already processed (214/2605)
2025-12-01 13:12:20,282 [INFO] Skipping bill 1939199 - already processed (215/2605)
2025-12-01 13:12:20,282 [INFO] Skipping bill 1908087 - already processed (216/2605)
2025-12-01 13:12:20,282 [INFO] Skipping bill 1908088 - already processed (217/2605)
2025-12-01 13:12:20,282 [INFO] Skipping bill 1939200 - already processed (218/2605)
2025-12-01 13:12:20,282 [INFO] Skipping bill 1939201 - already processed (219/2605)
2025-12-01 13:12:20,282 [INFO] Skipping bill 1908090 - already processed (220/2605)
2025-12-01 13:12:20,282 [INFO] Skipping bill 1939197 - already processed (221/2605)
2025-12-01 13:12:20,283 [INFO] Skipping bill 1908086 - already processed (222/2605)
2025-12-01 13:12:20,283 [INFO] Skipping bill 1651326 - already processed (223/2605)
2025-12-01 13:12:20,283 [INFO] Skipping bill 1747628 - already processed (224/2605)
2025-12-01 13:12:20,283 [INFO] Skipping bill 1871619 - already processed (225/2605)
2025-12-01 13:12:20,283 [INFO] Skipping bill 1874953 - already processed (226/2605)
2025-12-01 13:12:20,283 [INFO] Skipping bill 1831016 - already processed (227/2605)
2025-12-01 13:12:20,283 [INFO] Skipping bill 1846007 - already processed (228/2605)
2025-12-01 13:12:20,283 [INFO] Skipping bill 2026977 - already processed (229/2605)
2025-12-01 13:12:20,283 [INFO] Skipping bill 2042502 - already processed (230/2605)
2025-12-01 13:12:20,283 [INFO] Skipping bill 2042537 - already processed (231/2605)
2025-12-01 13:12:20,283 [INFO] Skipping bill 2042540 - already processed (232/2605)
2025-12-01 13:12:20,283 [INFO] Skipping bill 1907590 - already processed (233/2605)
2025-12-01 13:12:20,283 [INFO] Skipping bill 1907863 - already processed (234/2605)
2025-12-01 13:12:20,283 [INFO] Skipping bill 2022323 - already processed (235/2605)
2025-12-01 13:12:20,284 [INFO] Skipping bill 1947638 - already processed (236/2605)
2025-12-01 13:12:20,284 [INFO] Skipping bill 1965815 - already processed (237/2605)
2025-12-01 13:12:20,284 [INFO] Skipping bill 2042471 - already processed (238/2605)
2025-12-01 13:12:20,284 [INFO] Skipping bill 2017117 - already processed (239/2605)
2025-12-01 13:12:20,284 [INFO] Skipping bill 1973900 - already processed (240/2605)
2025-12-01 13:12:20,284 [INFO] Skipping bill 2020829 - already processed (241/2605)
2025-12-01 13:12:20,284 [INFO] Skipping bill 1718823 - already processed (242/2605)
2025-12-01 13:12:20,284 [INFO] Skipping bill 1709526 - already processed (243/2605)
2025-12-01 13:12:20,284 [INFO] Skipping bill 1709356 - already processed
(244/2605)
2025-12-01 13:12:20,284 [INFO] Skipping bill 1839016 - already processed (245/2605)
2025-12-01 13:12:20,284 [INFO] Skipping bill 1859941 - already processed (246/2605)
2025-12-01 13:12:20,285 [INFO] Skipping bill 1839023 - already processed (247/2605)
2025-12-01 13:12:20,285 [INFO] Skipping bill 1860727 - already processed (248/2605)
2025-12-01 13:12:20,285 [INFO] Processing 249/2605: Bill ID 1876979
2025-12-01 13:12:20,901 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:12:20,904 [ERROR] Failed to generate report for bill 1876979: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 150875 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 150875 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:12:21,914 [INFO] Skipping bill 1905069 - already processed (250/2605)
2025-12-01 13:12:21,915 [INFO] Skipping bill 1992824 - already processed (251/2605)
2025-12-01 13:12:21,915 [INFO] Skipping bill 1957876 - already processed (252/2605)
2025-12-01 13:12:21,915 [INFO] Skipping bill 1965500 - already processed (253/2605)
2025-12-01 13:12:21,915 [INFO] Skipping bill 1990151 - already processed (254/2605)
2025-12-01 13:12:21,915 [INFO] Skipping bill 1949174 - already processed (255/2605)
2025-12-01 13:12:21,916 [INFO] Skipping bill 1905038 - already processed (256/2605)
2025-12-01 13:12:21,916 [INFO] Skipping bill 1905159 - already processed (257/2605)
2025-12-01 13:12:21,916 [INFO] Skipping bill 1907650 - already processed (258/2605)
2025-12-01 13:12:21,916 [INFO] Skipping bill 1909616 - already processed (259/2605)
2025-12-01 13:12:21,916 [INFO] Skipping bill 1909665 - already processed (260/2605)
2025-12-01 13:12:21,916 [INFO] Skipping bill 1928585 - already
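Every failure in this log is the same `context_length_exceeded` 400 from the Chat Completions API: the serialized bill JSON alone can exceed the model's 128,000-token window before `chain.invoke` is ever reached. A minimal sketch of a pre-flight guard follows; this is not the project's actual code, and the 4-characters-per-token ratio, the token budget, and the field-trimming strategy are all assumptions (a real implementation might count tokens with a tokenizer such as tiktoken instead).

```python
import json

# Assumed budget and heuristic, not exact values: leave headroom below the
# 128k-token limit reported in the errors above, and approximate tokens as
# characters / 4 rather than running a real tokenizer.
MAX_PROMPT_TOKENS = 100_000
CHARS_PER_TOKEN = 4

def truncate_bill_json(bill: dict, max_tokens: int = MAX_PROMPT_TOKENS) -> str:
    """Serialize a bill; if the size estimate exceeds the budget, trim the
    longest string fields (typically the full bill text) first so that the
    structured metadata survives intact."""
    budget_chars = max_tokens * CHARS_PER_TOKEN
    text = json.dumps(bill)
    if len(text) <= budget_chars:
        return text
    trimmed = dict(bill)
    # Visit fields largest-first and cut string values roughly down to size.
    for key, value in sorted(trimmed.items(), key=lambda kv: -len(str(kv[1]))):
        if isinstance(value, str) and len(json.dumps(trimmed)) > budget_chars:
            overshoot = len(json.dumps(trimmed)) - budget_chars
            trimmed[key] = value[: max(0, len(value) - overshoot)] + " [truncated]"
    return json.dumps(trimmed)
```

The guard would run inside a function like `create_detailed_report`, replacing the raw `bill_json` passed to the chain, so oversized bills degrade to a shortened prompt instead of a hard 400.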
processed (261/2605)
2025-12-01 13:12:21,916 [INFO] Skipping bill 1928759 - already processed (262/2605)
2025-12-01 13:12:21,917 [INFO] Skipping bill 1928904 - already processed (263/2605)
2025-12-01 13:12:21,917 [INFO] Skipping bill 1931737 - already processed (264/2605)
2025-12-01 13:12:21,917 [INFO] Skipping bill 1928076 - already processed (265/2605)
2025-12-01 13:12:21,917 [INFO] Skipping bill 1935956 - already processed (266/2605)
2025-12-01 13:12:21,917 [INFO] Skipping bill 1905222 - already processed (267/2605)
2025-12-01 13:12:21,917 [INFO] Skipping bill 1932777 - already processed (268/2605)
2025-12-01 13:12:21,917 [INFO] Skipping bill 1905141 - already processed (269/2605)
2025-12-01 13:12:21,918 [INFO] Processing 270/2605: Bill ID 2034928
2025-12-01 13:12:23,156 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:12:23,158 [ERROR] Failed to generate report for bill 2034928: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 412715 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  (remaining frames identical to the traceback for bill 1876979 above)
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 412715 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:12:23,218 [INFO] Saved 2605 reports to data/bill_reports.json
2025-12-01 13:12:23,218 [INFO] Progress: 270/2605 - Processed: 0, Skipped: 264, Errors: 6
2025-12-01 13:12:24,223 [INFO] Skipping bill 1820947 - already processed (271/2605)
2025-12-01 13:12:24,224 [INFO] Skipping bill 2038143 - already processed (272/2605)
2025-12-01 13:12:24,224 [INFO] Skipping bill 1946119 - already processed (273/2605)
2025-12-01 13:12:24,224 [INFO] Skipping bill 2038726 - already processed (274/2605)
2025-12-01 13:12:24,224 [INFO] Skipping bill 2015494 - already processed (275/2605)
2025-12-01 13:12:24,224 [INFO] Skipping bill 1754732 - already processed (276/2605)
2025-12-01 13:12:24,224 [INFO] Skipping bill 1716623 - already processed (277/2605)
2025-12-01 13:12:24,224 [INFO] Skipping bill 1723029 - already processed (278/2605)
2025-12-01 13:12:24,225 [INFO] Skipping bill 1749221 - already processed (279/2605)
2025-12-01 13:12:24,225 [INFO] Skipping bill 1756757 - already processed (280/2605)
2025-12-01 13:12:24,225 [INFO] Skipping bill 1722774 - already
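The `Progress: 270/2605 - Processed: 0, Skipped: 264, Errors: 6` and periodic `Saved 2605 reports` records reflect a resume-safe loop: existing reports are loaded from a JSON file keyed by bill ID, already-processed bills are skipped, and the file is rewritten periodically so an interruption loses little work. A minimal stdlib sketch of that pattern follows; the function signature, the `id` field, and the save interval are illustrative assumptions, not the actual `generate_reports.py` implementation.

```python
import json
from pathlib import Path

def run_with_resume(bills, report_path, generate, save_every=50):
    """Generate reports for bills not already in report_path, saving
    checkpoints every save_every bills. Returns (processed, skipped, errors)."""
    path = Path(report_path)
    # Resume from the previous run's output if it exists.
    reports = json.loads(path.read_text()) if path.exists() else {}
    processed = skipped = errors = 0
    for i, bill in enumerate(bills, start=1):
        bill_id = str(bill["id"])
        if bill_id in reports:
            skipped += 1          # already done on a previous run
            continue
        try:
            reports[bill_id] = generate(bill)
            processed += 1
        except Exception:
            errors += 1           # leave the bill absent so a retry picks it up
        if i % save_every == 0:
            path.write_text(json.dumps(reports))  # periodic checkpoint
    path.write_text(json.dumps(reports))          # final save
    return processed, skipped, errors
```

One design consequence visible in the log: failed bills are never written to the file, so each rerun retries them, and an oversized bill fails again every time until the prompt-size problem itself is fixed.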
processed (281/2605)
2025-12-01 13:12:24,225 [INFO] Processing 282/2605: Bill ID 1746175
2025-12-01 13:12:25,510 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:12:25,512 [ERROR] Failed to generate report for bill 1746175: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 482085 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  (remaining frames identical to the traceback for bill 1876979 above)
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 482085 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:12:26,520 [INFO] Skipping bill 1749049 - already processed (283/2605)
2025-12-01 13:12:26,521 [INFO] Skipping bill 1799517 - already processed (284/2605)
2025-12-01 13:12:26,521 [INFO] Skipping bill 1799058 - already processed (285/2605)
2025-12-01 13:12:26,521 [INFO] Skipping bill 1792427 - already processed (286/2605)
2025-12-01 13:12:26,521 [INFO] Skipping bill 1791537 - already processed (287/2605)
2025-12-01 13:12:26,521 [INFO] Skipping bill 1793699 - already processed (288/2605)
2025-12-01 13:12:26,522 [INFO] Skipping bill 1784035 - already processed (289/2605)
2025-12-01 13:12:26,522 [INFO] Skipping bill 1789608 - already processed (290/2605)
2025-12-01 13:12:26,522 [INFO] Skipping bill 1797287 - already processed (291/2605)
2025-12-01 13:12:26,522 [INFO] Skipping bill 1799146 - already processed (292/2605)
2025-12-01 13:12:26,522 [INFO] Skipping bill 1799256 - already processed (293/2605)
2025-12-01 13:12:26,522 [INFO] Skipping bill 1799530 - already
processed (294/2605)
2025-12-01 13:12:26,522 [INFO] Skipping bill 1799073 - already processed (295/2605)
2025-12-01 13:12:26,522 [INFO] Skipping bill 1798525 - already processed (296/2605)
2025-12-01 13:12:26,522 [INFO] Skipping bill 1812862 - already processed (297/2605)
2025-12-01 13:12:26,522 [INFO] Skipping bill 1799556 - already processed (298/2605)
2025-12-01 13:12:26,522 [INFO] Skipping bill 1793796 - already processed (299/2605)
2025-12-01 13:12:26,523 [INFO] Skipping bill 1840899 - already processed (300/2605)
2025-12-01 13:12:26,523 [INFO] Skipping bill 1849855 - already processed (301/2605)
2025-12-01 13:12:26,523 [INFO] Skipping bill 1796581 - already processed (302/2605)
2025-12-01 13:12:26,523 [INFO] Skipping bill 1785974 - already processed (303/2605)
2025-12-01 13:12:26,523 [INFO] Skipping bill 1799599 - already processed (304/2605)
2025-12-01 13:12:26,523 [INFO] Skipping bill 1799188 - already processed (305/2605)
2025-12-01 13:12:26,523 [INFO] Skipping bill 1834738 - already processed (306/2605)
2025-12-01 13:12:26,523 [INFO] Skipping bill 1799528 - already processed (307/2605)
2025-12-01 13:12:26,523 [INFO] Processing 308/2605: Bill ID 1829539
2025-12-01 13:12:27,864 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:12:27,867 [ERROR] Failed to generate report for bill 1829539: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 487138 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  (remaining frames identical to the traceback for bill 1876979 above)
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 487138 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:12:28,878 [INFO] Skipping bill 1953506 - already processed (309/2605)
2025-12-01 13:12:28,879 [INFO] Skipping bill 1969171 - already processed (310/2605)
2025-12-01 13:12:28,879 [INFO] Skipping bill 1963529 - already processed (311/2605)
2025-12-01 13:12:28,879 [INFO] Skipping bill 1973172 - already processed (312/2605)
2025-12-01 13:12:28,880 [INFO] Skipping bill 1977164 - already processed (313/2605)
2025-12-01 13:12:28,880 [INFO] Skipping bill 1984764 - already processed (314/2605)
2025-12-01 13:12:28,880 [INFO] Skipping bill 1988421 - already processed (315/2605)
2025-12-01 13:12:28,880 [INFO] Skipping bill 1963407 - already processed (316/2605)
2025-12-01 13:12:28,880 [INFO] Skipping bill 1977647 - already processed (317/2605)
2025-12-01 13:12:28,881 [INFO] Skipping bill 1985537 - already processed (318/2605)
2025-12-01 13:12:28,881 [INFO] Skipping bill 1988809 - already processed (319/2605)
2025-12-01 13:12:28,881 [INFO] Skipping bill 1989241 - already processed (320/2605)
2025-12-01 13:12:28,881 [INFO] Skipping bill 1980688 - already
processed (321/2605)
2025-12-01 13:12:28,881 [INFO] Skipping bill 1985490 - already processed (322/2605)
2025-12-01 13:12:28,881 [INFO] Skipping bill 1987236 - already processed (323/2605)
2025-12-01 13:12:28,881 [INFO] Skipping bill 2009168 - already processed (324/2605)
2025-12-01 13:12:28,881 [INFO] Skipping bill 1985684 - already processed (325/2605)
2025-12-01 13:12:28,881 [INFO] Skipping bill 1982957 - already processed (326/2605)
2025-12-01 13:12:28,881 [INFO] Skipping bill 2009660 - already processed (327/2605)
2025-12-01 13:12:28,881 [INFO] Skipping bill 1987290 - already processed (328/2605)
2025-12-01 13:12:28,881 [INFO] Skipping bill 2021527 - already processed (329/2605)
2025-12-01 13:12:28,881 [INFO] Skipping bill 1984006 - already processed (330/2605)
2025-12-01 13:12:28,882 [INFO] Skipping bill 1944378 - already processed (331/2605)
2025-12-01 13:12:28,882 [INFO] Processing 332/2605: Bill ID 2016312
2025-12-01 13:12:30,286 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:12:30,288 [ERROR] Failed to generate report for bill 2016312: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 508553 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  (remaining frames identical to the traceback for bill 1876979 above)
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 508553 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:12:31,298 [INFO] Skipping bill 1975511 - already processed (333/2605)
2025-12-01 13:12:31,298 [INFO] Skipping bill 1807866 - already processed (334/2605)
2025-12-01 13:12:31,298 [INFO] Skipping bill 1825040 - already processed (335/2605)
2025-12-01 13:12:31,299 [INFO] Skipping bill 1824663 - already processed (336/2605)
2025-12-01 13:12:31,299 [INFO] Skipping bill 1827759 - already processed (337/2605)
2025-12-01 13:12:31,299 [INFO] Skipping bill 1807849 - already processed (338/2605)
2025-12-01 13:12:31,299 [INFO] Skipping bill 1852469 - already processed (339/2605)
2025-12-01 13:12:31,299 [INFO] Skipping bill 1724818 - already processed (340/2605)
2025-12-01 13:12:31,299 [INFO] Skipping bill 1827801 - already processed (341/2605)
2025-12-01 13:12:31,299 [INFO] Skipping bill 1842042 - already processed (342/2605)
2025-12-01 13:12:31,300 [INFO] Skipping bill 1800509 - already processed (343/2605)
2025-12-01 13:12:31,300 [INFO] Skipping bill 1829048 - already processed (344/2605)
2025-12-01 13:12:31,300 [INFO] Skipping bill 1691393 - already
processed (345/2605) 2025-12-01 13:12:31,300 [INFO] Skipping bill 1684843 - already processed (346/2605) 2025-12-01 13:12:31,300 [INFO] Skipping bill 1945161 - already processed (347/2605) 2025-12-01 13:12:31,300 [INFO] Skipping bill 1947679 - already processed (348/2605) 2025-12-01 13:12:31,300 [INFO] Skipping bill 1943273 - already processed (349/2605) 2025-12-01 13:12:31,301 [INFO] Skipping bill 1919150 - already processed (350/2605) 2025-12-01 13:12:31,301 [INFO] Skipping bill 2012228 - already processed (351/2605) 2025-12-01 13:12:31,301 [INFO] Skipping bill 1990355 - already processed (352/2605) 2025-12-01 13:12:31,301 [INFO] Skipping bill 1960995 - already processed (353/2605) 2025-12-01 13:12:31,301 [INFO] Skipping bill 1968119 - already processed (354/2605) 2025-12-01 13:12:31,301 [INFO] Skipping bill 2006978 - already processed (355/2605) 2025-12-01 13:12:31,301 [INFO] Skipping bill 1974144 - already processed (356/2605) 2025-12-01 13:12:31,302 [INFO] Skipping bill 1974243 - already processed (357/2605) 2025-12-01 13:12:31,302 [INFO] Skipping bill 1974425 - already processed (358/2605) 2025-12-01 13:12:31,302 [INFO] Skipping bill 2016144 - already processed (359/2605) 2025-12-01 13:12:31,302 [INFO] Skipping bill 1974177 - already processed (360/2605) 2025-12-01 13:12:31,302 [INFO] Skipping bill 1974222 - already processed (361/2605) 2025-12-01 13:12:31,302 [INFO] Skipping bill 1974239 - already processed (362/2605) 2025-12-01 13:12:31,302 [INFO] Skipping bill 1974292 - already processed (363/2605) 2025-12-01 13:12:31,303 [INFO] Skipping bill 1974356 - already processed (364/2605) 2025-12-01 13:12:31,303 [INFO] Skipping bill 1974381 - already processed (365/2605) 2025-12-01 13:12:31,303 [INFO] Skipping bill 1974418 - already processed (366/2605) 2025-12-01 13:12:31,303 [INFO] Skipping bill 1990318 - already processed (367/2605) 2025-12-01 13:12:31,303 [INFO] Skipping bill 1987837 - already processed (368/2605) 2025-12-01 13:12:31,303 [INFO] Skipping bill 
1974421 - already processed (369/2605) 2025-12-01 13:12:31,303 [INFO] Skipping bill 1982057 - already processed (370/2605) 2025-12-01 13:12:31,303 [INFO] Skipping bill 1968164 - already processed (371/2605) 2025-12-01 13:12:31,304 [INFO] Skipping bill 1979990 - already processed (372/2605) 2025-12-01 13:12:31,304 [INFO] Skipping bill 1961023 - already processed (373/2605) 2025-12-01 13:12:31,304 [INFO] Skipping bill 1970366 - already processed (374/2605) 2025-12-01 13:12:31,304 [INFO] Skipping bill 1976266 - already processed (375/2605) 2025-12-01 13:12:31,304 [INFO] Skipping bill 1735435 - already processed (376/2605) 2025-12-01 13:12:31,304 [INFO] Skipping bill 1735103 - already processed (377/2605) 2025-12-01 13:12:31,305 [INFO] Skipping bill 1735239 - already processed (378/2605) 2025-12-01 13:12:31,305 [INFO] Skipping bill 1676639 - already processed (379/2605) 2025-12-01 13:12:31,305 [INFO] Skipping bill 1822936 - already processed (380/2605) 2025-12-01 13:12:31,305 [INFO] Skipping bill 1824099 - already processed (381/2605) 2025-12-01 13:12:31,305 [INFO] Skipping bill 1823066 - already processed (382/2605) 2025-12-01 13:12:31,305 [INFO] Skipping bill 1821100 - already processed (383/2605) 2025-12-01 13:12:31,305 [INFO] Skipping bill 1821376 - already processed (384/2605) 2025-12-01 13:12:31,305 [INFO] Skipping bill 1861884 - already processed (385/2605) 2025-12-01 13:12:31,305 [INFO] Skipping bill 1862091 - already processed (386/2605) 2025-12-01 13:12:31,305 [INFO] Skipping bill 1824408 - already processed (387/2605) 2025-12-01 13:12:31,305 [INFO] Skipping bill 1823094 - already processed (388/2605) 2025-12-01 13:12:31,305 [INFO] Skipping bill 1859976 - already processed (389/2605) 2025-12-01 13:12:31,305 [INFO] Skipping bill 1860020 - already processed (390/2605) 2025-12-01 13:12:31,305 [INFO] Skipping bill 1822457 - already processed (391/2605) 2025-12-01 13:12:31,305 [INFO] Skipping bill 1823240 - already processed (392/2605) 2025-12-01 13:12:31,305 
[INFO] Skipping bill 1822425 - already processed (393/2605) 2025-12-01 13:12:31,305 [INFO] Skipping bill 1823305 - already processed (394/2605) 2025-12-01 13:12:31,305 [INFO] Skipping bill 1816605 - already processed (395/2605) 2025-12-01 13:12:31,305 [INFO] Skipping bill 1822519 - already processed (396/2605) 2025-12-01 13:12:31,305 [INFO] Skipping bill 1822760 - already processed (397/2605) 2025-12-01 13:12:31,305 [INFO] Skipping bill 1821542 - already processed (398/2605) 2025-12-01 13:12:31,306 [INFO] Skipping bill 1862395 - already processed (399/2605) 2025-12-01 13:12:31,306 [INFO] Skipping bill 1862180 - already processed (400/2605) 2025-12-01 13:12:31,306 [INFO] Skipping bill 1820992 - already processed (401/2605) 2025-12-01 13:12:31,306 [INFO] Skipping bill 1822908 - already processed (402/2605) 2025-12-01 13:12:31,306 [INFO] Skipping bill 1816124 - already processed (403/2605) 2025-12-01 13:12:31,306 [INFO] Skipping bill 1826161 - already processed (404/2605) 2025-12-01 13:12:31,306 [INFO] Skipping bill 1822451 - already processed (405/2605) 2025-12-01 13:12:31,306 [INFO] Skipping bill 1823328 - already processed (406/2605) 2025-12-01 13:12:31,306 [INFO] Skipping bill 1860844 - already processed (407/2605) 2025-12-01 13:12:31,306 [INFO] Skipping bill 1819671 - already processed (408/2605) 2025-12-01 13:12:31,306 [INFO] Skipping bill 1815658 - already processed (409/2605) 2025-12-01 13:12:31,306 [INFO] Skipping bill 1929168 - already processed (410/2605) 2025-12-01 13:12:31,306 [INFO] Skipping bill 1939103 - already processed (411/2605) 2025-12-01 13:12:31,306 [INFO] Skipping bill 1939150 - already processed (412/2605) 2025-12-01 13:12:31,306 [INFO] Skipping bill 1924410 - already processed (413/2605) 2025-12-01 13:12:31,306 [INFO] Skipping bill 1929804 - already processed (414/2605) 2025-12-01 13:12:31,306 [INFO] Skipping bill 1929561 - already processed (415/2605) 2025-12-01 13:12:31,306 [INFO] Skipping bill 1925992 - already processed (416/2605) 
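The repeated `context_length_exceeded` failures above all originate in `create_detailed_report`, where `chain.invoke({"bill_json": bill_json})` sends the full bill JSON regardless of size. A minimal sketch of a pre-flight guard follows; it is not from `generate_reports.py`, the `full_text` field name and the 4-characters-per-token ratio are assumptions (an exact count would need the model's tokenizer, e.g. tiktoken), and the budget numbers come from the 128000-token limit reported in the 400 errors:

```python
# Hedged sketch: trim an oversized bill before chain.invoke so the request
# stays under the model's context window. All names and constants here are
# illustrative assumptions, not taken from generate_reports.py.
import json

MAX_CONTEXT_TOKENS = 128_000   # model limit reported in the 400 errors above
RESERVED_TOKENS = 8_000        # assumed headroom for the prompt template + reply
CHARS_PER_TOKEN = 4            # coarse heuristic; use a real tokenizer for accuracy

def fit_bill_json(bill: dict, text_field: str = "full_text") -> str:
    """Serialize a bill dict, trimming its large text field (field name assumed)
    so the JSON stays under the estimated character budget."""
    budget = (MAX_CONTEXT_TOKENS - RESERVED_TOKENS) * CHARS_PER_TOKEN
    bill_json = json.dumps(bill)
    if len(bill_json) <= budget:
        return bill_json
    # Trim only the long text field so the structured metadata survives intact.
    overshoot = len(bill_json) - budget
    trimmed = dict(bill)
    text = str(trimmed.get(text_field, ""))
    trimmed[text_field] = text[: max(0, len(text) - overshoot)]
    return json.dumps(trimmed)
```

Under these assumptions the guard would replace the raw `json.dumps(bill)` currently fed to the chain; bills already under the budget pass through unchanged, so existing reports are unaffected.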
2025-12-01 13:12:31,306 [INFO] Skipping bill 1928926 - already processed (417/2605)
2025-12-01 13:12:31,306 [INFO] Skipping bill 1931961 - already processed (418/2605)
2025-12-01 13:12:31,306 [INFO] Skipping bill 1929636 - already processed (419/2605)
2025-12-01 13:12:31,306 [INFO] Skipping bill 1909994 - already processed (420/2605)
2025-12-01 13:12:31,306 [INFO] Skipping bill 1928408 - already processed (421/2605)
2025-12-01 13:12:31,306 [INFO] Skipping bill 1928598 - already processed (422/2605)
2025-12-01 13:12:31,306 [INFO] Skipping bill 1994243 - already processed (423/2605)
2025-12-01 13:12:31,306 [INFO] Skipping bill 1994303 - already processed (424/2605)
2025-12-01 13:12:31,306 [INFO] Skipping bill 1929659 - already processed (425/2605)
2025-12-01 13:12:31,306 [INFO] Skipping bill 1932766 - already processed (426/2605)
2025-12-01 13:12:31,306 [INFO] Skipping bill 1928570 - already processed (427/2605)
2025-12-01 13:12:31,306 [INFO] Skipping bill 1934608 - already processed (428/2605)
2025-12-01 13:12:31,306 [INFO] Skipping bill 1928364 - already processed (429/2605)
2025-12-01 13:12:31,306 [INFO] Skipping bill 1929760 - already processed (430/2605)
2025-12-01 13:12:31,306 [INFO] Skipping bill 1933272 - already processed (431/2605)
2025-12-01 13:12:31,306 [INFO] Skipping bill 1929496 - already processed (432/2605)
2025-12-01 13:12:31,307 [INFO] Skipping bill 1990347 - already processed (433/2605)
2025-12-01 13:12:31,307 [INFO] Skipping bill 1995251 - already processed (434/2605)
2025-12-01 13:12:31,307 [INFO] Skipping bill 1995449 - already processed (435/2605)
2025-12-01 13:12:31,307 [INFO] Skipping bill 1995259 - already processed (436/2605)
2025-12-01 13:12:31,307 [INFO] Skipping bill 1995271 - already processed (437/2605)
2025-12-01 13:12:31,307 [INFO] Skipping bill 1995747 - already processed (438/2605)
2025-12-01 13:12:31,307 [INFO] Skipping bill 1991557 - already processed (439/2605)
2025-12-01 13:12:31,307 [INFO] Skipping bill 1991563 - already processed (440/2605)
2025-12-01 13:12:31,307 [INFO] Skipping bill 1995783 - already processed (441/2605)
2025-12-01 13:12:31,307 [INFO] Skipping bill 1929457 - already processed (442/2605)
2025-12-01 13:12:31,307 [INFO] Skipping bill 1915997 - already processed (443/2605)
2025-12-01 13:12:31,307 [INFO] Skipping bill 1933178 - already processed (444/2605)
2025-12-01 13:12:31,307 [INFO] Skipping bill 1992758 - already processed (445/2605)
2025-12-01 13:12:31,307 [INFO] Skipping bill 1993026 - already processed (446/2605)
2025-12-01 13:12:31,307 [INFO] Skipping bill 1995569 - already processed (447/2605)
2025-12-01 13:12:31,307 [INFO] Skipping bill 1992805 - already processed (448/2605)
2025-12-01 13:12:31,307 [INFO] Skipping bill 1995900 - already processed (449/2605)
2025-12-01 13:12:31,307 [INFO] Skipping bill 1993019 - already processed (450/2605)
2025-12-01 13:12:31,307 [INFO] Skipping bill 1847870 - already processed (451/2605)
2025-12-01 13:12:31,307 [INFO] Skipping bill 1812600 - already processed (452/2605)
2025-12-01 13:12:31,307 [INFO] Skipping bill 1848008 - already processed (453/2605)
2025-12-01 13:12:31,307 [INFO] Skipping bill 1825516 - already processed (454/2605)
2025-12-01 13:12:31,307 [INFO] Processing 455/2605: Bill ID 1845026
2025-12-01 13:12:32,841 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:12:32,844 [ERROR] Failed to generate report for bill 1845026: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 153566 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:12:33,855 [INFO] Skipping bill 1962312 - already processed (456/2605)
2025-12-01 13:12:33,856 [INFO] Skipping bill 1954011 - already processed (457/2605)
2025-12-01 13:12:33,856 [INFO] Skipping bill 1991380 - already processed (458/2605)
2025-12-01 13:12:33,856 [INFO] Processing 459/2605: Bill ID 2011846
2025-12-01 13:12:34,314 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:12:34,316 [ERROR] Failed to generate report for bill 2011846: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 147671 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:12:35,325 [INFO] Skipping bill 1838778 - already processed (460/2605)
2025-12-01 13:12:35,326 [INFO] Skipping bill 1713666 - already processed (461/2605)
2025-12-01 13:12:35,327 [INFO] Skipping bill 1837146 - already processed (462/2605)
2025-12-01 13:12:35,327 [INFO] Skipping bill 1842401 - already processed (463/2605)
2025-12-01 13:12:35,327 [INFO] Skipping bill 1838992 - already processed (464/2605)
2025-12-01 13:12:35,327 [INFO] Skipping bill 1840748 - already processed (465/2605)
2025-12-01 13:12:35,327 [INFO] Skipping bill 1841780 - already processed (466/2605)
2025-12-01 13:12:35,327 [INFO] Skipping bill 1831504 - already processed (467/2605)
2025-12-01 13:12:35,328 [INFO] Skipping bill 1832905 - already processed (468/2605)
2025-12-01 13:12:35,328 [INFO] Skipping bill 1843072 - already processed (469/2605)
2025-12-01 13:12:35,328 [INFO] Skipping bill 1839869 - already processed (470/2605)
2025-12-01 13:12:35,328 [INFO] Skipping bill 1814012 - already processed (471/2605)
2025-12-01 13:12:35,328 [INFO] Skipping bill 1842520 - already
processed (472/2605)
2025-12-01 13:12:35,328 [INFO] Skipping bill 1835262 - already processed (473/2605)
2025-12-01 13:12:35,328 [INFO] Skipping bill 1843020 - already processed (474/2605)
2025-12-01 13:12:35,329 [INFO] Skipping bill 1878243 - already processed (475/2605)
2025-12-01 13:12:35,329 [INFO] Skipping bill 1893072 - already processed (476/2605)
2025-12-01 13:12:35,329 [INFO] Skipping bill 1713755 - already processed (477/2605)
2025-12-01 13:12:35,329 [INFO] Skipping bill 1842316 - already processed (478/2605)
2025-12-01 13:12:35,329 [INFO] Skipping bill 1838852 - already processed (479/2605)
2025-12-01 13:12:35,329 [INFO] Skipping bill 1838748 - already processed (480/2605)
2025-12-01 13:12:35,329 [INFO] Skipping bill 1635340 - already processed (481/2605)
2025-12-01 13:12:35,329 [INFO] Skipping bill 1713127 - already processed (482/2605)
2025-12-01 13:12:35,330 [INFO] Skipping bill 1818470 - already processed (483/2605)
2025-12-01 13:12:35,330 [INFO] Skipping bill 1837189 - already processed (484/2605)
2025-12-01 13:12:35,331 [INFO] Skipping bill 1635556 - already processed (485/2605)
2025-12-01 13:12:35,331 [INFO] Skipping bill 1692465 - already processed (486/2605)
2025-12-01 13:12:35,331 [INFO] Skipping bill 1843326 - already processed (487/2605)
2025-12-01 13:12:35,331 [INFO] Skipping bill 1822203 - already processed (488/2605)
2025-12-01 13:12:35,331 [INFO] Skipping bill 1838434 - already processed (489/2605)
2025-12-01 13:12:35,331 [INFO] Skipping bill 1714042 - already processed (490/2605)
2025-12-01 13:12:35,331 [INFO] Skipping bill 1840824 - already processed (491/2605)
2025-12-01 13:12:35,331 [INFO] Skipping bill 1810043 - already processed (492/2605)
2025-12-01 13:12:35,331 [INFO] Skipping bill 1762665 - already processed (493/2605)
2025-12-01 13:12:35,331 [INFO] Skipping bill 1831619 - already processed (494/2605)
2025-12-01 13:12:35,331 [INFO] Skipping bill 1712988 - already processed (495/2605)
2025-12-01 13:12:35,331 [INFO] Skipping bill 1704077 - already processed (496/2605)
2025-12-01 13:12:35,331 [INFO] Skipping bill 1712903 - already processed (497/2605)
2025-12-01 13:12:35,332 [INFO] Skipping bill 1818714 - already processed (498/2605)
2025-12-01 13:12:35,332 [INFO] Skipping bill 1842743 - already processed (499/2605)
2025-12-01 13:12:35,332 [INFO] Processing 500/2605: Bill ID 1838518
2025-12-01 13:12:37,900 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:12:37,902 [ERROR] Failed to generate report for bill 1838518: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 853564 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:12:37,990 [INFO] Saved 2605 reports to data/bill_reports.json
2025-12-01 13:12:37,990 [INFO] Progress: 500/2605 - Processed: 0, Skipped: 488, Errors: 12
2025-12-01 13:12:38,995 [INFO] Processing 501/2605: Bill ID 1794181
2025-12-01 13:12:39,538 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:12:39,540 [ERROR] Failed to generate report for bill 1794181: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 151032 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
    ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
    ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
    ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 151032 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:12:40,547 [INFO] Processing 502/2605: Bill ID 1708593
2025-12-01 13:12:42,378 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:12:42,382 [ERROR] Failed to generate report for bill 1708593: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 139146 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:12:43,392 [INFO] Processing 503/2605: Bill ID 1704148
2025-12-01 13:12:45,377 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:12:45,379 [ERROR] Failed to generate report for bill 1704148: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 823023 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:12:46,390 [INFO] Processing 504/2605: Bill ID 1704278
2025-12-01 13:12:48,550 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:12:48,554 [ERROR] Failed to generate report for bill 1704278: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 823015 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:12:49,564 [INFO] Skipping bill 1714051 - already processed (505/2605)
2025-12-01 13:12:49,564 [INFO] Skipping bill 1951980 - already processed (506/2605)
2025-12-01 13:12:49,565 [INFO] Skipping bill 1942546 - already processed (507/2605)
2025-12-01 13:12:49,565 [INFO] Skipping bill 1954662 - already processed (508/2605)
2025-12-01 13:12:49,565 [INFO] Skipping bill 1962278 - already processed (509/2605)
2025-12-01 13:12:49,565 [INFO] Skipping bill 1959604 - already processed (510/2605)
2025-12-01 13:12:49,565 [INFO] Skipping bill 1961963 - already processed (511/2605)
2025-12-01 13:12:49,565 [INFO] Skipping bill 1906420 - already processed (512/2605)
2025-12-01 13:12:49,565 [INFO] Skipping bill 1959700 - already processed (513/2605)
2025-12-01 13:12:49,566 [INFO] Skipping bill 1960223 - already processed (514/2605)
2025-12-01 13:12:49,566 [INFO] Skipping bill 1955104 - already processed (515/2605)
2025-12-01 13:12:49,566 [INFO] Skipping bill 1962582 - already processed (516/2605)
2025-12-01 13:12:49,566 [INFO] Skipping bill 1945671 - already processed (517/2605)
2025-12-01 13:12:49,566 [INFO] Skipping bill 1927329 - already processed (518/2605)
2025-12-01 13:12:49,566 [INFO] Skipping bill 1950703 - already processed (519/2605)
2025-12-01 13:12:49,566 [INFO] Skipping bill 1962488 - already processed (520/2605)
2025-12-01 13:12:49,567 [INFO] Skipping bill 1945525 - already processed (521/2605)
2025-12-01 13:12:49,567 [INFO] Skipping bill 1958920 - already processed (522/2605)
2025-12-01 13:12:49,567 [INFO] Skipping bill 1962097 - already processed (523/2605)
2025-12-01 13:12:49,567 [INFO] Skipping bill 1963192 - already processed (524/2605)
2025-12-01 13:12:49,567 [INFO] Skipping bill 1947169 - already processed (525/2605)
2025-12-01 13:12:49,567 [INFO] Skipping bill 1961929 - already processed (526/2605)
2025-12-01 13:12:49,568 [INFO] Skipping bill 1962057 - already processed (527/2605)
2025-12-01 13:12:49,568 [INFO] Skipping bill 1973797 - already processed (528/2605)
2025-12-01 13:12:49,568 [INFO] Skipping bill 1963087 - already processed (529/2605)
2025-12-01 13:12:49,568 [INFO] Skipping bill 1940139 - already processed (530/2605)
2025-12-01 13:12:49,568 [INFO] Skipping bill 1941211 - already processed (531/2605)
2025-12-01 13:12:49,568 [INFO] Skipping bill 1906434 - already processed (532/2605)
2025-12-01 13:12:49,568 [INFO] Skipping bill 1963178 - already processed (533/2605)
2025-12-01 13:12:49,568 [INFO] Skipping bill 1954188 - already processed (534/2605)
2025-12-01 13:12:49,569 [INFO] Skipping bill 1954475 - already processed (535/2605)
2025-12-01 13:12:49,569 [INFO] Skipping bill 1957381 - already processed (536/2605)
2025-12-01 13:12:49,569 [INFO] Skipping bill 1962329 - already processed (537/2605)
2025-12-01 13:12:49,569 [INFO] Skipping bill 1962675 - already processed (538/2605)
2025-12-01 13:12:49,569 [INFO] Skipping bill 1935756 - already processed (539/2605)
2025-12-01 13:12:49,569 [INFO] Skipping bill 1945467 - already processed (540/2605)
2025-12-01 13:12:49,569 [INFO] Skipping bill 1907066 - already processed (541/2605)
2025-12-01 13:12:49,569 [INFO] Skipping bill 1985138 - already processed (542/2605)
2025-12-01 13:12:49,570 [INFO] Skipping bill 1961501 - already processed (543/2605)
2025-12-01 13:12:49,570 [INFO] Skipping bill 1962291 - already processed (544/2605)
2025-12-01 13:12:49,570 [INFO] Skipping bill 2034790 - already processed (545/2605)
2025-12-01 13:12:49,570 [INFO] Skipping bill 2047690 - already processed (546/2605)
2025-12-01 13:12:49,570 [INFO] Skipping bill 2052256 - already processed (547/2605)
2025-12-01 13:12:49,570 [INFO] Skipping bill 1962885 - already processed (548/2605)
2025-12-01 13:12:49,570 [INFO] Skipping bill 1960413 - already processed (549/2605)
2025-12-01 13:12:49,570 [INFO] Skipping bill 1959956 - already processed (550/2605)
2025-12-01 13:12:49,570 [INFO] Processing 551/2605: Bill ID 1962986
2025-12-01 13:12:52,954 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:12:52,957 [ERROR] Failed to generate report for bill 1962986: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1167379 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:12:53,969 [INFO] Processing 552/2605: Bill ID 1960510
2025-12-01 13:12:54,501 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:12:54,503 [ERROR] Failed to generate report for bill 1960510: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 156228 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:12:55,514 [INFO] Skipping bill 1962952 - already processed (553/2605)
2025-12-01 13:12:55,515 [INFO] Processing 554/2605: Bill ID 1645841
2025-12-01 13:12:56,131 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:12:56,133 [ERROR] Failed to generate report for bill 1645841: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 162324 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:12:57,144 [INFO] Skipping bill 1799709 - already processed (555/2605)
2025-12-01 13:12:57,144 [INFO] Skipping bill 1797422 - already processed (556/2605)
2025-12-01 13:12:57,145 [INFO] Skipping bill 1801018 - already processed (557/2605)
2025-12-01 13:12:57,145 [INFO] Skipping bill 1799688 - already processed (558/2605)
2025-12-01 13:12:57,145 [INFO] Skipping bill 1909475 - already processed (559/2605)
2025-12-01 13:12:57,145 [INFO] Skipping bill 1921138 - already processed (560/2605)
2025-12-01 13:12:57,145 [INFO] Skipping bill 1917007 - already processed (561/2605)
2025-12-01 13:12:57,145 [INFO] Skipping bill 1921879 - already processed (562/2605)
2025-12-01 13:12:57,145 [INFO] Skipping bill 1915249 - already processed (563/2605)
2025-12-01 13:12:57,146 [INFO] Skipping bill 1912345 - already processed (564/2605)
2025-12-01 13:12:57,146 [INFO] Processing 565/2605: Bill ID 1897676
2025-12-01 13:12:57,765 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:12:57,767 [ERROR] Failed to generate report for bill 1897676: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 165130 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
    ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
    ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 165130 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:12:58,776 [INFO] Skipping bill 1847772 - already processed (566/2605) 2025-12-01 13:12:58,780 [INFO] Skipping bill 1825218 - already processed (567/2605) 2025-12-01 13:12:58,781 [INFO] Skipping bill 1839463 - already processed (568/2605) 2025-12-01 13:12:58,782 [INFO] Skipping bill 1665194 - already processed (569/2605) 2025-12-01 13:12:58,782 [INFO] Skipping bill 1708118 - already processed (570/2605) 2025-12-01 13:12:58,782 [INFO] Skipping bill 1802090 - already processed (571/2605) 2025-12-01 13:12:58,782 [INFO] Skipping bill 1823725 - already processed (572/2605) 2025-12-01 13:12:58,782 [INFO] Skipping bill 1845657 - already processed (573/2605) 2025-12-01 13:12:58,782 [INFO] Skipping bill 1846612 - already processed (574/2605) 2025-12-01 13:12:58,782 [INFO] Skipping bill 1870077 - already processed (575/2605) 2025-12-01 13:12:58,782 [INFO] Skipping bill 1870897 - already processed (576/2605) 2025-12-01 13:12:58,782 [INFO] Skipping bill 1761153 - already processed (577/2605) 2025-12-01 13:12:58,782 [INFO] Skipping bill 1760883 - already 
processed (578/2605) 2025-12-01 13:12:58,782 [INFO] Skipping bill 1752922 - already processed (579/2605) 2025-12-01 13:12:58,782 [INFO] Skipping bill 1873484 - already processed (580/2605) 2025-12-01 13:12:58,782 [INFO] Skipping bill 1990915 - already processed (581/2605) 2025-12-01 13:12:58,782 [INFO] Skipping bill 1969038 - already processed (582/2605) 2025-12-01 13:12:58,782 [INFO] Skipping bill 1993838 - already processed (583/2605) 2025-12-01 13:12:58,782 [INFO] Skipping bill 1958795 - already processed (584/2605) 2025-12-01 13:12:58,782 [INFO] Skipping bill 1977734 - already processed (585/2605) 2025-12-01 13:12:58,782 [INFO] Skipping bill 1937592 - already processed (586/2605) 2025-12-01 13:12:58,782 [INFO] Skipping bill 1963811 - already processed (587/2605) 2025-12-01 13:12:58,782 [INFO] Skipping bill 2029033 - already processed (588/2605) 2025-12-01 13:12:58,782 [INFO] Skipping bill 2026836 - already processed (589/2605) 2025-12-01 13:12:58,782 [INFO] Skipping bill 2027180 - already processed (590/2605) 2025-12-01 13:12:58,782 [INFO] Skipping bill 2021349 - already processed (591/2605) 2025-12-01 13:12:58,782 [INFO] Skipping bill 2030059 - already processed (592/2605) 2025-12-01 13:12:58,782 [INFO] Skipping bill 1823829 - already processed (593/2605) 2025-12-01 13:12:58,782 [INFO] Skipping bill 1824037 - already processed (594/2605) 2025-12-01 13:12:58,782 [INFO] Skipping bill 1850989 - already processed (595/2605) 2025-12-01 13:12:58,782 [INFO] Skipping bill 1826921 - already processed (596/2605) 2025-12-01 13:12:58,782 [INFO] Skipping bill 1690087 - already processed (597/2605) 2025-12-01 13:12:58,782 [INFO] Processing 598/2605: Bill ID 1693524 2025-12-01 13:12:59,508 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:12:59,510 [ERROR] Failed to generate report for bill 1693524: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. 
However, your messages resulted in 225348 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 225348 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:13:00,520 [INFO] Skipping bill 1665637 - already processed (599/2605) 2025-12-01 13:13:00,521 [INFO] Skipping bill 1682635 - already processed (600/2605) 2025-12-01 13:13:00,521 [INFO] Processing 601/2605: Bill ID 1692213 2025-12-01 13:13:01,220 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:13:01,222 [ERROR] Failed to generate report for bill 1692213: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 225670 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 225670 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:13:02,230 [INFO] Processing 602/2605: Bill ID 1846626 2025-12-01 13:13:02,988 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:13:02,991 [ERROR] Failed to generate report for bill 1846626: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 225565 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 225565 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:13:04,003 [INFO] Processing 603/2605: Bill ID 1846675 2025-12-01 13:13:04,728 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:13:04,733 [ERROR] Failed to generate report for bill 1846675: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 225290 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 225290 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:13:05,744 [INFO] Skipping bill 1653927 - already processed (604/2605) 2025-12-01 13:13:05,745 [INFO] Skipping bill 1959326 - already processed (605/2605) 2025-12-01 13:13:05,746 [INFO] Skipping bill 1948632 - already processed (606/2605) 2025-12-01 13:13:05,746 [INFO] Skipping bill 1955060 - already processed (607/2605) 2025-12-01 13:13:05,746 [INFO] Skipping bill 1946546 - already processed (608/2605) 2025-12-01 13:13:05,746 [INFO] Processing 609/2605: Bill ID 1916487 2025-12-01 13:13:06,572 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:13:06,574 [ERROR] Failed to generate report for bill 1916487: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 242611 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 242611 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:13:07,584 [INFO] Skipping bill 1949165 - already processed (610/2605) 2025-12-01 13:13:07,585 [INFO] Processing 611/2605: Bill ID 1938020 2025-12-01 13:13:08,415 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:13:08,418 [ERROR] Failed to generate report for bill 1938020: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 238559 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(messages, stop=stop, run_manager=run_manager, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 238559 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:13:09,429 [INFO] Processing 612/2605: Bill ID 1937464
2025-12-01 13:13:10,168 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:13:10,170 [ERROR] Failed to generate report for bill 1937464: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 238890 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:13:11,182 [INFO] Processing 613/2605: Bill ID 1713253
2025-12-01 13:13:11,794 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:13:11,797 [ERROR] Failed to generate report for bill 1713253: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 176351 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:13:12,807 [INFO] Skipping bill 1804283 - already processed (614/2605)
2025-12-01 13:13:12,808 [INFO] Skipping bill 1795473 - already processed (615/2605)
2025-12-01 13:13:12,808 [INFO] Skipping bill 1855405 - already processed (616/2605)
2025-12-01 13:13:12,809 [INFO] Skipping bill 1848823 - already processed (617/2605)
2025-12-01 13:13:12,809 [INFO] Skipping bill 1842483 - already processed (618/2605)
2025-12-01 13:13:12,809 [INFO] Skipping bill 1854786 - already processed (619/2605)
2025-12-01 13:13:12,809 [INFO] Skipping bill 1795485 - already processed (620/2605)
2025-12-01 13:13:12,809 [INFO] Skipping bill 1854739 - already processed (621/2605)
2025-12-01 13:13:12,809 [INFO] Skipping bill 1799043 - already processed (622/2605)
2025-12-01 13:13:12,810 [INFO] Skipping bill 1974284 - already processed (623/2605)
2025-12-01 13:13:12,810 [INFO] Skipping bill 1974163 - already processed (624/2605)
2025-12-01 13:13:12,810 [INFO] Skipping bill 1994222 - already processed (625/2605)
2025-12-01 13:13:12,810 [INFO] Skipping bill 1970124 - already
processed (626/2605)
2025-12-01 13:13:12,810 [INFO] Skipping bill 1908054 - already processed (627/2605)
2025-12-01 13:13:12,810 [INFO] Skipping bill 1904666 - already processed (628/2605)
2025-12-01 13:13:12,810 [INFO] Skipping bill 1975714 - already processed (629/2605)
2025-12-01 13:13:12,810 [INFO] Skipping bill 1974214 - already processed (630/2605)
2025-12-01 13:13:12,810 [INFO] Skipping bill 1765786 - already processed (631/2605)
2025-12-01 13:13:12,810 [INFO] Skipping bill 1751941 - already processed (632/2605)
2025-12-01 13:13:12,810 [INFO] Skipping bill 1747213 - already processed (633/2605)
2025-12-01 13:13:12,810 [INFO] Skipping bill 1872579 - already processed (634/2605)
2025-12-01 13:13:12,811 [INFO] Skipping bill 1831630 - already processed (635/2605)
2025-12-01 13:13:12,811 [INFO] Skipping bill 1869553 - already processed (636/2605)
2025-12-01 13:13:12,811 [INFO] Skipping bill 1856482 - already processed (637/2605)
2025-12-01 13:13:12,811 [INFO] Skipping bill 1877177 - already processed (638/2605)
2025-12-01 13:13:12,811 [INFO] Skipping bill 1856535 - already processed (639/2605)
2025-12-01 13:13:12,811 [INFO] Processing 640/2605: Bill ID 1856106
2025-12-01 13:13:13,331 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:13:13,333 [ERROR] Failed to generate report for bill 1856106: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 139494 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:13:13,386 [INFO] Saved 2605 reports to data/bill_reports.json
2025-12-01 13:13:13,386 [INFO] Progress: 640/2605 - Processed: 0, Skipped: 611, Errors: 29
2025-12-01 13:13:14,388 [INFO] Skipping bill 2036140 - already processed (641/2605)
2025-12-01 13:13:14,389 [INFO] Skipping bill 2013841 - already processed (642/2605)
2025-12-01 13:13:14,389 [INFO] Skipping bill 2036152 - already processed (643/2605)
2025-12-01 13:13:14,389 [INFO] Skipping bill 2035054 - already processed (644/2605)
2025-12-01 13:13:14,389 [INFO] Skipping bill 2020836 - already processed (645/2605)
2025-12-01 13:13:14,389 [INFO] Skipping bill 2034414 - already processed (646/2605)
2025-12-01 13:13:14,389 [INFO] Skipping bill 2036147 - already processed (647/2605)
2025-12-01 13:13:14,390 [INFO] Skipping bill 2017245 - already processed (648/2605)
2025-12-01 13:13:14,390 [INFO] Processing 649/2605: Bill ID 2020366
2025-12-01 13:13:14,866 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:13:14,868 [ERROR] Failed to
generate report for bill 2020366: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 138834 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:13:15,878 [INFO] Skipping bill 1754734 - already processed (650/2605)
2025-12-01 13:13:15,880 [INFO] Skipping bill 1766525 - already processed (651/2605)
2025-12-01 13:13:15,880 [INFO] Skipping bill 1993701 - already processed (652/2605)
2025-12-01 13:13:15,880 [INFO] Skipping bill 2024454 - already processed (653/2605)
2025-12-01 13:13:15,881 [INFO] Skipping bill 1989654 - already processed (654/2605)
2025-12-01 13:13:15,881 [INFO] Skipping bill 1923257 - already processed (655/2605)
2025-12-01 13:13:15,881 [INFO] Skipping bill 2012930 - already processed (656/2605)
2025-12-01 13:13:15,881 [INFO] Skipping bill 2022043 - already processed (657/2605)
2025-12-01 13:13:15,881 [INFO] Skipping bill 1977885 - already processed (658/2605)
2025-12-01 13:13:15,881 [INFO] Skipping bill 1903898 - already processed (659/2605)
2025-12-01 13:13:15,881 [INFO] Skipping bill 2022085 - already processed (660/2605)
2025-12-01 13:13:15,881 [INFO] Skipping bill 2024471 - already processed (661/2605)
2025-12-01 13:13:15,881 [INFO] Skipping bill 1962449 - already
processed (662/2605)
2025-12-01 13:13:15,881 [INFO] Skipping bill 1948585 - already processed (663/2605)
2025-12-01 13:13:15,881 [INFO] Skipping bill 2027763 - already processed (664/2605)
2025-12-01 13:13:15,881 [INFO] Skipping bill 2038183 - already processed (665/2605)
2025-12-01 13:13:15,882 [INFO] Skipping bill 2012908 - already processed (666/2605)
2025-12-01 13:13:15,882 [INFO] Skipping bill 1703457 - already processed (667/2605)
2025-12-01 13:13:15,882 [INFO] Skipping bill 1703326 - already processed (668/2605)
2025-12-01 13:13:15,882 [INFO] Skipping bill 1703583 - already processed (669/2605)
2025-12-01 13:13:15,882 [INFO] Skipping bill 1703488 - already processed (670/2605)
2025-12-01 13:13:15,882 [INFO] Skipping bill 1694229 - already processed (671/2605)
2025-12-01 13:13:15,882 [INFO] Skipping bill 1697293 - already processed (672/2605)
2025-12-01 13:13:15,882 [INFO] Skipping bill 1694179 - already processed (673/2605)
2025-12-01 13:13:15,883 [INFO] Skipping bill 1707790 - already processed (674/2605)
2025-12-01 13:13:15,883 [INFO] Skipping bill 1691409 - already processed (675/2605)
2025-12-01 13:13:15,883 [INFO] Skipping bill 1679149 - already processed (676/2605)
2025-12-01 13:13:15,883 [INFO] Skipping bill 1697468 - already processed (677/2605)
2025-12-01 13:13:15,883 [INFO] Skipping bill 1703148 - already processed (678/2605)
2025-12-01 13:13:15,883 [INFO] Skipping bill 1835739 - already processed (679/2605)
2025-12-01 13:13:15,883 [INFO] Skipping bill 1840482 - already processed (680/2605)
2025-12-01 13:13:15,883 [INFO] Skipping bill 1842215 - already processed (681/2605)
2025-12-01 13:13:15,883 [INFO] Skipping bill 1838035 - already processed (682/2605)
2025-12-01 13:13:15,883 [INFO] Skipping bill 1842106 - already processed (683/2605)
2025-12-01 13:13:15,883 [INFO] Skipping bill 1839236 - already processed (684/2605)
2025-12-01 13:13:15,883 [INFO] Skipping bill 1839142 - already processed (685/2605)
2025-12-01 13:13:15,883 [INFO] Skipping bill
1838028 - already processed (686/2605)
2025-12-01 13:13:15,883 [INFO] Skipping bill 1837867 - already processed (687/2605)
2025-12-01 13:13:15,883 [INFO] Skipping bill 1835606 - already processed (688/2605)
2025-12-01 13:13:15,883 [INFO] Skipping bill 1825025 - already processed (689/2605)
2025-12-01 13:13:15,883 [INFO] Skipping bill 1826297 - already processed (690/2605)
2025-12-01 13:13:15,883 [INFO] Skipping bill 1847549 - already processed (691/2605)
2025-12-01 13:13:15,884 [INFO] Skipping bill 1839307 - already processed (692/2605)
2025-12-01 13:13:15,884 [INFO] Skipping bill 1842129 - already processed (693/2605)
2025-12-01 13:13:15,884 [INFO] Skipping bill 1837909 - already processed (694/2605)
2025-12-01 13:13:15,884 [INFO] Skipping bill 1797714 - already processed (695/2605)
2025-12-01 13:13:15,884 [INFO] Skipping bill 1839204 - already processed (696/2605)
2025-12-01 13:13:15,884 [INFO] Skipping bill 1835710 - already processed (697/2605)
2025-12-01 13:13:15,884 [INFO] Skipping bill 1837838 - already processed (698/2605)
2025-12-01 13:13:15,884 [INFO] Skipping bill 1837893 - already processed (699/2605)
2025-12-01 13:13:15,884 [INFO] Skipping bill 1835695 - already processed (700/2605)
2025-12-01 13:13:15,884 [INFO] Skipping bill 1837995 - already processed (701/2605)
2025-12-01 13:13:15,884 [INFO] Skipping bill 1842172 - already processed (702/2605)
2025-12-01 13:13:15,884 [INFO] Skipping bill 1817737 - already processed (703/2605)
2025-12-01 13:13:15,884 [INFO] Skipping bill 1953268 - already processed (704/2605)
2025-12-01 13:13:15,884 [INFO] Skipping bill 1961326 - already processed (705/2605)
2025-12-01 13:13:15,884 [INFO] Skipping bill 1961123 - already processed (706/2605)
2025-12-01 13:13:15,884 [INFO] Skipping bill 1953218 - already processed (707/2605)
2025-12-01 13:13:15,884 [INFO] Skipping bill 1945231 - already processed (708/2605)
2025-12-01 13:13:15,884 [INFO] Skipping bill 1949851 - already processed (709/2605)
2025-12-01 13:13:15,884
[INFO] Skipping bill 1945281 - already processed (710/2605)
2025-12-01 13:13:15,885 [INFO] Skipping bill 1945285 - already processed (711/2605)
2025-12-01 13:13:15,885 [INFO] Skipping bill 1949794 - already processed (712/2605)
2025-12-01 13:13:15,885 [INFO] Skipping bill 1949746 - already processed (713/2605)
2025-12-01 13:13:15,885 [INFO] Skipping bill 1949835 - already processed (714/2605)
2025-12-01 13:13:15,885 [INFO] Skipping bill 1961190 - already processed (715/2605)
2025-12-01 13:13:15,885 [INFO] Skipping bill 1953113 - already processed (716/2605)
2025-12-01 13:13:15,885 [INFO] Skipping bill 1936713 - already processed (717/2605)
2025-12-01 13:13:15,885 [INFO] Skipping bill 1939378 - already processed (718/2605)
2025-12-01 13:13:15,885 [INFO] Skipping bill 1909925 - already processed (719/2605)
2025-12-01 13:13:15,885 [INFO] Skipping bill 1961341 - already processed (720/2605)
2025-12-01 13:13:15,885 [INFO] Skipping bill 1922403 - already processed (721/2605)
2025-12-01 13:13:15,885 [INFO] Skipping bill 1899660 - already processed (722/2605)
2025-12-01 13:13:15,885 [INFO] Skipping bill 1961327 - already processed (723/2605)
2025-12-01 13:13:15,885 [INFO] Skipping bill 1953223 - already processed (724/2605)
2025-12-01 13:13:15,885 [INFO] Skipping bill 1953246 - already processed (725/2605)
2025-12-01 13:13:15,885 [INFO] Skipping bill 1955835 - already processed (726/2605)
2025-12-01 13:13:15,885 [INFO] Skipping bill 1933617 - already processed (727/2605)
2025-12-01 13:13:15,885 [INFO] Skipping bill 1945335 - already processed (728/2605)
2025-12-01 13:13:15,885 [INFO] Skipping bill 1961410 - already processed (729/2605)
2025-12-01 13:13:15,886 [INFO] Skipping bill 1926508 - already processed (730/2605)
2025-12-01 13:13:15,886 [INFO] Skipping bill 1943426 - already processed (731/2605)
2025-12-01 13:13:15,886 [INFO] Skipping bill 1949808 - already processed (732/2605)
2025-12-01 13:13:15,886 [INFO] Skipping bill 1949848 - already processed (733/2605)
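Every failure in this run is the same `context_length_exceeded` error: the serialized bill passed as `bill_json` into `chain.invoke` inside `create_detailed_report` exceeds the model's 128,000-token window on its own. A minimal pre-flight guard could clamp the payload before the request is sent. The sketch below is illustrative, not the repository's code: the helper name `truncate_bill_json`, the token budget, and the 4-characters-per-token heuristic are all assumptions (a real tokenizer such as tiktoken would count more accurately).

```python
# Sketch: clamp an oversized bill payload before sending it to the model.
# Assumes roughly 4 characters per token as a coarse heuristic; the helper
# name and budget constant are illustrative, not part of the project.
import json

MAX_INPUT_TOKENS = 110_000  # headroom under the 128k context window (assumption)
CHARS_PER_TOKEN = 4         # coarse heuristic for English/JSON text

def truncate_bill_json(bill: dict, max_tokens: int = MAX_INPUT_TOKENS) -> str:
    """Serialize a bill dict, trimming its longest string fields to fit a token budget."""
    budget_chars = max_tokens * CHARS_PER_TOKEN
    bill_json = json.dumps(bill)
    if len(bill_json) <= budget_chars:
        return bill_json
    # Trim the largest string values (usually the full bill text) until we fit.
    trimmed = dict(bill)
    overflow = len(bill_json) - budget_chars
    for key, value in sorted(bill.items(), key=lambda kv: -len(str(kv[1]))):
        if not isinstance(value, str) or overflow <= 0:
            continue
        cut = min(overflow, len(value))
        trimmed[key] = value[: len(value) - cut] + " [truncated]"
        overflow -= cut
    return json.dumps(trimmed)
```

A guard like this would turn the hard 400 errors above into degraded-but-successful reports; logging which fields were truncated would preserve an audit trail for those bills.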
2025-12-01 13:13:15,886 [INFO] Skipping bill 1947517 - already processed (734/2605)
2025-12-01 13:13:15,886 [INFO] Skipping bill 1945267 - already processed (735/2605)
2025-12-01 13:13:15,886 [INFO] Skipping bill 1961205 - already processed (736/2605)
2025-12-01 13:13:15,886 [INFO] Skipping bill 1953214 - already processed (737/2605)
2025-12-01 13:13:15,886 [INFO] Skipping bill 1943446 - already processed (738/2605)
2025-12-01 13:13:15,886 [INFO] Skipping bill 1973042 - already processed (739/2605)
2025-12-01 13:13:15,886 [INFO] Skipping bill 1961299 - already processed (740/2605)
2025-12-01 13:13:15,886 [INFO] Skipping bill 1933601 - already processed (741/2605)
2025-12-01 13:13:15,886 [INFO] Skipping bill 1933621 - already processed (742/2605)
2025-12-01 13:13:15,886 [INFO] Processing 743/2605: Bill ID 1919287
2025-12-01 13:13:16,403 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:13:16,405 [ERROR] Failed to generate report for bill 1919287: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 128427 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:13:17,415 [INFO] Skipping bill 1933460 - already processed (744/2605)
2025-12-01 13:13:17,416 [INFO] Skipping bill 1933670 - already processed (745/2605)
2025-12-01 13:13:17,416 [INFO] Skipping bill 1922377 - already processed (746/2605)
2025-12-01 13:13:17,416 [INFO] Skipping bill 1735361 - already processed (747/2605)
2025-12-01 13:13:17,416 [INFO] Skipping bill 1742559 - already processed (748/2605)
2025-12-01 13:13:17,416 [INFO] Skipping bill 1775856 - already processed (749/2605)
2025-12-01 13:13:17,416 [INFO] Skipping bill 1738097 - already processed (750/2605)
2025-12-01 13:13:17,417 [INFO] Skipping bill 1794760 - already processed (751/2605)
2025-12-01 13:13:17,417 [INFO] Skipping bill 1736131 - already processed (752/2605)
2025-12-01 13:13:17,417 [INFO] Skipping bill 1885778 - already processed (753/2605)
2025-12-01 13:13:17,417 [INFO] Skipping bill 1808592 - already processed (754/2605)
2025-12-01 13:13:17,417 [INFO] Skipping bill 1878825 - already processed (755/2605)
2025-12-01 13:13:17,417 [INFO] Skipping bill 1884638 - already
processed (756/2605) 2025-12-01 13:13:17,417 [INFO] Skipping bill 1738996 - already processed (757/2605) 2025-12-01 13:13:17,417 [INFO] Skipping bill 1878228 - already processed (758/2605) 2025-12-01 13:13:17,417 [INFO] Skipping bill 1872865 - already processed (759/2605) 2025-12-01 13:13:17,418 [INFO] Skipping bill 1881167 - already processed (760/2605) 2025-12-01 13:13:17,418 [INFO] Skipping bill 1881743 - already processed (761/2605) 2025-12-01 13:13:17,418 [INFO] Skipping bill 1852772 - already processed (762/2605) 2025-12-01 13:13:17,418 [INFO] Skipping bill 1884104 - already processed (763/2605) 2025-12-01 13:13:17,418 [INFO] Skipping bill 1738794 - already processed (764/2605) 2025-12-01 13:13:17,418 [INFO] Skipping bill 1893080 - already processed (765/2605) 2025-12-01 13:13:17,418 [INFO] Skipping bill 1881922 - already processed (766/2605) 2025-12-01 13:13:17,418 [INFO] Skipping bill 1883178 - already processed (767/2605) 2025-12-01 13:13:17,418 [INFO] Skipping bill 1881587 - already processed (768/2605) 2025-12-01 13:13:17,418 [INFO] Skipping bill 1884487 - already processed (769/2605) 2025-12-01 13:13:17,418 [INFO] Skipping bill 1859182 - already processed (770/2605) 2025-12-01 13:13:17,418 [INFO] Skipping bill 1866861 - already processed (771/2605) 2025-12-01 13:13:17,418 [INFO] Processing 772/2605: Bill ID 1891836 2025-12-01 13:13:18,045 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:13:18,046 [ERROR] Failed to generate report for bill 1891836: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 144997 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 144997 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:13:19,055 [INFO] Skipping bill 1883738 - already processed (773/2605) 2025-12-01 13:13:19,056 [INFO] Skipping bill 1682652 - already processed (774/2605) 2025-12-01 13:13:19,056 [INFO] Skipping bill 1742464 - already processed (775/2605) 2025-12-01 13:13:19,057 [INFO] Skipping bill 1728366 - already processed (776/2605) 2025-12-01 13:13:19,057 [INFO] Skipping bill 1726524 - already processed (777/2605) 2025-12-01 13:13:19,057 [INFO] Skipping bill 1737208 - already processed (778/2605) 2025-12-01 13:13:19,057 [INFO] Skipping bill 1749398 - already processed (779/2605) 2025-12-01 13:13:19,057 [INFO] Skipping bill 1738008 - already processed (780/2605) 2025-12-01 13:13:19,057 [INFO] Skipping bill 1735894 - already processed (781/2605) 2025-12-01 13:13:19,057 [INFO] Skipping bill 1841416 - already processed (782/2605) 2025-12-01 13:13:19,058 [INFO] Skipping bill 1736739 - already processed (783/2605) 2025-12-01 13:13:19,058 [INFO] Skipping bill 1737586 - already processed (784/2605) 2025-12-01 13:13:19,058 [INFO] Skipping bill 1884557 - already 
processed (785/2605) 2025-12-01 13:13:19,058 [INFO] Processing 786/2605: Bill ID 1875094 2025-12-01 13:13:19,981 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:13:19,984 [ERROR] Failed to generate report for bill 1875094: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 281291 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... 
**kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... **kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return 
self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 281291 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:13:20,995 [INFO] Processing 787/2605: Bill ID 1755026 2025-12-01 13:13:21,754 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:13:21,757 [ERROR] Failed to generate report for bill 1755026: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 211752 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 211752 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:13:22,763 [INFO] Processing 788/2605: Bill ID 1871591 2025-12-01 13:13:23,572 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:13:23,574 [ERROR] Failed to generate report for bill 1871591: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 247438 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 247438 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:13:24,586 [INFO] Processing 789/2605: Bill ID 1760451 2025-12-01 13:13:25,619 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:13:25,622 [ERROR] Failed to generate report for bill 1760451: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 254452 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 254452 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:13:26,633 [INFO] Processing 790/2605: Bill ID 1880948 2025-12-01 13:13:27,771 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:13:27,772 [ERROR] Failed to generate report for bill 1880948: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 280764 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 280764 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:13:27,825 [INFO] Saved 2605 reports to data/bill_reports.json
2025-12-01 13:13:27,825 [INFO] Progress: 790/2605 - Processed: 0, Skipped: 753, Errors: 37
2025-12-01 13:13:28,830 [INFO] Processing 791/2605: Bill ID 1775764
2025-12-01 13:13:30,022 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:13:30,025 [ERROR] Failed to generate report for bill 1775764: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 323686 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:13:31,035 [INFO] Processing 792/2605: Bill ID 1884634
2025-12-01 13:13:32,075 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:13:32,077 [ERROR] Failed to generate report for bill 1884634: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 362014 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:13:33,088 [INFO] Skipping bill 2000828 - already processed (793/2605)
2025-12-01 13:13:33,089 [INFO] Skipping bill 2001551 - already processed (794/2605)
2025-12-01 13:13:33,089 [INFO] Skipping bill 1997130 - already processed (795/2605)
2025-12-01 13:13:33,089 [INFO] Skipping bill 2046647 - already processed (796/2605)
2025-12-01 13:13:33,089 [INFO] Skipping bill 2004206 - already processed (797/2605)
2025-12-01 13:13:33,090 [INFO] Skipping bill 1998184 - already processed (798/2605)
2025-12-01 13:13:33,090 [INFO] Skipping bill 2002506 - already processed (799/2605)
2025-12-01 13:13:33,090 [INFO] Skipping bill 2002695 - already processed (800/2605)
2025-12-01 13:13:33,091 [INFO] Skipping bill 2047070 - already processed (801/2605)
2025-12-01 13:13:33,091 [INFO] Skipping bill 2002923 - already processed (802/2605)
2025-12-01 13:13:33,091 [INFO] Skipping bill 1998946 - already processed (803/2605)
2025-12-01 13:13:33,091 [INFO] Skipping bill 1997259 - already processed (804/2605)
2025-12-01 13:13:33,091 [INFO] Skipping bill 2001269 - already
processed (805/2605)
2025-12-01 13:13:33,091 [INFO] Skipping bill 2000625 - already processed (806/2605)
2025-12-01 13:13:33,091 [INFO] Skipping bill 2002705 - already processed (807/2605)
2025-12-01 13:13:33,091 [INFO] Skipping bill 2046676 - already processed (808/2605)
2025-12-01 13:13:33,091 [INFO] Skipping bill 2046660 - already processed (809/2605)
2025-12-01 13:13:33,091 [INFO] Skipping bill 2003933 - already processed (810/2605)
2025-12-01 13:13:33,091 [INFO] Skipping bill 1997268 - already processed (811/2605)
2025-12-01 13:13:33,091 [INFO] Skipping bill 2019724 - already processed (812/2605)
2025-12-01 13:13:33,092 [INFO] Skipping bill 1997990 - already processed (813/2605)
2025-12-01 13:13:33,092 [INFO] Skipping bill 1998675 - already processed (814/2605)
2025-12-01 13:13:33,092 [INFO] Skipping bill 2002243 - already processed (815/2605)
2025-12-01 13:13:33,092 [INFO] Skipping bill 1997584 - already processed (816/2605)
2025-12-01 13:13:33,092 [INFO] Skipping bill 2002929 - already processed (817/2605)
2025-12-01 13:13:33,092 [INFO] Skipping bill 2001175 - already processed (818/2605)
2025-12-01 13:13:33,092 [INFO] Skipping bill 1998815 - already processed (819/2605)
2025-12-01 13:13:33,092 [INFO] Skipping bill 1998575 - already processed (820/2605)
2025-12-01 13:13:33,092 [INFO] Skipping bill 1999210 - already processed (821/2605)
2025-12-01 13:13:33,092 [INFO] Skipping bill 2001320 - already processed (822/2605)
2025-12-01 13:13:33,092 [INFO] Skipping bill 2053304 - already processed (823/2605)
2025-12-01 13:13:33,093 [INFO] Skipping bill 2001993 - already processed (824/2605)
2025-12-01 13:13:33,093 [INFO] Skipping bill 1999288 - already processed (825/2605)
2025-12-01 13:13:33,093 [INFO] Skipping bill 1998331 - already processed (826/2605)
2025-12-01 13:13:33,093 [INFO] Skipping bill 2003746 - already processed (827/2605)
2025-12-01 13:13:33,093 [INFO] Skipping bill 1927181 - already processed (828/2605)
2025-12-01 13:13:33,093 [INFO] Skipping bill 2030259 - already processed (829/2605)
2025-12-01 13:13:33,093 [INFO] Skipping bill 1997622 - already processed (830/2605)
2025-12-01 13:13:33,093 [INFO] Processing 831/2605: Bill ID 2028594
2025-12-01 13:13:34,016 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:13:34,019 [ERROR] Failed to generate report for bill 2028594: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 252856 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:13:35,031 [INFO] Processing 832/2605: Bill ID 2038620
2025-12-01 13:13:36,065 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:13:36,067 [ERROR] Failed to generate report for bill 2038620: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 311445 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:13:37,078 [INFO] Processing 833/2605: Bill ID 2024637
2025-12-01 13:13:40,365 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:13:40,367 [ERROR] Failed to generate report for bill 2024637: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 218599 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:13:41,378 [INFO] Skipping bill 1780182 - already processed (834/2605)
2025-12-01 13:13:41,379 [INFO] Skipping bill 1895692 - already processed (835/2605)
2025-12-01 13:13:41,379 [INFO] Skipping bill 1780190 - already processed (836/2605)
2025-12-01 13:13:41,380 [INFO] Skipping bill 1780196 - already processed (837/2605)
2025-12-01 13:13:41,380 [INFO] Skipping bill 1780166 - already processed (838/2605)
2025-12-01 13:13:41,380 [INFO] Skipping bill 1888099 - already processed (839/2605)
2025-12-01 13:13:41,380 [INFO] Skipping bill 1852983 - already processed (840/2605)
2025-12-01 13:13:41,380 [INFO] Skipping bill 1852813 - already processed (841/2605)
2025-12-01 13:13:41,380 [INFO] Skipping bill 2037995 - already processed (842/2605)
2025-12-01 13:13:41,381 [INFO] Skipping bill 2043787 - already processed (843/2605)
2025-12-01 13:13:41,381 [INFO] Skipping bill 2035241 - already processed (844/2605)
2025-12-01 13:13:41,381 [INFO] Skipping bill 2035278 - already processed (845/2605)
2025-12-01 13:13:41,381 [INFO] Skipping bill 2038014 - already
processed (846/2605)
2025-12-01 13:13:41,381 [INFO] Skipping bill 2009885 - already processed (847/2605)
2025-12-01 13:13:41,381 [INFO] Skipping bill 2035768 - already processed (848/2605)
2025-12-01 13:13:41,381 [INFO] Skipping bill 2025453 - already processed (849/2605)
2025-12-01 13:13:41,382 [INFO] Skipping bill 2038856 - already processed (850/2605)
2025-12-01 13:13:41,382 [INFO] Skipping bill 2009892 - already processed (851/2605)
2025-12-01 13:13:41,382 [INFO] Skipping bill 1861260 - already processed (852/2605)
2025-12-01 13:13:41,382 [INFO] Skipping bill 1856334 - already processed (853/2605)
2025-12-01 13:13:41,382 [INFO] Skipping bill 1856821 - already processed (854/2605)
2025-12-01 13:13:41,382 [INFO] Skipping bill 1864646 - already processed (855/2605)
2025-12-01 13:13:41,382 [INFO] Skipping bill 1860647 - already processed (856/2605)
2025-12-01 13:13:41,383 [INFO] Skipping bill 1707979 - already processed (857/2605)
2025-12-01 13:13:41,383 [INFO] Skipping bill 1643078 - already processed (858/2605)
2025-12-01 13:13:41,383 [INFO] Skipping bill 1651590 - already processed (859/2605)
2025-12-01 13:13:41,383 [INFO] Skipping bill 1852405 - already processed (860/2605)
2025-12-01 13:13:41,383 [INFO] Skipping bill 1852812 - already processed (861/2605)
2025-12-01 13:13:41,383 [INFO] Skipping bill 1858711 - already processed (862/2605)
2025-12-01 13:13:41,384 [INFO] Skipping bill 1853103 - already processed (863/2605)
2025-12-01 13:13:41,384 [INFO] Skipping bill 1851979 - already processed (864/2605)
2025-12-01 13:13:41,384 [INFO] Skipping bill 1859186 - already processed (865/2605)
2025-12-01 13:13:41,384 [INFO] Skipping bill 1740589 - already processed (866/2605)
2025-12-01 13:13:41,384 [INFO] Skipping bill 1741802 - already processed (867/2605)
2025-12-01 13:13:41,384 [INFO] Skipping bill 1860410 - already processed (868/2605)
2025-12-01 13:13:41,384 [INFO] Skipping bill 1957720 - already processed (869/2605)
2025-12-01 13:13:41,384 [INFO] Skipping bill 1974786 - already processed (870/2605)
2025-12-01 13:13:41,384 [INFO] Skipping bill 1989670 - already processed (871/2605)
2025-12-01 13:13:41,384 [INFO] Skipping bill 1979597 - already processed (872/2605)
2025-12-01 13:13:41,384 [INFO] Skipping bill 1984757 - already processed (873/2605)
2025-12-01 13:13:41,384 [INFO] Skipping bill 2009204 - already processed (874/2605)
2025-12-01 13:13:41,384 [INFO] Skipping bill 2015254 - already processed (875/2605)
2025-12-01 13:13:41,384 [INFO] Skipping bill 1974962 - already processed (876/2605)
2025-12-01 13:13:41,384 [INFO] Skipping bill 2009276 - already processed (877/2605)
2025-12-01 13:13:41,384 [INFO] Skipping bill 1989103 - already processed (878/2605)
2025-12-01 13:13:41,384 [INFO] Skipping bill 1984950 - already processed (879/2605)
2025-12-01 13:13:41,384 [INFO] Skipping bill 1975975 - already processed (880/2605)
2025-12-01 13:13:41,385 [INFO] Skipping bill 2004610 - already processed (881/2605)
2025-12-01 13:13:41,385 [INFO] Skipping bill 2004938 - already processed (882/2605)
2025-12-01 13:13:41,385 [INFO] Skipping bill 1992603 - already processed (883/2605)
2025-12-01 13:13:41,385 [INFO] Skipping bill 1992640 - already processed (884/2605)
2025-12-01 13:13:41,385 [INFO] Skipping bill 1996293 - already processed (885/2605)
2025-12-01 13:13:41,385 [INFO] Skipping bill 2011831 - already processed (886/2605)
2025-12-01 13:13:41,385 [INFO] Skipping bill 2012661 - already processed (887/2605)
2025-12-01 13:13:41,385 [INFO] Skipping bill 1950967 - already processed (888/2605)
2025-12-01 13:13:41,385 [INFO] Skipping bill 1994787 - already processed (889/2605)
2025-12-01 13:13:41,385 [INFO] Skipping bill 2011159 - already processed (890/2605)
2025-12-01 13:13:41,385 [INFO] Skipping bill 2006411 - already processed (891/2605)
2025-12-01 13:13:41,385 [INFO] Skipping bill 2011256 - already processed (892/2605)
2025-12-01 13:13:41,385 [INFO] Skipping bill 2004789 - already processed (893/2605)
2025-12-01 13:13:41,385 [INFO] Skipping bill 1981280 - already processed (894/2605)
2025-12-01 13:13:41,385 [INFO] Skipping bill 2009071 - already processed (895/2605)
2025-12-01 13:13:41,385 [INFO] Skipping bill 1967748 - already processed (896/2605)
2025-12-01 13:13:41,385 [INFO] Skipping bill 1707150 - already processed (897/2605)
2025-12-01 13:13:41,385 [INFO] Skipping bill 1669781 - already processed (898/2605)
2025-12-01 13:13:41,385 [INFO] Skipping bill 1643012 - already processed (899/2605)
2025-12-01 13:13:41,385 [INFO] Skipping bill 1848903 - already processed (900/2605)
2025-12-01 13:13:41,385 [INFO] Skipping bill 1848260 - already processed (901/2605)
2025-12-01 13:13:41,385 [INFO] Skipping bill 1820844 - already processed (902/2605)
2025-12-01 13:13:41,386 [INFO] Skipping bill 1851922 - already processed (903/2605)
2025-12-01 13:13:41,386 [INFO] Skipping bill 1850740 - already processed (904/2605)
2025-12-01 13:13:41,386 [INFO] Skipping bill 1838535 - already processed (905/2605)
2025-12-01 13:13:41,386 [INFO] Skipping bill 1851828 - already processed (906/2605)
2025-12-01 13:13:41,386 [INFO] Skipping bill 1863177 - already processed (907/2605)
2025-12-01 13:13:41,386 [INFO] Skipping bill 1852015 - already processed (908/2605)
2025-12-01 13:13:41,386 [INFO] Skipping bill 1818886 - already processed (909/2605)
2025-12-01 13:13:41,386 [INFO] Skipping bill 1852513 - already processed (910/2605)
2025-12-01 13:13:41,386 [INFO] Processing 911/2605: Bill ID 1851836
2025-12-01 13:13:42,109 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:13:42,111 [ERROR] Failed to generate report for bill 1851836: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 185865 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 185865 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:13:43,118 [INFO] Skipping bill 1933975 - already processed (912/2605) 2025-12-01 13:13:43,118 [INFO] Skipping bill 1935092 - already processed (913/2605) 2025-12-01 13:13:43,119 [INFO] Skipping bill 1937681 - already processed (914/2605) 2025-12-01 13:13:43,119 [INFO] Skipping bill 1927333 - already processed (915/2605) 2025-12-01 13:13:43,119 [INFO] Skipping bill 1936069 - already processed (916/2605) 2025-12-01 13:13:43,119 [INFO] Skipping bill 1940299 - already processed (917/2605) 2025-12-01 13:13:43,119 [INFO] Skipping bill 1911677 - already processed (918/2605) 2025-12-01 13:13:43,119 [INFO] Skipping bill 1929973 - already processed (919/2605) 2025-12-01 13:13:43,119 [INFO] Skipping bill 1910359 - already processed (920/2605) 2025-12-01 13:13:43,120 [INFO] Skipping bill 1934687 - already processed (921/2605) 2025-12-01 13:13:43,120 [INFO] Skipping bill 1930038 - already processed (922/2605) 2025-12-01 13:13:43,120 [INFO] Skipping bill 1925325 - already processed (923/2605) 2025-12-01 13:13:43,120 [INFO] Skipping bill 1933890 - already 
processed (924/2605)
2025-12-01 13:13:43,120 [INFO] Skipping bill 1934898 - already processed (925/2605)
2025-12-01 13:13:43,120 [INFO] Skipping bill 2034194 - already processed (926/2605)
2025-12-01 13:13:43,120 [INFO] Skipping bill 1972440 - already processed (927/2605)
2025-12-01 13:13:43,121 [INFO] Skipping bill 1934020 - already processed (928/2605)
2025-12-01 13:13:43,121 [INFO] Skipping bill 1912210 - already processed (929/2605)
2025-12-01 13:13:43,121 [INFO] Skipping bill 1634819 - already processed (930/2605)
2025-12-01 13:13:43,121 [INFO] Skipping bill 1634779 - already processed (931/2605)
2025-12-01 13:13:43,121 [INFO] Skipping bill 1836873 - already processed (932/2605)
2025-12-01 13:13:43,121 [INFO] Skipping bill 1834678 - already processed (933/2605)
2025-12-01 13:13:43,121 [INFO] Skipping bill 1790707 - already processed (934/2605)
2025-12-01 13:13:43,121 [INFO] Skipping bill 1852775 - already processed (935/2605)
2025-12-01 13:13:43,122 [INFO] Skipping bill 1897040 - already processed (936/2605)
2025-12-01 13:13:43,122 [INFO] Skipping bill 1898466 - already processed (937/2605)
2025-12-01 13:13:43,122 [INFO] Skipping bill 1893847 - already processed (938/2605)
2025-12-01 13:13:43,122 [INFO] Skipping bill 1983834 - already processed (939/2605)
2025-12-01 13:13:43,122 [INFO] Skipping bill 1988287 - already processed (940/2605)
2025-12-01 13:13:43,122 [INFO] Skipping bill 1894415 - already processed (941/2605)
2025-12-01 13:13:43,122 [INFO] Skipping bill 1917533 - already processed (942/2605)
2025-12-01 13:13:43,123 [INFO] Skipping bill 1900966 - already processed (943/2605)
2025-12-01 13:13:43,123 [INFO] Skipping bill 1972401 - already processed (944/2605)
2025-12-01 13:13:43,123 [INFO] Skipping bill 1988699 - already processed (945/2605)
2025-12-01 13:13:43,123 [INFO] Skipping bill 1988844 - already processed (946/2605)
2025-12-01 13:13:43,123 [INFO] Skipping bill 1894126 - already processed (947/2605)
2025-12-01 13:13:43,123 [INFO] Skipping bill 1974757 - already processed (948/2605)
2025-12-01 13:13:43,123 [INFO] Skipping bill 1717719 - already processed (949/2605)
2025-12-01 13:13:43,124 [INFO] Skipping bill 1912107 - already processed (950/2605)
2025-12-01 13:13:43,124 [INFO] Skipping bill 1941091 - already processed (951/2605)
2025-12-01 13:13:43,124 [INFO] Skipping bill 1916250 - already processed (952/2605)
2025-12-01 13:13:43,124 [INFO] Skipping bill 1974033 - already processed (953/2605)
2025-12-01 13:13:43,124 [INFO] Skipping bill 1895954 - already processed (954/2605)
2025-12-01 13:13:43,124 [INFO] Skipping bill 1974042 - already processed (955/2605)
2025-12-01 13:13:43,124 [INFO] Skipping bill 1981849 - already processed (956/2605)
2025-12-01 13:13:43,125 [INFO] Skipping bill 1979780 - already processed (957/2605)
2025-12-01 13:13:43,125 [INFO] Skipping bill 1896111 - already processed (958/2605)
2025-12-01 13:13:43,125 [INFO] Skipping bill 1971592 - already processed (959/2605)
2025-12-01 13:13:43,125 [INFO] Skipping bill 1971640 - already processed (960/2605)
2025-12-01 13:13:43,125 [INFO] Skipping bill 1896588 - already processed (961/2605)
2025-12-01 13:13:43,126 [INFO] Skipping bill 1981663 - already processed (962/2605)
2025-12-01 13:13:43,126 [INFO] Skipping bill 1867796 - already processed (963/2605)
2025-12-01 13:13:43,126 [INFO] Skipping bill 1867828 - already processed (964/2605)
2025-12-01 13:13:43,126 [INFO] Skipping bill 1813907 - already processed (965/2605)
2025-12-01 13:13:43,126 [INFO] Skipping bill 1814493 - already processed (966/2605)
2025-12-01 13:13:43,126 [INFO] Skipping bill 1867439 - already processed (967/2605)
2025-12-01 13:13:43,126 [INFO] Skipping bill 1814241 - already processed (968/2605)
2025-12-01 13:13:43,126 [INFO] Skipping bill 1935238 - already processed (969/2605)
2025-12-01 13:13:43,126 [INFO] Skipping bill 1908945 - already processed (970/2605)
2025-12-01 13:13:43,126 [INFO] Skipping bill 1980982 - already processed (971/2605)
2025-12-01 13:13:43,126 [INFO] Skipping bill 1934094 - already processed (972/2605)
2025-12-01 13:13:43,126 [INFO] Skipping bill 1931194 - already processed (973/2605)
2025-12-01 13:13:43,126 [INFO] Skipping bill 1915534 - already processed (974/2605)
2025-12-01 13:13:43,126 [INFO] Skipping bill 1927914 - already processed (975/2605)
2025-12-01 13:13:43,126 [INFO] Skipping bill 1710815 - already processed (976/2605)
2025-12-01 13:13:43,126 [INFO] Skipping bill 1748189 - already processed (977/2605)
2025-12-01 13:13:43,126 [INFO] Skipping bill 1746365 - already processed (978/2605)
2025-12-01 13:13:43,126 [INFO] Skipping bill 1965229 - already processed (979/2605)
2025-12-01 13:13:43,126 [INFO] Skipping bill 1999738 - already processed (980/2605)
2025-12-01 13:13:43,126 [INFO] Skipping bill 1989648 - already processed (981/2605)
2025-12-01 13:13:43,126 [INFO] Skipping bill 1946188 - already processed (982/2605)
2025-12-01 13:13:43,126 [INFO] Skipping bill 1892638 - already processed (983/2605)
2025-12-01 13:13:43,126 [INFO] Skipping bill 1944647 - already processed (984/2605)
2025-12-01 13:13:43,126 [INFO] Skipping bill 1983017 - already processed (985/2605)
2025-12-01 13:13:43,126 [INFO] Skipping bill 1954626 - already processed (986/2605)
2025-12-01 13:13:43,126 [INFO] Skipping bill 1977147 - already processed (987/2605)
2025-12-01 13:13:43,127 [INFO] Skipping bill 2013424 - already processed (988/2605)
2025-12-01 13:13:43,127 [INFO] Skipping bill 2013451 - already processed (989/2605)
2025-12-01 13:13:43,127 [INFO] Skipping bill 1953001 - already processed (990/2605)
2025-12-01 13:13:43,127 [INFO] Skipping bill 1982880 - already processed (991/2605)
2025-12-01 13:13:43,127 [INFO] Skipping bill 1989793 - already processed (992/2605)
2025-12-01 13:13:43,127 [INFO] Skipping bill 1954479 - already processed (993/2605)
2025-12-01 13:13:43,127 [INFO] Skipping bill 2031601 - already processed (994/2605)
2025-12-01 13:13:43,127 [INFO] Skipping bill 2009433 - already processed (995/2605)
2025-12-01 13:13:43,127 [INFO] Skipping bill 1901514 - already processed (996/2605)
2025-12-01 13:13:43,127 [INFO] Skipping bill 1651925 - already processed (997/2605)
2025-12-01 13:13:43,127 [INFO] Skipping bill 1793373 - already processed (998/2605)
2025-12-01 13:13:43,127 [INFO] Skipping bill 1793039 - already processed (999/2605)
2025-12-01 13:13:43,127 [INFO] Skipping bill 1792971 - already processed (1000/2605)
2025-12-01 13:13:43,127 [INFO] Skipping bill 1793409 - already processed (1001/2605)
2025-12-01 13:13:43,127 [INFO] Skipping bill 1793958 - already processed (1002/2605)
2025-12-01 13:13:43,127 [INFO] Skipping bill 1793284 - already processed (1003/2605)
2025-12-01 13:13:43,127 [INFO] Skipping bill 1938552 - already processed (1004/2605)
2025-12-01 13:13:43,127 [INFO] Skipping bill 1922870 - already processed (1005/2605)
2025-12-01 13:13:43,127 [INFO] Skipping bill 1803710 - already processed (1006/2605)
2025-12-01 13:13:43,127 [INFO] Skipping bill 1889722 - already processed (1007/2605)
2025-12-01 13:13:43,127 [INFO] Skipping bill 1892083 - already processed (1008/2605)
2025-12-01 13:13:43,127 [INFO] Skipping bill 1889346 - already processed (1009/2605)
2025-12-01 13:13:43,127 [INFO] Skipping bill 1889719 - already processed (1010/2605)
2025-12-01 13:13:43,127 [INFO] Skipping bill 1889335 - already processed (1011/2605)
2025-12-01 13:13:43,127 [INFO] Skipping bill 1897572 - already processed (1012/2605)
2025-12-01 13:13:43,127 [INFO] Skipping bill 1887538 - already processed (1013/2605)
2025-12-01 13:13:43,127 [INFO] Skipping bill 1887101 - already processed (1014/2605)
2025-12-01 13:13:43,127 [INFO] Skipping bill 1888624 - already processed (1015/2605)
2025-12-01 13:13:43,127 [INFO] Skipping bill 1877673 - already processed (1016/2605)
2025-12-01 13:13:43,127 [INFO] Skipping bill 1897803 - already processed (1017/2605)
2025-12-01 13:13:43,128 [INFO] Skipping bill 1889758 - already processed (1018/2605)
2025-12-01 13:13:43,128 [INFO] Skipping bill 1897565 - already processed (1019/2605)
2025-12-01 13:13:43,128 [INFO] Skipping bill 1853521 - already processed (1020/2605)
2025-12-01 13:13:43,128 [INFO] Skipping bill 1864839 - already processed (1021/2605)
2025-12-01 13:13:43,128 [INFO] Skipping bill 1879513 - already processed (1022/2605)
2025-12-01 13:13:43,128 [INFO] Skipping bill 1878078 - already processed (1023/2605)
2025-12-01 13:13:43,128 [INFO] Skipping bill 2013662 - already processed (1024/2605)
2025-12-01 13:13:43,128 [INFO] Skipping bill 1897603 - already processed (1025/2605)
2025-12-01 13:13:43,128 [INFO] Skipping bill 1881186 - already processed (1026/2605)
2025-12-01 13:13:43,128 [INFO] Skipping bill 1983797 - already processed (1027/2605)
2025-12-01 13:13:43,128 [INFO] Skipping bill 2023789 - already processed (1028/2605)
2025-12-01 13:13:43,128 [INFO] Skipping bill 1878049 - already processed (1029/2605)
2025-12-01 13:13:43,128 [INFO] Skipping bill 2052496 - already processed (1030/2605)
2025-12-01 13:13:43,128 [INFO] Skipping bill 1807241 - already processed (1031/2605)
2025-12-01 13:13:43,128 [INFO] Skipping bill 1881870 - already processed (1032/2605)
2025-12-01 13:13:43,128 [INFO] Skipping bill 1881843 - already processed (1033/2605)
2025-12-01 13:13:43,128 [INFO] Skipping bill 2030230 - already processed (1034/2605)
2025-12-01 13:13:43,128 [INFO] Skipping bill 2022901 - already processed (1035/2605)
2025-12-01 13:13:43,128 [INFO] Skipping bill 1896879 - already processed (1036/2605)
2025-12-01 13:13:43,128 [INFO] Skipping bill 1889701 - already processed (1037/2605)
2025-12-01 13:13:43,128 [INFO] Skipping bill 1970250 - already processed (1038/2605)
2025-12-01 13:13:43,128 [INFO] Skipping bill 2037153 - already processed (1039/2605)
2025-12-01 13:13:43,128 [INFO] Skipping bill 2013635 - already processed (1040/2605)
2025-12-01 13:13:43,128 [INFO] Skipping bill 1883140 - already processed (1041/2605)
2025-12-01 13:13:43,128 [INFO] Skipping bill 1853367 - already processed (1042/2605)
2025-12-01 13:13:43,128 [INFO] Skipping bill 1801284 - already processed (1043/2605)
2025-12-01 13:13:43,128 [INFO] Skipping bill 1889518 - already processed (1044/2605)
2025-12-01 13:13:43,128 [INFO] Skipping bill 1888073 - already processed (1045/2605)
2025-12-01 13:13:43,128 [INFO] Skipping bill 2052173 - already processed (1046/2605)
2025-12-01 13:13:43,128 [INFO] Skipping bill 2047520 - already processed (1047/2605)
2025-12-01 13:13:43,128 [INFO] Skipping bill 1889754 - already processed (1048/2605)
2025-12-01 13:13:43,129 [INFO] Skipping bill 1835303 - already processed (1049/2605)
2025-12-01 13:13:43,129 [INFO] Skipping bill 1949479 - already processed (1050/2605)
2025-12-01 13:13:43,129 [INFO] Skipping bill 2022816 - already processed (1051/2605)
2025-12-01 13:13:43,129 [INFO] Skipping bill 1872559 - already processed (1052/2605)
2025-12-01 13:13:43,129 [INFO] Skipping bill 1875857 - already processed (1053/2605)
2025-12-01 13:13:43,129 [INFO] Skipping bill 1876467 - already processed (1054/2605)
2025-12-01 13:13:43,129 [INFO] Skipping bill 1876586 - already processed (1055/2605)
2025-12-01 13:13:43,129 [INFO] Skipping bill 2038328 - already processed (1056/2605)
2025-12-01 13:13:43,129 [INFO] Skipping bill 1878887 - already processed (1057/2605)
2025-12-01 13:13:43,129 [INFO] Skipping bill 1853095 - already processed (1058/2605)
2025-12-01 13:13:43,129 [INFO] Skipping bill 1805407 - already processed (1059/2605)
2025-12-01 13:13:43,129 [INFO] Skipping bill 2022907 - already processed (1060/2605)
2025-12-01 13:13:43,129 [INFO] Skipping bill 1949574 - already processed (1061/2605)
2025-12-01 13:13:43,129 [INFO] Skipping bill 1844841 - already processed (1062/2605)
2025-12-01 13:13:43,129 [INFO] Skipping bill 1864295 - already processed (1063/2605)
2025-12-01 13:13:43,129 [INFO] Skipping bill 1881176 - already processed (1064/2605)
2025-12-01 13:13:43,129 [INFO] Skipping bill 1837365 - already processed (1065/2605)
2025-12-01 13:13:43,129 [INFO] Skipping bill 1837180 - already processed (1066/2605)
2025-12-01 13:13:43,129 [INFO] Skipping bill 1887099 - already processed (1067/2605)
2025-12-01 13:13:43,129 [INFO] Skipping bill 2028679 - already processed (1068/2605)
2025-12-01 13:13:43,129 [INFO] Skipping bill 2030354 - already processed (1069/2605)
2025-12-01 13:13:43,129 [INFO] Skipping bill 1882474 - already processed (1070/2605)
2025-12-01 13:13:43,129 [INFO] Skipping bill 1964010 - already processed (1071/2605)
2025-12-01 13:13:43,129 [INFO] Skipping bill 2008967 - already processed (1072/2605)
2025-12-01 13:13:43,129 [INFO] Skipping bill 1881178 - already processed (1073/2605)
2025-12-01 13:13:43,129 [INFO] Skipping bill 2037324 - already processed (1074/2605)
2025-12-01 13:13:43,129 [INFO] Skipping bill 1806224 - already processed (1075/2605)
2025-12-01 13:13:43,129 [INFO] Skipping bill 1837135 - already processed (1076/2605)
2025-12-01 13:13:43,129 [INFO] Skipping bill 1805930 - already processed (1077/2605)
2025-12-01 13:13:43,129 [INFO] Skipping bill 1803406 - already processed (1078/2605)
2025-12-01 13:13:43,129 [INFO] Skipping bill 1883773 - already processed (1079/2605)
2025-12-01 13:13:43,129 [INFO] Skipping bill 1994137 - already processed (1080/2605)
2025-12-01 13:13:43,130 [INFO] Skipping bill 1881306 - already processed (1081/2605)
2025-12-01 13:13:43,130 [INFO] Skipping bill 1889726 - already processed (1082/2605)
2025-12-01 13:13:43,130 [INFO] Skipping bill 1889593 - already processed (1083/2605)
2025-12-01 13:13:43,130 [INFO] Processing 1084/2605: Bill ID 1883494
2025-12-01 13:13:43,955 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:13:43,957 [ERROR] Failed to generate report for bill 1883494: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 245791 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 245791 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:13:44,967 [INFO] Processing 1085/2605: Bill ID 1883535
2025-12-01 13:13:45,676 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:13:45,678 [ERROR] Failed to generate report for bill 1883535: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 244625 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 244625 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:13:46,689 [INFO] Processing 1086/2605: Bill ID 2038569
2025-12-01 13:13:48,046 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:13:48,049 [ERROR] Failed to generate report for bill 2038569: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 248177 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 248177 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:13:49,060 [INFO] Processing 1087/2605: Bill ID 2038571
2025-12-01 13:13:49,888 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:13:49,890 [ERROR] Failed to generate report for bill 2038571: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 248161 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 248161 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:13:50,900 [INFO] Skipping bill 1666814 - already processed (1088/2605) 2025-12-01 13:13:50,901 [INFO] Skipping bill 1722011 - already processed (1089/2605) 2025-12-01 13:13:50,901 [INFO] Skipping bill 1724398 - already processed (1090/2605) 2025-12-01 13:13:50,901 [INFO] Skipping bill 1676083 - already processed (1091/2605) 2025-12-01 13:13:50,901 [INFO] Skipping bill 1824011 - already processed (1092/2605) 2025-12-01 13:13:50,902 [INFO] Skipping bill 1824228 - already processed (1093/2605) 2025-12-01 13:13:50,902 [INFO] Skipping bill 1824028 - already processed (1094/2605) 2025-12-01 13:13:50,902 [INFO] Skipping bill 1834441 - already processed (1095/2605) 2025-12-01 13:13:50,902 [INFO] Skipping bill 1908238 - already processed (1096/2605) 2025-12-01 13:13:50,902 [INFO] Skipping bill 1967640 - already processed (1097/2605) 2025-12-01 13:13:50,902 [INFO] Skipping bill 1935448 - already processed (1098/2605) 2025-12-01 13:13:50,902 [INFO] Skipping bill 1987611 - already processed (1099/2605) 2025-12-01 13:13:50,902 [INFO] Skipping bill 
1964156 - already processed (1100/2605)
2025-12-01 13:13:50,903 [INFO] Skipping bill 1947221 - already processed (1101/2605)
2025-12-01 13:13:50,903 [INFO] Skipping bill 1943110 - already processed (1102/2605)
2025-12-01 13:13:50,903 [INFO] Skipping bill 1964415 - already processed (1103/2605)
2025-12-01 13:13:50,903 [INFO] Skipping bill 1996731 - already processed (1104/2605)
2025-12-01 13:13:50,903 [INFO] Skipping bill 1944685 - already processed (1105/2605)
2025-12-01 13:13:50,903 [INFO] Skipping bill 1936020 - already processed (1106/2605)
2025-12-01 13:13:50,903 [INFO] Skipping bill 1947285 - already processed (1107/2605)
2025-12-01 13:13:50,904 [INFO] Skipping bill 1949498 - already processed (1108/2605)
2025-12-01 13:13:50,904 [INFO] Skipping bill 1933085 - already processed (1109/2605)
2025-12-01 13:13:50,904 [INFO] Skipping bill 1881403 - already processed (1110/2605)
2025-12-01 13:13:50,904 [INFO] Skipping bill 1878440 - already processed (1111/2605)
2025-12-01 13:13:50,904 [INFO] Skipping bill 1874641 - already processed (1112/2605)
2025-12-01 13:13:50,904 [INFO] Skipping bill 1780447 - already processed (1113/2605)
2025-12-01 13:13:50,904 [INFO] Skipping bill 1829313 - already processed (1114/2605)
2025-12-01 13:13:50,905 [INFO] Skipping bill 1876168 - already processed (1115/2605)
2025-12-01 13:13:50,905 [INFO] Skipping bill 1878357 - already processed (1116/2605)
2025-12-01 13:13:50,905 [INFO] Skipping bill 1801087 - already processed (1117/2605)
2025-12-01 13:13:50,905 [INFO] Skipping bill 1878533 - already processed (1118/2605)
2025-12-01 13:13:50,905 [INFO] Skipping bill 1781971 - already processed (1119/2605)
2025-12-01 13:13:50,905 [INFO] Skipping bill 1836944 - already processed (1120/2605)
2025-12-01 13:13:50,905 [INFO] Skipping bill 1773855 - already processed (1121/2605)
2025-12-01 13:13:50,905 [INFO] Skipping bill 1774758 - already processed (1122/2605)
2025-12-01 13:13:50,906 [INFO] Skipping bill 1779189 - already processed (1123/2605)
2025-12-01 13:13:50,906 [INFO] Skipping bill 1780403 - already processed (1124/2605)
2025-12-01 13:13:50,906 [INFO] Skipping bill 1882902 - already processed (1125/2605)
2025-12-01 13:13:50,906 [INFO] Skipping bill 1761023 - already processed (1126/2605)
2025-12-01 13:13:50,906 [INFO] Skipping bill 1763282 - already processed (1127/2605)
2025-12-01 13:13:50,906 [INFO] Skipping bill 1756406 - already processed (1128/2605)
2025-12-01 13:13:50,906 [INFO] Skipping bill 1721336 - already processed (1129/2605)
2025-12-01 13:13:50,907 [INFO] Skipping bill 1865663 - already processed (1130/2605)
2025-12-01 13:13:50,907 [INFO] Skipping bill 1884682 - already processed (1131/2605)
2025-12-01 13:13:50,907 [INFO] Skipping bill 1879124 - already processed (1132/2605)
2025-12-01 13:13:50,907 [INFO] Skipping bill 1813023 - already processed (1133/2605)
2025-12-01 13:13:50,907 [INFO] Skipping bill 1780572 - already processed (1134/2605)
2025-12-01 13:13:50,907 [INFO] Skipping bill 1796023 - already processed (1135/2605)
2025-12-01 13:13:50,907 [INFO] Skipping bill 1796213 - already processed (1136/2605)
2025-12-01 13:13:50,908 [INFO] Skipping bill 1841005 - already processed (1137/2605)
2025-12-01 13:13:50,908 [INFO] Skipping bill 1861287 - already processed (1138/2605)
2025-12-01 13:13:50,908 [INFO] Skipping bill 1878752 - already processed (1139/2605)
2025-12-01 13:13:50,908 [INFO] Skipping bill 1813101 - already processed (1140/2605)
2025-12-01 13:13:50,908 [INFO] Skipping bill 1768635 - already processed (1141/2605)
2025-12-01 13:13:50,908 [INFO] Skipping bill 1767924 - already processed (1142/2605)
2025-12-01 13:13:50,908 [INFO] Skipping bill 1641754 - already processed (1143/2605)
2025-12-01 13:13:50,908 [INFO] Skipping bill 1882889 - already processed (1144/2605)
2025-12-01 13:13:50,909 [INFO] Skipping bill 1729291 - already processed (1145/2605)
2025-12-01 13:13:50,909 [INFO] Skipping bill 1773906 - already processed (1146/2605)
2025-12-01 13:13:50,909 [INFO] Skipping bill
1839957 - already processed (1147/2605)
2025-12-01 13:13:50,909 [INFO] Skipping bill 1843965 - already processed (1148/2605)
2025-12-01 13:13:50,909 [INFO] Skipping bill 1879710 - already processed (1149/2605)
2025-12-01 13:13:50,909 [INFO] Skipping bill 1763606 - already processed (1150/2605)
2025-12-01 13:13:50,909 [INFO] Skipping bill 1780432 - already processed (1151/2605)
2025-12-01 13:13:50,910 [INFO] Skipping bill 1812765 - already processed (1152/2605)
2025-12-01 13:13:50,910 [INFO] Skipping bill 1836858 - already processed (1153/2605)
2025-12-01 13:13:50,910 [INFO] Skipping bill 1864293 - already processed (1154/2605)
2025-12-01 13:13:50,910 [INFO] Skipping bill 1770114 - already processed (1155/2605)
2025-12-01 13:13:50,911 [INFO] Skipping bill 1733127 - already processed (1156/2605)
2025-12-01 13:13:50,911 [INFO] Skipping bill 1762026 - already processed (1157/2605)
2025-12-01 13:13:50,911 [INFO] Skipping bill 1829537 - already processed (1158/2605)
2025-12-01 13:13:50,911 [INFO] Skipping bill 1878142 - already processed (1159/2605)
2025-12-01 13:13:50,911 [INFO] Skipping bill 1880765 - already processed (1160/2605)
2025-12-01 13:13:50,911 [INFO] Skipping bill 1762041 - already processed (1161/2605)
2025-12-01 13:13:50,911 [INFO] Skipping bill 1646230 - already processed (1162/2605)
2025-12-01 13:13:50,911 [INFO] Skipping bill 1762213 - already processed (1163/2605)
2025-12-01 13:13:50,911 [INFO] Skipping bill 1779393 - already processed (1164/2605)
2025-12-01 13:13:50,911 [INFO] Skipping bill 1878544 - already processed (1165/2605)
2025-12-01 13:13:50,911 [INFO] Skipping bill 1780459 - already processed (1166/2605)
2025-12-01 13:13:50,911 [INFO] Skipping bill 1781963 - already processed (1167/2605)
2025-12-01 13:13:50,911 [INFO] Skipping bill 1758293 - already processed (1168/2605)
2025-12-01 13:13:50,911 [INFO] Skipping bill 1768495 - already processed (1169/2605)
2025-12-01 13:13:50,911 [INFO] Skipping bill 1773860 - already processed (1170/2605)
2025-12-01 13:13:50,911 [INFO] Skipping bill 1864226 - already processed (1171/2605)
2025-12-01 13:13:50,911 [INFO] Skipping bill 1878400 - already processed (1172/2605)
2025-12-01 13:13:50,911 [INFO] Skipping bill 1879652 - already processed (1173/2605)
2025-12-01 13:13:50,911 [INFO] Skipping bill 1865798 - already processed (1174/2605)
2025-12-01 13:13:50,911 [INFO] Skipping bill 1862795 - already processed (1175/2605)
2025-12-01 13:13:50,911 [INFO] Skipping bill 1710243 - already processed (1176/2605)
2025-12-01 13:13:50,911 [INFO] Skipping bill 1818495 - already processed (1177/2605)
2025-12-01 13:13:50,911 [INFO] Skipping bill 1775864 - already processed (1178/2605)
2025-12-01 13:13:50,911 [INFO] Skipping bill 1856196 - already processed (1179/2605)
2025-12-01 13:13:50,911 [INFO] Skipping bill 1791835 - already processed (1180/2605)
2025-12-01 13:13:50,911 [INFO] Skipping bill 1658709 - already processed (1181/2605)
2025-12-01 13:13:50,911 [INFO] Skipping bill 1695187 - already processed (1182/2605)
2025-12-01 13:13:50,911 [INFO] Processing 1183/2605: Bill ID 1818780
2025-12-01 13:13:51,354 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:13:51,356 [ERROR] Failed to generate report for bill 1818780: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 137401 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  ...<call stack identical to the traceback above>...
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 137401 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:13:52,366 [INFO] Processing 1184/2605: Bill ID 1818766
2025-12-01 13:13:52,960 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:13:52,963 [ERROR] Failed to generate report for bill 1818766: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 137403 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  ...<call stack identical to the traceback above>...
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 137403 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:13:53,974 [INFO] Skipping bill 1752559 - already processed (1185/2605)
2025-12-01 13:13:53,974 [INFO] Skipping bill 1882942 - already processed (1186/2605)
2025-12-01 13:13:53,975 [INFO] Skipping bill 1766908 - already processed (1187/2605)
2025-12-01 13:13:53,975 [INFO] Skipping bill 1691064 - already processed (1188/2605)
2025-12-01 13:13:53,975 [INFO] Processing 1189/2605: Bill ID 1690030
2025-12-01 13:13:55,624 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:13:55,627 [ERROR] Failed to generate report for bill 1690030: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 566694 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  ...<call stack identical to the traceback above>...
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 566694 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:13:56,638 [INFO] Processing 1190/2605: Bill ID 1690727
2025-12-01 13:13:58,286 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:13:58,289 [ERROR] Failed to generate report for bill 1690727: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 566696 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  ...<call stack identical to the traceback above>...
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 566696 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:13:58,349 [INFO] Saved 2605 reports to data/bill_reports.json
2025-12-01 13:13:58,350 [INFO] Progress: 1190/2605 - Processed: 0, Skipped: 1139, Errors: 51
2025-12-01 13:13:59,355 [INFO] Processing 1191/2605: Bill ID 1875409
2025-12-01 13:14:02,692 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:14:02,693 [ERROR] Failed to generate report for bill 1875409: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1351641 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  ...<call stack identical to the traceback above>...
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1351641 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:14:03,703 [INFO] Processing 1192/2605: Bill ID 1835820
2025-12-01 13:14:07,402 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:14:07,403 [ERROR] Failed to generate report for bill 1835820: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1351620 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1351620 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:14:08,413 [INFO] Processing 1193/2605: Bill ID 1818459 2025-12-01 13:14:11,149 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:14:11,151 [ERROR] Failed to generate report for bill 1818459: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1029309 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1029309 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:14:12,161 [INFO] Skipping bill 2009915 - already processed (1194/2605) 2025-12-01 13:14:12,162 [INFO] Skipping bill 1917775 - already processed (1195/2605) 2025-12-01 13:14:12,162 [INFO] Skipping bill 1902981 - already processed (1196/2605) 2025-12-01 13:14:12,162 [INFO] Skipping bill 1908626 - already processed (1197/2605) 2025-12-01 13:14:12,163 [INFO] Skipping bill 1903647 - already processed (1198/2605) 2025-12-01 13:14:12,163 [INFO] Skipping bill 1993863 - already processed (1199/2605) 2025-12-01 13:14:12,163 [INFO] Skipping bill 2015656 - already processed (1200/2605) 2025-12-01 13:14:12,163 [INFO] Skipping bill 1909120 - already processed (1201/2605) 2025-12-01 13:14:12,163 [INFO] Skipping bill 2032707 - already processed (1202/2605) 2025-12-01 13:14:12,163 [INFO] Skipping bill 2030838 - already processed (1203/2605) 2025-12-01 13:14:12,163 [INFO] Skipping bill 2033110 - already processed (1204/2605) 2025-12-01 13:14:12,164 [INFO] Skipping bill 1992712 - already processed (1205/2605) 2025-12-01 13:14:12,164 [INFO] Skipping bill 
2010112 - already processed (1206/2605) 2025-12-01 13:14:12,164 [INFO] Skipping bill 2035218 - already processed (1207/2605) 2025-12-01 13:14:12,164 [INFO] Skipping bill 1970759 - already processed (1208/2605) 2025-12-01 13:14:12,164 [INFO] Skipping bill 1917262 - already processed (1209/2605) 2025-12-01 13:14:12,164 [INFO] Skipping bill 2015645 - already processed (1210/2605) 2025-12-01 13:14:12,164 [INFO] Skipping bill 1941920 - already processed (1211/2605) 2025-12-01 13:14:12,164 [INFO] Skipping bill 2041695 - already processed (1212/2605) 2025-12-01 13:14:12,164 [INFO] Skipping bill 2038940 - already processed (1213/2605) 2025-12-01 13:14:12,165 [INFO] Skipping bill 2043998 - already processed (1214/2605) 2025-12-01 13:14:12,165 [INFO] Skipping bill 1903496 - already processed (1215/2605) 2025-12-01 13:14:12,165 [INFO] Skipping bill 1942114 - already processed (1216/2605) 2025-12-01 13:14:12,165 [INFO] Skipping bill 1948978 - already processed (1217/2605) 2025-12-01 13:14:12,165 [INFO] Skipping bill 2025948 - already processed (1218/2605) 2025-12-01 13:14:12,165 [INFO] Skipping bill 2030449 - already processed (1219/2605) 2025-12-01 13:14:12,165 [INFO] Skipping bill 2012463 - already processed (1220/2605) 2025-12-01 13:14:12,165 [INFO] Skipping bill 2036382 - already processed (1221/2605) 2025-12-01 13:14:12,165 [INFO] Skipping bill 1901571 - already processed (1222/2605) 2025-12-01 13:14:12,165 [INFO] Skipping bill 1902589 - already processed (1223/2605) 2025-12-01 13:14:12,165 [INFO] Skipping bill 2045075 - already processed (1224/2605) 2025-12-01 13:14:12,165 [INFO] Skipping bill 2042397 - already processed (1225/2605) 2025-12-01 13:14:12,165 [INFO] Skipping bill 2005892 - already processed (1226/2605) 2025-12-01 13:14:12,165 [INFO] Skipping bill 1995988 - already processed (1227/2605) 2025-12-01 13:14:12,166 [INFO] Skipping bill 1941987 - already processed (1228/2605) 2025-12-01 13:14:12,166 [INFO] Skipping bill 2051432 - already processed (1229/2605) 
2025-12-01 13:14:12,166 [INFO] Skipping bill 2030765 - already processed (1230/2605) 2025-12-01 13:14:12,166 [INFO] Skipping bill 1900450 - already processed (1231/2605) 2025-12-01 13:14:12,166 [INFO] Skipping bill 2032658 - already processed (1232/2605) 2025-12-01 13:14:12,166 [INFO] Skipping bill 1934862 - already processed (1233/2605) 2025-12-01 13:14:12,166 [INFO] Skipping bill 1954914 - already processed (1234/2605) 2025-12-01 13:14:12,166 [INFO] Skipping bill 1908970 - already processed (1235/2605) 2025-12-01 13:14:12,166 [INFO] Skipping bill 2046810 - already processed (1236/2605) 2025-12-01 13:14:12,166 [INFO] Skipping bill 1911503 - already processed (1237/2605) 2025-12-01 13:14:12,166 [INFO] Skipping bill 1917449 - already processed (1238/2605) 2025-12-01 13:14:12,166 [INFO] Skipping bill 2012421 - already processed (1239/2605) 2025-12-01 13:14:12,166 [INFO] Skipping bill 2036409 - already processed (1240/2605) 2025-12-01 13:14:12,166 [INFO] Skipping bill 1930912 - already processed (1241/2605) 2025-12-01 13:14:12,166 [INFO] Skipping bill 2015571 - already processed (1242/2605) 2025-12-01 13:14:12,166 [INFO] Skipping bill 1991849 - already processed (1243/2605) 2025-12-01 13:14:12,166 [INFO] Skipping bill 1909237 - already processed (1244/2605) 2025-12-01 13:14:12,166 [INFO] Skipping bill 1907396 - already processed (1245/2605) 2025-12-01 13:14:12,166 [INFO] Skipping bill 2032681 - already processed (1246/2605) 2025-12-01 13:14:12,167 [INFO] Skipping bill 2031449 - already processed (1247/2605) 2025-12-01 13:14:12,167 [INFO] Skipping bill 2036417 - already processed (1248/2605) 2025-12-01 13:14:12,167 [INFO] Skipping bill 2010242 - already processed (1249/2605) 2025-12-01 13:14:12,167 [INFO] Skipping bill 1902485 - already processed (1250/2605) 2025-12-01 13:14:12,167 [INFO] Skipping bill 2044029 - already processed (1251/2605) 2025-12-01 13:14:12,167 [INFO] Skipping bill 2039479 - already processed (1252/2605) 2025-12-01 13:14:12,167 [INFO] Skipping bill 
1993679 - already processed (1253/2605) 2025-12-01 13:14:12,167 [INFO] Skipping bill 1927014 - already processed (1254/2605) 2025-12-01 13:14:12,167 [INFO] Skipping bill 2053531 - already processed (1255/2605) 2025-12-01 13:14:12,167 [INFO] Skipping bill 2012390 - already processed (1256/2605) 2025-12-01 13:14:12,167 [INFO] Skipping bill 2051443 - already processed (1257/2605) 2025-12-01 13:14:12,167 [INFO] Skipping bill 1967476 - already processed (1258/2605) 2025-12-01 13:14:12,167 [INFO] Skipping bill 2039584 - already processed (1259/2605) 2025-12-01 13:14:12,167 [INFO] Skipping bill 1941925 - already processed (1260/2605) 2025-12-01 13:14:12,167 [INFO] Skipping bill 2039602 - already processed (1261/2605) 2025-12-01 13:14:12,167 [INFO] Skipping bill 2021091 - already processed (1262/2605) 2025-12-01 13:14:12,167 [INFO] Skipping bill 2053730 - already processed (1263/2605) 2025-12-01 13:14:12,167 [INFO] Skipping bill 1993748 - already processed (1264/2605) 2025-12-01 13:14:12,167 [INFO] Skipping bill 1907408 - already processed (1265/2605) 2025-12-01 13:14:12,168 [INFO] Skipping bill 2043429 - already processed (1266/2605) 2025-12-01 13:14:12,168 [INFO] Skipping bill 2036445 - already processed (1267/2605) 2025-12-01 13:14:12,168 [INFO] Skipping bill 1948575 - already processed (1268/2605) 2025-12-01 13:14:12,168 [INFO] Skipping bill 2020539 - already processed (1269/2605) 2025-12-01 13:14:12,168 [INFO] Skipping bill 1941981 - already processed (1270/2605) 2025-12-01 13:14:12,168 [INFO] Skipping bill 1985057 - already processed (1271/2605) 2025-12-01 13:14:12,168 [INFO] Skipping bill 2012554 - already processed (1272/2605) 2025-12-01 13:14:12,168 [INFO] Skipping bill 1900469 - already processed (1273/2605) 2025-12-01 13:14:12,168 [INFO] Skipping bill 1949091 - already processed (1274/2605) 2025-12-01 13:14:12,168 [INFO] Skipping bill 1903302 - already processed (1275/2605) 2025-12-01 13:14:12,168 [INFO] Skipping bill 2031820 - already processed (1276/2605) 
2025-12-01 13:14:12,168 [INFO] Skipping bill 1986509 - already processed (1277/2605) 2025-12-01 13:14:12,168 [INFO] Skipping bill 1992147 - already processed (1278/2605) 2025-12-01 13:14:12,168 [INFO] Skipping bill 1908565 - already processed (1279/2605) 2025-12-01 13:14:12,168 [INFO] Skipping bill 2018195 - already processed (1280/2605) 2025-12-01 13:14:12,168 [INFO] Skipping bill 1948655 - already processed (1281/2605) 2025-12-01 13:14:12,168 [INFO] Skipping bill 1926957 - already processed (1282/2605) 2025-12-01 13:14:12,168 [INFO] Skipping bill 2007650 - already processed (1283/2605) 2025-12-01 13:14:12,168 [INFO] Skipping bill 1938062 - already processed (1284/2605) 2025-12-01 13:14:12,168 [INFO] Skipping bill 1909167 - already processed (1285/2605) 2025-12-01 13:14:12,168 [INFO] Skipping bill 1910683 - already processed (1286/2605) 2025-12-01 13:14:12,168 [INFO] Skipping bill 1918276 - already processed (1287/2605) 2025-12-01 13:14:12,168 [INFO] Skipping bill 1942634 - already processed (1288/2605) 2025-12-01 13:14:12,169 [INFO] Skipping bill 1947885 - already processed (1289/2605) 2025-12-01 13:14:12,169 [INFO] Skipping bill 2034828 - already processed (1290/2605) 2025-12-01 13:14:12,169 [INFO] Skipping bill 2035534 - already processed (1291/2605) 2025-12-01 13:14:12,169 [INFO] Skipping bill 1937370 - already processed (1292/2605) 2025-12-01 13:14:12,169 [INFO] Skipping bill 2036328 - already processed (1293/2605) 2025-12-01 13:14:12,169 [INFO] Skipping bill 1940048 - already processed (1294/2605) 2025-12-01 13:14:12,169 [INFO] Skipping bill 1990212 - already processed (1295/2605) 2025-12-01 13:14:12,169 [INFO] Skipping bill 1995017 - already processed (1296/2605) 2025-12-01 13:14:12,169 [INFO] Skipping bill 1937257 - already processed (1297/2605) 2025-12-01 13:14:12,169 [INFO] Skipping bill 1900853 - already processed (1298/2605) 2025-12-01 13:14:12,169 [INFO] Skipping bill 1947971 - already processed (1299/2605) 2025-12-01 13:14:12,169 [INFO] Skipping bill 
1920984 - already processed (1300/2605) 2025-12-01 13:14:12,169 [INFO] Skipping bill 1902725 - already processed (1301/2605) 2025-12-01 13:14:12,169 [INFO] Skipping bill 1964016 - already processed (1302/2605) 2025-12-01 13:14:12,169 [INFO] Processing 1303/2605: Bill ID 1934576 2025-12-01 13:14:12,723 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:14:12,724 [ERROR] Failed to generate report for bill 1934576: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 132147 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... 
**kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... **kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return 
self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 132147 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:14:13,735 [INFO] Skipping bill 1898800 - already processed (1304/2605) 2025-12-01 13:14:13,737 [INFO] Skipping bill 1971511 - already processed (1305/2605) 2025-12-01 13:14:13,737 [INFO] Processing 1306/2605: Bill ID 1935197 2025-12-01 13:14:14,286 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:14:14,287 [ERROR] Failed to generate report for bill 1935197: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 142845 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
    stream_cls=Stream[ChatCompletionChunk],
  )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 142845 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:14:15,297 [INFO] Processing 1307/2605: Bill ID 1935040
2025-12-01 13:14:15,809 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:14:15,811 [ERROR] Failed to generate report for bill 1935040: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 142844 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 142844 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:14:16,822 [INFO] Skipping bill 1948521 - already processed (1308/2605)
2025-12-01 13:14:16,823 [INFO] Skipping bill 1977652 - already processed (1309/2605)
2025-12-01 13:14:16,823 [INFO] Processing 1310/2605: Bill ID 1934805
2025-12-01 13:14:17,385 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:14:17,387 [ERROR] Failed to generate report for bill 1934805: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 132143 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last): [identical call stack omitted]
2025-12-01 13:14:17,438 [INFO] Saved 2605 reports to data/bill_reports.json
2025-12-01 13:14:17,439 [INFO] Progress: 1310/2605 - Processed: 0, Skipped: 1252, Errors: 58
2025-12-01 13:14:18,444 [INFO] Skipping bill 1934970 - already processed (1311/2605)
2025-12-01 13:14:18,445 [INFO] Skipping bill 1934701 - already processed (1312/2605)
2025-12-01 13:14:18,445 [INFO] Skipping bill 1942260 - already processed (1313/2605)
2025-12-01 13:14:18,445 [INFO] Skipping bill 1917391 - already processed (1314/2605)
2025-12-01 13:14:18,445 [INFO] Processing 1315/2605: Bill ID 1935190
2025-12-01 13:14:21,389 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:14:21,391 [ERROR] Failed to generate report for bill 1935190: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1143342 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last): [identical call stack omitted]
2025-12-01 13:14:22,404 [INFO] Processing 1316/2605: Bill ID 1934636
2025-12-01 13:14:24,151 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:14:24,152 [ERROR] Failed to generate report for bill 1934636: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 671567 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last): [identical call stack omitted]
2025-12-01 13:14:25,162 [INFO] Processing 1317/2605: Bill ID 1935223
2025-12-01 13:14:26,929 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:14:26,930 [ERROR] Failed to generate report for bill 1935223: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 671570 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last): [identical call stack omitted]
2025-12-01 13:14:27,936 [INFO] Processing 1318/2605: Bill ID 1934824
2025-12-01 13:14:30,878 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:14:30,880 [ERROR] Failed to generate report for bill 1934824: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1143344 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last): [identical call stack omitted]
2025-12-01 13:14:31,892 [INFO] Processing 1319/2605: Bill ID 2052596
2025-12-01 13:14:35,665 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:14:35,666 [ERROR] Failed to generate report for bill 2052596: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1446920 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last): [identical call stack omitted]
2025-12-01 13:14:36,677 [INFO] Skipping bill 1879932 - already processed (1320/2605)
2025-12-01 13:14:36,678 [INFO] Skipping bill 1875738 - already processed (1321/2605)
2025-12-01 13:14:36,678 [INFO] Skipping bill 1875815 - already processed (1322/2605)
2025-12-01 13:14:36,679 [INFO] Skipping bill 1701253 - already processed (1323/2605)
2025-12-01 13:14:36,679 [INFO] Skipping bill 1875615 - already processed (1324/2605)
2025-12-01 13:14:36,679 [INFO] Skipping bill 1754315 - already processed (1325/2605)
2025-12-01 13:14:36,679 [INFO] Skipping bill 1751005 - already processed (1326/2605)
2025-12-01 13:14:36,679 [INFO] Skipping bill 1875642 - already processed (1327/2605)
2025-12-01 13:14:36,680 [INFO] Skipping bill 1753811 - already processed (1328/2605)
2025-12-01 13:14:36,680 [INFO] Skipping bill 1752050 - already processed (1329/2605)
2025-12-01 13:14:36,680 [INFO] Skipping bill 1704591 - already processed (1330/2605)
2025-12-01 13:14:36,680 [INFO] Skipping bill 1748551 - already processed (1331/2605)
2025-12-01 13:14:36,680 [INFO] Skipping bill 1725321 - already processed (1332/2605)
2025-12-01 13:14:36,680 [INFO] Skipping bill 1725195 - already processed (1333/2605)
2025-12-01 13:14:36,680 [INFO] Skipping bill 2014434 - already processed (1334/2605)
2025-12-01 13:14:36,680 [INFO] Skipping bill 2014277 - already processed (1335/2605)
2025-12-01 13:14:36,680 [INFO] Skipping bill 2000124 - already processed (1336/2605)
2025-12-01 13:14:36,680 [INFO] Skipping bill 2022736 - already processed (1337/2605)
2025-12-01 13:14:36,680 [INFO] Skipping bill 2022881 - already processed (1338/2605)
2025-12-01 13:14:36,680 [INFO] Skipping bill 2014322 - already processed (1339/2605)
2025-12-01 13:14:36,681 [INFO] Skipping bill 2014068 - already processed (1340/2605)
2025-12-01 13:14:36,681 [INFO] Skipping bill 2005730 - already processed (1341/2605)
2025-12-01 13:14:36,681 [INFO] Skipping bill 2014594 - already processed (1342/2605)
2025-12-01 13:14:36,681 [INFO] Skipping bill 2013131 - already processed (1343/2605)
2025-12-01 13:14:36,681 [INFO] Skipping bill 2022220 - already processed (1344/2605)
2025-12-01 13:14:36,681 [INFO] Skipping bill 2008986 - already processed (1345/2605)
2025-12-01 13:14:36,681 [INFO] Skipping bill 2013796 - already processed (1346/2605)
2025-12-01 13:14:36,681 [INFO] Skipping bill 2014312 - already processed (1347/2605)
2025-12-01 13:14:36,681 [INFO] Skipping bill 2013903 - already processed (1348/2605)
2025-12-01 13:14:36,681 [INFO] Skipping bill 2013936 - already processed (1349/2605)
2025-12-01 13:14:36,681 [INFO] Skipping bill 2013868 - already processed (1350/2605)
2025-12-01 13:14:36,682 [INFO] Skipping bill 2014024 - already processed (1351/2605)
2025-12-01 13:14:36,682 [INFO] Skipping bill 2014377 - already processed (1352/2605)
2025-12-01 13:14:36,682 [INFO] Skipping bill 2017695 - already processed (1353/2605)
2025-12-01 13:14:36,682 [INFO] Skipping bill 2018632 - already processed (1354/2605)
2025-12-01 13:14:36,682 [INFO] Skipping bill 2022666 - already processed (1355/2605)
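Every failure above is the same 400 context_length_exceeded error: the serialized bill JSON alone exceeds the model's 128,000-token window (in the worst case above, by more than 10x), so the request fails before any report is generated. A pre-flight size check before `chain.invoke({"bill_json": bill_json})` would avoid the wasted API calls. The sketch below is a minimal illustration, not the script's actual code: `shrink_bill_json`, the field names, and the chars-per-token heuristic are all assumptions, since generate_reports.py's internals are not shown here (a real check would use the model's tokenizer, e.g. tiktoken).

```python
import json

# Limits taken from the 400 errors in this log: the model accepts 128,000 tokens.
MAX_CONTEXT_TOKENS = 128_000
RESERVED_TOKENS = 8_000  # headroom for the prompt template and the completion


def estimate_tokens(text: str) -> int:
    """Crude estimate: roughly 4 characters per token for English/JSON text.
    A production check would use the model's tokenizer instead."""
    return len(text) // 4


def shrink_bill_json(bill: dict, budget: int = MAX_CONTEXT_TOKENS - RESERVED_TOKENS) -> str:
    """Serialize a bill record, replacing the largest field values with a
    placeholder until the token estimate fits within the budget."""
    bill = dict(bill)  # shallow copy; leave the caller's record intact
    prev_len = None
    while True:
        text = json.dumps(bill)
        if estimate_tokens(text) <= budget or len(text) == prev_len:
            return text  # fits, or nothing left to shrink
        prev_len = len(text)
        # Replace the single largest value (typically the full bill text)
        biggest = max(bill, key=lambda k: len(json.dumps(bill[k])))
        bill[biggest] = "[truncated to fit model context]"
```

With a guard like this in create_detailed_report, the 58 errored bills could be retried with their oversized fields truncated (or, better, summarized in chunks) instead of failing outright.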
2025-12-01 13:14:36,682 [INFO] Skipping bill 2022828 - already processed (1356/2605)
2025-12-01 13:14:36,682 [INFO] Skipping bill 2015551 - already processed (1357/2605)
2025-12-01 13:14:36,682 [INFO] Skipping bill 2009244 - already processed (1358/2605)
2025-12-01 13:14:36,682 [INFO] Skipping bill 1969116 - already processed (1359/2605)
2025-12-01 13:14:36,682 [INFO] Skipping bill 2009761 - already processed (1360/2605)
2025-12-01 13:14:36,682 [INFO] Processing 1361/2605: Bill ID 2012916
2025-12-01 13:14:37,197 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:14:37,198 [ERROR] Failed to generate report for bill 2012916: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 131894 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
    ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
    ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 131894 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:14:38,208 [INFO] Skipping bill 1996111 - already processed (1362/2605)
2025-12-01 13:14:38,209 [INFO] Skipping bill 1656324 - already processed (1363/2605)
2025-12-01 13:14:38,209 [INFO] Skipping bill 1640560 - already processed (1364/2605)
2025-12-01 13:14:38,209 [INFO] Skipping bill 1644790 - already processed (1365/2605)
2025-12-01 13:14:38,209 [INFO] Skipping bill 1908973 - already processed (1366/2605)
2025-12-01 13:14:38,209 [INFO] Skipping bill 1930471 - already processed (1367/2605)
2025-12-01 13:14:38,209 [INFO] Skipping bill 1916131 - already processed (1368/2605)
2025-12-01 13:14:38,209 [INFO] Skipping bill 1916897 - already processed (1369/2605)
2025-12-01 13:14:38,209 [INFO] Skipping bill 1930219 - already processed (1370/2605)
2025-12-01 13:14:38,210 [INFO] Skipping bill 1916725 - already processed (1371/2605)
2025-12-01 13:14:38,210 [INFO] Skipping bill 1916697 - already processed (1372/2605)
2025-12-01 13:14:38,210 [INFO] Skipping bill 1921549 - already processed (1373/2605)
2025-12-01 13:14:38,210 [INFO] Skipping bill 1916032 - already processed (1374/2605)
2025-12-01 13:14:38,210 [INFO] Skipping bill 1915939 - already processed (1375/2605)
2025-12-01 13:14:38,210 [INFO] Skipping bill 1899315 - already processed (1376/2605)
2025-12-01 13:14:38,210 [INFO] Skipping bill 1930747 - already processed (1377/2605)
2025-12-01 13:14:38,210 [INFO] Skipping bill 1898936 - already processed (1378/2605)
2025-12-01 13:14:38,210 [INFO] Skipping bill 1828241 - already processed (1379/2605)
2025-12-01 13:14:38,210 [INFO] Skipping bill 1784887 - already processed (1380/2605)
2025-12-01 13:14:38,210 [INFO] Processing 1381/2605: Bill ID 1710984
2025-12-01 13:14:43,655 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:14:43,657 [ERROR] Failed to generate report for bill 1710984: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 2157293 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:14:44,667 [INFO] Processing 1382/2605: Bill ID 1710996
2025-12-01 13:14:47,339 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:14:47,341 [ERROR] Failed to generate report for bill 1710996: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1053567 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:14:48,353 [INFO] Processing 1383/2605: Bill ID 1659671
2025-12-01 13:14:51,339 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:14:51,341 [ERROR] Failed to generate report for bill 1659671: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1053812 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:14:52,352 [INFO] Skipping bill 2046561 - already processed (1384/2605)
2025-12-01 13:14:52,353 [INFO] Skipping bill 2018937 - already processed (1385/2605)
2025-12-01 13:14:52,354 [INFO] Skipping bill 2046538 - already processed (1386/2605)
2025-12-01 13:14:52,354 [INFO] Skipping bill 2038933 - already processed (1387/2605)
2025-12-01 13:14:52,355 [INFO] Skipping bill 2019064 - already processed (1388/2605)
2025-12-01 13:14:52,355 [INFO] Skipping bill 2051853 - already processed (1389/2605)
2025-12-01 13:14:52,356 [INFO] Skipping bill 1973495 - already processed (1390/2605)
2025-12-01 13:14:52,356 [INFO] Skipping bill 2044900 - already processed (1391/2605)
2025-12-01 13:14:52,356 [INFO] Skipping bill 2036911 - already processed (1392/2605)
2025-12-01 13:14:52,356 [INFO] Skipping bill 1956347 - already processed (1393/2605)
2025-12-01 13:14:52,356 [INFO] Skipping bill 2015680 - already processed (1394/2605)
2025-12-01 13:14:52,356 [INFO] Skipping bill 2035837 - already processed (1395/2605)
2025-12-01 13:14:52,356 [INFO] Skipping bill 2052361 - already processed (1396/2605)
2025-12-01 13:14:52,356 [INFO] Skipping bill 2053186 - already processed (1397/2605)
2025-12-01 13:14:52,356 [INFO] Skipping bill 1956501 - already processed (1398/2605)
2025-12-01 13:14:52,357 [INFO] Processing 1399/2605: Bill ID 1966320
2025-12-01 13:14:57,168 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:14:57,170 [ERROR] Failed to generate report for bill 1966320: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1949605 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:14:58,182 [INFO] Processing 1400/2605: Bill ID 2044413
2025-12-01 13:14:59,011 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:14:59,013 [ERROR] Failed to generate report for bill 2044413: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 281182 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:14:59,071 [INFO] Saved 2605 reports to data/bill_reports.json
2025-12-01 13:14:59,071 [INFO] Progress: 1400/2605 - Processed: 0, Skipped: 1331, Errors: 69
2025-12-01 13:15:00,076 [INFO] Processing 1401/2605: Bill ID 2031116
2025-12-01 13:15:01,118 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:15:01,120 [ERROR] Failed to generate report for bill 2031116: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 344621 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:15:02,129 [INFO] Skipping bill 1820171 - already processed (1402/2605)
2025-12-01 13:15:02,132 [INFO] Skipping bill 1820684 - already processed (1403/2605)
2025-12-01 13:15:02,133 [INFO] Skipping bill 1820075 - already processed (1404/2605)
2025-12-01 13:15:02,133 [INFO] Skipping bill 1820478 - already processed (1405/2605)
2025-12-01 13:15:02,133 [INFO] Skipping bill 1820697 - already processed (1406/2605)
2025-12-01 13:15:02,133 [INFO] Skipping bill 1821348 - already processed (1407/2605)
2025-12-01 13:15:02,133 [INFO] Skipping bill 1819421 - already processed (1408/2605)
2025-12-01 13:15:02,133 [INFO] Skipping bill 1820795 - already processed (1409/2605)
2025-12-01 13:15:02,133 [INFO] Skipping bill 1814318 - already processed (1410/2605)
2025-12-01 13:15:02,133 [INFO] Skipping bill 1814441 - already processed (1411/2605)
2025-12-01 13:15:02,133 [INFO] Skipping bill 1791289 - already processed (1412/2605)
2025-12-01 13:15:02,133 [INFO] Skipping bill 1789468 - already processed (1413/2605)
2025-12-01 13:15:02,133 [INFO] Skipping bill
1924199 - already processed (1414/2605) 2025-12-01 13:15:02,134 [INFO] Skipping bill 1920208 - already processed (1415/2605) 2025-12-01 13:15:02,134 [INFO] Skipping bill 1920320 - already processed (1416/2605) 2025-12-01 13:15:02,134 [INFO] Skipping bill 1923586 - already processed (1417/2605) 2025-12-01 13:15:02,134 [INFO] Skipping bill 1918327 - already processed (1418/2605) 2025-12-01 13:15:02,134 [INFO] Skipping bill 1922702 - already processed (1419/2605) 2025-12-01 13:15:02,134 [INFO] Skipping bill 1923122 - already processed (1420/2605) 2025-12-01 13:15:02,134 [INFO] Skipping bill 1924269 - already processed (1421/2605) 2025-12-01 13:15:02,134 [INFO] Skipping bill 1925220 - already processed (1422/2605) 2025-12-01 13:15:02,134 [INFO] Skipping bill 1924640 - already processed (1423/2605) 2025-12-01 13:15:02,134 [INFO] Skipping bill 1924912 - already processed (1424/2605) 2025-12-01 13:15:02,134 [INFO] Skipping bill 1900252 - already processed (1425/2605) 2025-12-01 13:15:02,134 [INFO] Skipping bill 2018241 - already processed (1426/2605) 2025-12-01 13:15:02,134 [INFO] Skipping bill 1920876 - already processed (1427/2605) 2025-12-01 13:15:02,134 [INFO] Skipping bill 1920720 - already processed (1428/2605) 2025-12-01 13:15:02,134 [INFO] Skipping bill 1925546 - already processed (1429/2605) 2025-12-01 13:15:02,134 [INFO] Skipping bill 1903378 - already processed (1430/2605) 2025-12-01 13:15:02,135 [INFO] Skipping bill 1921990 - already processed (1431/2605) 2025-12-01 13:15:02,135 [INFO] Skipping bill 1922805 - already processed (1432/2605) 2025-12-01 13:15:02,135 [INFO] Skipping bill 1922842 - already processed (1433/2605) 2025-12-01 13:15:02,135 [INFO] Skipping bill 1836006 - already processed (1434/2605) 2025-12-01 13:15:02,135 [INFO] Skipping bill 1836109 - already processed (1435/2605) 2025-12-01 13:15:02,135 [INFO] Skipping bill 1843504 - already processed (1436/2605) 2025-12-01 13:15:02,135 [INFO] Skipping bill 1973003 - already processed (1437/2605) 
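The `context_length_exceeded` failures above (281182 and 344621 tokens against a 128000-token limit) come from serializing an entire bill into one prompt. A pre-flight guard could clip the serialized bill before `chain.invoke({"bill_json": bill_json})` is called. The sketch below is hypothetical (the real `generate_reports.py` is not shown here) and uses the rough ~4-characters-per-token heuristic so it needs no tokenizer; an exact tokenizer such as `tiktoken` could replace it.

```python
# Hypothetical pre-flight guard for oversized bills, assuming the
# bill is serialized to a single string (as the "bill_json" chain
# input in the traceback suggests). Numbers are assumptions:
# 128_000 is the limit reported in the 400 errors, 8_000 reserves
# room for the generated report, 4 chars/token is a coarse heuristic.
MODEL_CONTEXT = 128_000
OUTPUT_RESERVE = 8_000
CHARS_PER_TOKEN = 4

def truncate_bill_json(bill_json: str) -> str:
    """Clip the serialized bill so prompt + completion fit the context."""
    budget_chars = (MODEL_CONTEXT - OUTPUT_RESERVE) * CHARS_PER_TOKEN
    if len(bill_json) <= budget_chars:
        return bill_json
    # Keep the head of the document; bill metadata tends to lead the JSON.
    return bill_json[:budget_chars]
```

Clipping at a character budget is lossy but deterministic; a 344k-token bill would at least produce a report on its opening sections instead of a hard 400 error.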
2025-12-01 13:15:02,135 [INFO] Skipping bill 2009609 - already processed (1438/2605)
2025-12-01 13:15:02,135 [INFO] Skipping bill 1986214 - already processed (1439/2605)
2025-12-01 13:15:02,135 [INFO] Skipping bill 1912749 - already processed (1440/2605)
2025-12-01 13:15:02,135 [INFO] Skipping bill 1914095 - already processed (1441/2605)
2025-12-01 13:15:02,135 [INFO] Skipping bill 1914598 - already processed (1442/2605)
2025-12-01 13:15:02,135 [INFO] Skipping bill 1913104 - already processed (1443/2605)
2025-12-01 13:15:02,135 [INFO] Skipping bill 1914569 - already processed (1444/2605)
2025-12-01 13:15:02,135 [INFO] Skipping bill 1930373 - already processed (1445/2605)
2025-12-01 13:15:02,135 [INFO] Skipping bill 1982090 - already processed (1446/2605)
2025-12-01 13:15:02,135 [INFO] Skipping bill 1914274 - already processed (1447/2605)
2025-12-01 13:15:02,135 [INFO] Skipping bill 1982120 - already processed (1448/2605)
2025-12-01 13:15:02,135 [INFO] Skipping bill 1773806 - already processed (1449/2605)
2025-12-01 13:15:02,135 [INFO] Skipping bill 1880673 - already processed (1450/2605)
2025-12-01 13:15:02,136 [INFO] Skipping bill 1724997 - already processed (1451/2605)
2025-12-01 13:15:02,136 [INFO] Skipping bill 1775230 - already processed (1452/2605)
2025-12-01 13:15:02,136 [INFO] Skipping bill 1889846 - already processed (1453/2605)
2025-12-01 13:15:02,136 [INFO] Skipping bill 1773451 - already processed (1454/2605)
2025-12-01 13:15:02,136 [INFO] Skipping bill 1759469 - already processed (1455/2605)
2025-12-01 13:15:02,136 [INFO] Skipping bill 1777407 - already processed (1456/2605)
2025-12-01 13:15:02,136 [INFO] Skipping bill 1880554 - already processed (1457/2605)
2025-12-01 13:15:02,136 [INFO] Skipping bill 1854268 - already processed (1458/2605)
2025-12-01 13:15:02,136 [INFO] Skipping bill 1771135 - already processed (1459/2605)
2025-12-01 13:15:02,136 [INFO] Skipping bill 1830478 - already processed (1460/2605)
2025-12-01 13:15:02,136 [INFO] Skipping bill 1780085 - already processed (1461/2605)
2025-12-01 13:15:02,136 [INFO] Skipping bill 1858003 - already processed (1462/2605)
2025-12-01 13:15:02,136 [INFO] Skipping bill 1880735 - already processed (1463/2605)
2025-12-01 13:15:02,136 [INFO] Skipping bill 1882950 - already processed (1464/2605)
2025-12-01 13:15:02,136 [INFO] Skipping bill 1878925 - already processed (1465/2605)
2025-12-01 13:15:02,136 [INFO] Skipping bill 1878252 - already processed (1466/2605)
2025-12-01 13:15:02,136 [INFO] Skipping bill 1884263 - already processed (1467/2605)
2025-12-01 13:15:02,136 [INFO] Skipping bill 1873862 - already processed (1468/2605)
2025-12-01 13:15:02,136 [INFO] Skipping bill 1882265 - already processed (1469/2605)
2025-12-01 13:15:02,137 [INFO] Skipping bill 1771247 - already processed (1470/2605)
2025-12-01 13:15:02,137 [INFO] Skipping bill 1836612 - already processed (1471/2605)
2025-12-01 13:15:02,137 [INFO] Skipping bill 1820748 - already processed (1472/2605)
2025-12-01 13:15:02,137 [INFO] Skipping bill 1886418 - already processed (1473/2605)
2025-12-01 13:15:02,137 [INFO] Skipping bill 1769931 - already processed (1474/2605)
2025-12-01 13:15:02,137 [INFO] Skipping bill 1740020 - already processed (1475/2605)
2025-12-01 13:15:02,137 [INFO] Skipping bill 1878961 - already processed (1476/2605)
2025-12-01 13:15:02,137 [INFO] Skipping bill 1768592 - already processed (1477/2605)
2025-12-01 13:15:02,137 [INFO] Skipping bill 2045757 - already processed (1478/2605)
2025-12-01 13:15:02,137 [INFO] Skipping bill 2030536 - already processed (1479/2605)
2025-12-01 13:15:02,137 [INFO] Skipping bill 2047301 - already processed (1480/2605)
2025-12-01 13:15:02,137 [INFO] Skipping bill 2039357 - already processed (1481/2605)
2025-12-01 13:15:02,137 [INFO] Skipping bill 2034685 - already processed (1482/2605)
2025-12-01 13:15:02,137 [INFO] Skipping bill 2037642 - already processed (1483/2605)
2025-12-01 13:15:02,137 [INFO] Skipping bill 2022168 - already processed (1484/2605)
2025-12-01 13:15:02,137 [INFO] Skipping bill 2052644 - already processed (1485/2605)
2025-12-01 13:15:02,137 [INFO] Skipping bill 2051282 - already processed (1486/2605)
2025-12-01 13:15:02,137 [INFO] Skipping bill 1937863 - already processed (1487/2605)
2025-12-01 13:15:02,138 [INFO] Skipping bill 2043639 - already processed (1488/2605)
2025-12-01 13:15:02,138 [INFO] Skipping bill 2012593 - already processed (1489/2605)
2025-12-01 13:15:02,138 [INFO] Skipping bill 1991206 - already processed (1490/2605)
2025-12-01 13:15:02,138 [INFO] Skipping bill 1947924 - already processed (1491/2605)
2025-12-01 13:15:02,138 [INFO] Skipping bill 2012408 - already processed (1492/2605)
2025-12-01 13:15:02,138 [INFO] Skipping bill 2021116 - already processed (1493/2605)
2025-12-01 13:15:02,138 [INFO] Skipping bill 1973751 - already processed (1494/2605)
2025-12-01 13:15:02,138 [INFO] Skipping bill 2045246 - already processed (1495/2605)
2025-12-01 13:15:02,138 [INFO] Skipping bill 1910852 - already processed (1496/2605)
2025-12-01 13:15:02,138 [INFO] Skipping bill 1956391 - already processed (1497/2605)
2025-12-01 13:15:02,138 [INFO] Skipping bill 2023404 - already processed (1498/2605)
2025-12-01 13:15:02,138 [INFO] Skipping bill 2035307 - already processed (1499/2605)
2025-12-01 13:15:02,138 [INFO] Skipping bill 1944456 - already processed (1500/2605)
2025-12-01 13:15:02,138 [INFO] Skipping bill 2041064 - already processed (1501/2605)
2025-12-01 13:15:02,138 [INFO] Skipping bill 2039278 - already processed (1502/2605)
2025-12-01 13:15:02,138 [INFO] Skipping bill 2041823 - already processed (1503/2605)
2025-12-01 13:15:02,138 [INFO] Skipping bill 1946034 - already processed (1504/2605)
2025-12-01 13:15:02,138 [INFO] Skipping bill 2038442 - already processed (1505/2605)
2025-12-01 13:15:02,138 [INFO] Skipping bill 1905925 - already processed (1506/2605)
2025-12-01 13:15:02,139 [INFO] Processing 1507/2605: Bill ID 2041076
2025-12-01 13:15:02,694 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:15:02,696 [ERROR] Failed to generate report for bill 2041076: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 136745 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 136745 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:15:03,705 [INFO] Processing 1508/2605: Bill ID 2037948
2025-12-01 13:15:04,168 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:15:04,171 [ERROR] Failed to generate report for bill 2037948: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 136856 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 136856 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:15:05,181 [INFO] Skipping bill 1757100 - already processed (1509/2605)
2025-12-01 13:15:05,182 [INFO] Skipping bill 1766918 - already processed (1510/2605)
2025-12-01 13:15:05,182 [INFO] Skipping bill 1691606 - already processed (1511/2605)
2025-12-01 13:15:05,182 [INFO] Skipping bill 1757087 - already processed (1512/2605)
2025-12-01 13:15:05,183 [INFO] Skipping bill 1691984 - already processed (1513/2605)
2025-12-01 13:15:05,183 [INFO] Skipping bill 1724146 - already processed (1514/2605)
2025-12-01 13:15:05,183 [INFO] Skipping bill 1811367 - already processed (1515/2605)
2025-12-01 13:15:05,183 [INFO] Skipping bill 1864559 - already processed (1516/2605)
2025-12-01 13:15:05,183 [INFO] Skipping bill 1833383 - already processed (1517/2605)
2025-12-01 13:15:05,183 [INFO] Skipping bill 1839979 - already processed (1518/2605)
2025-12-01 13:15:05,183 [INFO] Skipping bill 1863636 - already processed (1519/2605)
2025-12-01 13:15:05,184 [INFO] Skipping bill 1866932 - already processed (1520/2605)
2025-12-01 13:15:05,184 [INFO] Skipping bill 1829566 - already processed (1521/2605)
2025-12-01 13:15:05,184 [INFO] Skipping bill 1858179 - already processed (1522/2605)
2025-12-01 13:15:05,184 [INFO] Skipping bill 1857154 - already processed (1523/2605)
2025-12-01 13:15:05,184 [INFO] Skipping bill 1866872 - already processed (1524/2605)
2025-12-01 13:15:05,184 [INFO] Skipping bill 1844272 - already processed (1525/2605)
2025-12-01 13:15:05,184 [INFO] Skipping bill 1875576 - already processed (1526/2605)
2025-12-01 13:15:05,185 [INFO] Skipping bill 1875933 - already processed (1527/2605)
2025-12-01 13:15:05,185 [INFO] Skipping bill 1844730 - already processed (1528/2605)
2025-12-01 13:15:05,185 [INFO] Skipping bill 1858971 - already processed (1529/2605)
2025-12-01 13:15:05,185 [INFO] Skipping bill 1870027 - already processed (1530/2605)
2025-12-01 13:15:05,185 [INFO] Skipping bill 1994761 - already processed (1531/2605)
2025-12-01 13:15:05,185 [INFO] Skipping bill 1935080 - already processed (1532/2605)
2025-12-01 13:15:05,185 [INFO] Skipping bill 1945535 - already processed (1533/2605)
2025-12-01 13:15:05,186 [INFO] Skipping bill 1979504 - already processed (1534/2605)
2025-12-01 13:15:05,186 [INFO] Skipping bill 1937835 - already processed (1535/2605)
2025-12-01 13:15:05,186 [INFO] Skipping bill 1918971 - already processed (1536/2605)
2025-12-01 13:15:05,186 [INFO] Skipping bill 1986390 - already processed (1537/2605)
2025-12-01 13:15:05,186 [INFO] Skipping bill 1945988 - already processed (1538/2605)
2025-12-01 13:15:05,186 [INFO] Skipping bill 1940828 - already processed (1539/2605)
2025-12-01 13:15:05,186 [INFO] Skipping bill 1986602 - already processed (1540/2605)
2025-12-01 13:15:05,186 [INFO] Skipping bill 1988979 - already processed (1541/2605)
2025-12-01 13:15:05,186 [INFO] Skipping bill 2008057 - already processed (1542/2605)
2025-12-01 13:15:05,186 [INFO] Skipping bill 1986556 - already processed (1543/2605)
2025-12-01 13:15:05,186 [INFO] Skipping bill 1986569 - already processed (1544/2605)
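The run keeps going after each failed bill (the Errors counter reaches 69 while Processing/Skipping continues), which implies the resume loop catches per-bill exceptions rather than letting one oversized bill abort all 2605. A minimal sketch of that pattern, with hypothetical names (the real `create_reports_with_resume` is not shown here):

```python
# Hypothetical per-bill error isolation, mirroring the behavior the
# log shows: a context_length_exceeded bill is recorded and skipped,
# any other exception still propagates. `create_report` stands in for
# create_detailed_report; `errors` collects failed bill IDs.
def process_bill(bill, create_report, errors):
    try:
        return create_report(bill)
    except Exception as exc:  # openai.BadRequestError in the real script
        if "context_length_exceeded" in str(exc):
            errors.append(bill["bill_id"])
            return None  # skip this bill, let the loop continue
        raise  # unexpected failures should still stop the run
```

Matching on the `'code': 'context_length_exceeded'` substring keeps the sketch dependency-free; with the `openai` package available, catching `openai.BadRequestError` and inspecting its `code` attribute would be more precise.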
2025-12-01 13:15:05,186 [INFO] Skipping bill 1988788 - already processed (1545/2605)
2025-12-01 13:15:05,186 [INFO] Skipping bill 2028551 - already processed (1546/2605)
2025-12-01 13:15:05,186 [INFO] Skipping bill 1937524 - already processed (1547/2605)
2025-12-01 13:15:05,187 [INFO] Skipping bill 1966994 - already processed (1548/2605)
2025-12-01 13:15:05,187 [INFO] Skipping bill 2030023 - already processed (1549/2605)
2025-12-01 13:15:05,187 [INFO] Skipping bill 1988713 - already processed (1550/2605)
2025-12-01 13:15:05,187 [INFO] Skipping bill 1988914 - already processed (1551/2605)
2025-12-01 13:15:05,187 [INFO] Skipping bill 2030055 - already processed (1552/2605)
2025-12-01 13:15:05,187 [INFO] Skipping bill 1666116 - already processed (1553/2605)
2025-12-01 13:15:05,187 [INFO] Skipping bill 1792231 - already processed (1554/2605)
2025-12-01 13:15:05,187 [INFO] Skipping bill 1802681 - already processed (1555/2605)
2025-12-01 13:15:05,187 [INFO] Skipping bill 1921522 - already processed (1556/2605)
2025-12-01 13:15:05,187 [INFO] Skipping bill 1999928 - already processed (1557/2605)
2025-12-01 13:15:05,187 [INFO] Skipping bill 2022730 - already processed (1558/2605)
2025-12-01 13:15:05,187 [INFO] Skipping bill 2024009 - already processed (1559/2605)
2025-12-01 13:15:05,188 [INFO] Skipping bill 1895318 - already processed (1560/2605)
2025-12-01 13:15:05,188 [INFO] Skipping bill 1944028 - already processed (1561/2605)
2025-12-01 13:15:05,188 [INFO] Skipping bill 1954350 - already processed (1562/2605)
2025-12-01 13:15:05,188 [INFO] Skipping bill 1954733 - already processed (1563/2605)
2025-12-01 13:15:05,188 [INFO] Skipping bill 2029172 - already processed (1564/2605)
2025-12-01 13:15:05,188 [INFO] Skipping bill 1944096 - already processed (1565/2605)
2025-12-01 13:15:05,188 [INFO] Skipping bill 1895182 - already processed (1566/2605)
2025-12-01 13:15:05,188 [INFO] Skipping bill 1919972 - already processed (1567/2605)
2025-12-01 13:15:05,188 [INFO] Skipping bill 1895637 - already processed (1568/2605)
2025-12-01 13:15:05,188 [INFO] Skipping bill 1819620 - already processed (1569/2605)
2025-12-01 13:15:05,188 [INFO] Skipping bill 1811138 - already processed (1570/2605)
2025-12-01 13:15:05,188 [INFO] Skipping bill 1948251 - already processed (1571/2605)
2025-12-01 13:15:05,188 [INFO] Skipping bill 1901594 - already processed (1572/2605)
2025-12-01 13:15:05,188 [INFO] Skipping bill 1833554 - already processed (1573/2605)
2025-12-01 13:15:05,188 [INFO] Skipping bill 1833050 - already processed (1574/2605)
2025-12-01 13:15:05,188 [INFO] Skipping bill 1830912 - already processed (1575/2605)
2025-12-01 13:15:05,188 [INFO] Skipping bill 1834207 - already processed (1576/2605)
2025-12-01 13:15:05,188 [INFO] Skipping bill 1795187 - already processed (1577/2605)
2025-12-01 13:15:05,188 [INFO] Skipping bill 1828458 - already processed (1578/2605)
2025-12-01 13:15:05,188 [INFO] Skipping bill 1808304 - already processed (1579/2605)
2025-12-01 13:15:05,188 [INFO] Skipping bill 1834240 - already processed (1580/2605)
2025-12-01 13:15:05,188 [INFO] Skipping bill 1831671 - already processed (1581/2605)
2025-12-01 13:15:05,189 [INFO] Skipping bill 1832378 - already processed (1582/2605)
2025-12-01 13:15:05,189 [INFO] Skipping bill 1828742 - already processed (1583/2605)
2025-12-01 13:15:05,189 [INFO] Skipping bill 1833429 - already processed (1584/2605)
2025-12-01 13:15:05,189 [INFO] Skipping bill 1828784 - already processed (1585/2605)
2025-12-01 13:15:05,189 [INFO] Skipping bill 1825620 - already processed (1586/2605)
2025-12-01 13:15:05,189 [INFO] Skipping bill 1799785 - already processed (1587/2605)
2025-12-01 13:15:05,189 [INFO] Skipping bill 1832466 - already processed (1588/2605)
2025-12-01 13:15:05,189 [INFO] Skipping bill 1831669 - already processed (1589/2605)
2025-12-01 13:15:05,189 [INFO] Skipping bill 1832147 - already processed (1590/2605)
2025-12-01 13:15:05,189 [INFO] Skipping bill 1831971 - already processed (1591/2605)
2025-12-01 13:15:05,189 [INFO] Skipping bill 1832437 - already processed (1592/2605)
2025-12-01 13:15:05,189 [INFO] Skipping bill 1828244 - already processed (1593/2605)
2025-12-01 13:15:05,189 [INFO] Skipping bill 1833731 - already processed (1594/2605)
2025-12-01 13:15:05,189 [INFO] Skipping bill 1833264 - already processed (1595/2605)
2025-12-01 13:15:05,189 [INFO] Skipping bill 1833393 - already processed (1596/2605)
2025-12-01 13:15:05,189 [INFO] Skipping bill 1825869 - already processed (1597/2605)
2025-12-01 13:15:05,189 [INFO] Skipping bill 1825916 - already processed (1598/2605)
2025-12-01 13:15:05,189 [INFO] Skipping bill 1873399 - already processed (1599/2605)
2025-12-01 13:15:05,189 [INFO] Skipping bill 1826595 - already processed (1600/2605)
2025-12-01 13:15:05,189 [INFO] Skipping bill 1832185 - already processed (1601/2605)
2025-12-01 13:15:05,189 [INFO] Skipping bill 1832434 - already processed (1602/2605)
2025-12-01 13:15:05,189 [INFO] Skipping bill 1831535 - already processed (1603/2605)
2025-12-01 13:15:05,190 [INFO] Skipping bill 1834179 - already processed (1604/2605)
2025-12-01 13:15:05,190 [INFO] Skipping bill 1834106 - already processed (1605/2605)
2025-12-01 13:15:05,190 [INFO] Skipping bill 1946381 - already processed (1606/2605)
2025-12-01 13:15:05,190 [INFO] Skipping bill 1953992 - already processed (1607/2605)
2025-12-01 13:15:05,190 [INFO] Skipping bill 1948149 - already processed (1608/2605)
2025-12-01 13:15:05,190 [INFO] Skipping bill 1959470 - already processed (1609/2605)
2025-12-01 13:15:05,190 [INFO] Skipping bill 1946783 - already processed (1610/2605)
2025-12-01 13:15:05,190 [INFO] Skipping bill 1955110 - already processed (1611/2605)
2025-12-01 13:15:05,190 [INFO] Skipping bill 1959302 - already processed (1612/2605)
2025-12-01 13:15:05,190 [INFO] Skipping bill 1959458 - already processed (1613/2605)
2025-12-01 13:15:05,190 [INFO] Skipping bill 1960722 - already processed (1614/2605)
2025-12-01 13:15:05,190 [INFO] Skipping bill 1951003 - already processed (1615/2605)
2025-12-01 13:15:05,190 [INFO] Skipping bill 1954702 - already processed (1616/2605)
2025-12-01 13:15:05,190 [INFO] Skipping bill 1954311 - already processed (1617/2605)
2025-12-01 13:15:05,190 [INFO] Skipping bill 1959312 - already processed (1618/2605)
2025-12-01 13:15:05,190 [INFO] Skipping bill 1959377 - already processed (1619/2605)
2025-12-01 13:15:05,190 [INFO] Skipping bill 1954015 - already processed (1620/2605)
2025-12-01 13:15:05,190 [INFO] Skipping bill 1954357 - already processed (1621/2605)
2025-12-01 13:15:05,190 [INFO] Skipping bill 1944274 - already processed (1622/2605)
2025-12-01 13:15:05,190 [INFO] Skipping bill 1944487 - already processed (1623/2605)
2025-12-01 13:15:05,190 [INFO] Skipping bill 1959723 - already processed (1624/2605)
2025-12-01 13:15:05,190 [INFO] Skipping bill 1960832 - already processed (1625/2605)
2025-12-01 13:15:05,190 [INFO] Skipping bill 1971015 - already processed (1626/2605)
2025-12-01 13:15:05,190 [INFO] Skipping bill 1971366 - already processed (1627/2605)
2025-12-01 13:15:05,190 [INFO] Skipping bill 1733375 - already processed (1628/2605)
2025-12-01 13:15:05,190 [INFO] Skipping bill 1700527 - already processed (1629/2605)
2025-12-01 13:15:05,190 [INFO] Skipping bill 1719413 - already processed (1630/2605)
2025-12-01 13:15:05,191 [INFO] Skipping bill 1694457 - already processed (1631/2605)
2025-12-01 13:15:05,191 [INFO] Skipping bill 1744060 - already processed (1632/2605)
2025-12-01 13:15:05,191 [INFO] Skipping bill 1727826 - already processed (1633/2605)
2025-12-01 13:15:05,191 [INFO] Skipping bill 1743424 - already processed (1634/2605)
2025-12-01 13:15:05,191 [INFO] Skipping bill 1732248 - already processed (1635/2605)
2025-12-01 13:15:05,191 [INFO] Skipping bill 1731629 - already processed (1636/2605)
2025-12-01 13:15:05,191 [INFO] Skipping bill 1769317 - already processed (1637/2605)
2025-12-01 13:15:05,191 [INFO] Skipping bill 1747471 - already processed (1638/2605)
2025-12-01 13:15:05,191 [INFO] Skipping bill 1747557 - already processed (1639/2605)
2025-12-01 13:15:05,191 [INFO] Skipping bill 1710763 - already processed (1640/2605)
2025-12-01 13:15:05,191 [INFO] Skipping bill 1782999 - already processed (1641/2605)
2025-12-01 13:15:05,191 [INFO] Skipping bill 1781207 - already processed (1642/2605)
2025-12-01 13:15:05,191 [INFO] Skipping bill 1726065 - already processed (1643/2605)
2025-12-01 13:15:05,191 [INFO] Skipping bill 1898826 - already processed (1644/2605)
2025-12-01 13:15:05,191 [INFO] Skipping bill 1992725 - already processed (1645/2605)
2025-12-01 13:15:05,191 [INFO] Skipping bill 1988473 - already processed (1646/2605)
2025-12-01 13:15:05,191 [INFO] Skipping bill 1970030 - already processed (1647/2605)
2025-12-01 13:15:05,191 [INFO] Skipping bill 2007109 - already processed (1648/2605)
2025-12-01 13:15:05,191 [INFO] Skipping bill 1891805 - already processed (1649/2605)
2025-12-01 13:15:05,191 [INFO] Skipping bill 1949957 - already processed (1650/2605)
2025-12-01 13:15:05,191 [INFO] Skipping bill 1990181 - already processed (1651/2605)
2025-12-01 13:15:05,191 [INFO] Skipping bill 1991711 - already processed (1652/2605)
2025-12-01 13:15:05,191 [INFO] Skipping bill 1897779 - already processed (1653/2605)
2025-12-01 13:15:05,191 [INFO] Skipping bill 2006851 - already processed (1654/2605)
2025-12-01 13:15:05,191 [INFO] Skipping bill 1975361 - already processed (1655/2605)
2025-12-01 13:15:05,191 [INFO] Skipping bill 1987235 - already processed (1656/2605)
2025-12-01 13:15:05,191 [INFO] Skipping bill 2007736 - already processed (1657/2605)
2025-12-01 13:15:05,192 [INFO] Skipping bill 2000200 - already processed (1658/2605)
2025-12-01 13:15:05,192 [INFO] Skipping bill 1923991 - already processed (1659/2605)
2025-12-01 13:15:05,192 [INFO] Skipping bill 1892858 - already processed (1660/2605)
2025-12-01 13:15:05,192 [INFO] Skipping bill 2000248 - already processed (1661/2605)
2025-12-01 13:15:05,192 [INFO] Skipping bill 1971072 - already processed (1662/2605)
2025-12-01 13:15:05,192 [INFO] Skipping bill 2008077 - already processed (1663/2605)
2025-12-01 13:15:05,192 [INFO] Skipping bill 1907668 - already processed (1664/2605)
2025-12-01 13:15:05,192 [INFO] Skipping bill 1962916 - already processed (1665/2605)
2025-12-01 13:15:05,192 [INFO] Skipping bill 2005286 - already processed (1666/2605)
2025-12-01 13:15:05,192 [INFO] Skipping bill 2005181 - already processed (1667/2605)
2025-12-01 13:15:05,192 [INFO] Skipping bill 1891063 - already processed (1668/2605)
2025-12-01 13:15:05,192 [INFO] Skipping bill 1900186 - already processed (1669/2605)
2025-12-01 13:15:05,192 [INFO] Skipping bill 1994657 - already processed (1670/2605)
2025-12-01 13:15:05,192 [INFO] Skipping bill 2008307 - already processed (1671/2605)
2025-12-01 13:15:05,192 [INFO] Skipping bill 1991260 - already processed (1672/2605)
2025-12-01 13:15:05,192 [INFO] Skipping bill 2006384 - already processed (1673/2605)
2025-12-01 13:15:05,192 [INFO] Skipping bill 2002051 - already processed (1674/2605)
2025-12-01 13:15:05,192 [INFO] Skipping bill 1973236 - already processed (1675/2605)
2025-12-01 13:15:05,192 [INFO] Skipping bill 2007316 - already processed (1676/2605)
2025-12-01 13:15:05,192 [INFO] Skipping bill 1890894 - already processed (1677/2605)
2025-12-01 13:15:05,192 [INFO] Skipping bill 2000178 - already processed (1678/2605)
2025-12-01 13:15:05,192 [INFO] Skipping bill 1982970 - already processed (1679/2605)
2025-12-01 13:15:05,192 [INFO] Skipping bill 2006497 - already processed (1680/2605)
2025-12-01 13:15:05,192 [INFO] Skipping bill 1890775 - already processed (1681/2605)
2025-12-01 13:15:05,192 [INFO] Skipping bill 1892224 - already processed (1682/2605)
2025-12-01 13:15:05,192 [INFO] Skipping bill 1954141 - already processed (1683/2605)
2025-12-01 13:15:05,192 [INFO] Skipping bill 2006579 - already processed (1684/2605)
2025-12-01 13:15:05,193 [INFO] Skipping bill 2006128 - already processed (1685/2605)
2025-12-01 13:15:05,193 [INFO] Skipping bill 2024097 - already processed (1686/2605)
2025-12-01 13:15:05,193 [INFO] Skipping bill 2034878 - already processed (1687/2605)
2025-12-01 13:15:05,193 [INFO] Skipping bill 1891396 - already processed (1688/2605)
2025-12-01 13:15:05,193 [INFO] Skipping bill 2040103 - already processed (1689/2605)
2025-12-01 13:15:05,193 [INFO] Skipping bill 2041986 - already processed (1690/2605)
2025-12-01 13:15:05,193 [INFO] Skipping bill 1987712 - already processed (1691/2605)
2025-12-01 13:15:05,193 [INFO] Skipping bill 2005998 - already processed (1692/2605)
2025-12-01 13:15:05,193 [INFO] Skipping bill 2008318 - already processed (1693/2605)
2025-12-01 13:15:05,193 [INFO] Skipping bill 1892843 - already processed (1694/2605)
2025-12-01 13:15:05,193 [INFO] Skipping bill 1946392 - already processed (1695/2605)
2025-12-01 13:15:05,193 [INFO] Skipping bill 1971169 - already processed (1696/2605)
2025-12-01 13:15:05,193 [INFO] Skipping bill 1890786 - already processed (1697/2605)
2025-12-01 13:15:05,193 [INFO] Skipping bill 1891256 - already processed (1698/2605)
2025-12-01 13:15:05,193 [INFO] Skipping bill 1942882 - already processed (1699/2605)
2025-12-01 13:15:05,193 [INFO] Skipping bill 2031981 - already processed (1700/2605)
2025-12-01 13:15:05,193 [INFO] Skipping bill 2033602 - already processed (1701/2605)
2025-12-01 13:15:05,193 [INFO] Skipping bill 2034279 - already processed (1702/2605)
2025-12-01 13:15:05,193 [INFO] Skipping bill 1974704 - already processed (1703/2605)
2025-12-01 13:15:05,193 [INFO] Skipping bill 1950849 - already processed (1704/2605)
2025-12-01 13:15:05,193 [INFO] Skipping bill 1975022 - already processed (1705/2605)
2025-12-01 13:15:05,193 [INFO] Skipping bill 1981850 - already processed (1706/2605)
2025-12-01 13:15:05,193 [INFO] Skipping bill 1890492 - already processed (1707/2605)
2025-12-01 13:15:05,193 [INFO] Skipping bill 2020803 - already processed (1708/2605)
2025-12-01 13:15:05,193 [INFO] Skipping bill 2005343 - already processed (1709/2605)
2025-12-01 13:15:05,193 [INFO] Skipping bill 1890466 - already processed (1710/2605)
2025-12-01 13:15:05,193 [INFO] Skipping bill 1975612 - already processed (1711/2605)
2025-12-01 13:15:05,193 [INFO] Skipping bill 1994176 - already processed (1712/2605)
2025-12-01 13:15:05,194 [INFO] Skipping bill 1990550 - already processed (1713/2605)
2025-12-01 13:15:05,194 [INFO] Skipping bill 1891411 - already processed (1714/2605)
2025-12-01 13:15:05,194 [INFO] Skipping bill 1983542 - already processed (1715/2605)
2025-12-01 13:15:05,194 [INFO] Skipping bill 1999872 - already processed (1716/2605)
2025-12-01 13:15:05,194 [INFO] Skipping bill 2007449 - already processed (1717/2605)
2025-12-01 13:15:05,194 [INFO] Skipping bill 2039972 - already processed (1718/2605)
2025-12-01 13:15:05,194 [INFO] Skipping bill 1892428 - already processed (1719/2605)
2025-12-01 13:15:05,194 [INFO] Skipping bill 1891501 - already processed (1720/2605)
2025-12-01 13:15:05,194 [INFO] Skipping bill 2007840 - already processed (1721/2605)
2025-12-01 13:15:05,194 [INFO] Skipping bill 1976041 - already processed (1722/2605)
2025-12-01 13:15:05,194 [INFO] Skipping bill 1992763 - already processed (1723/2605)
2025-12-01 13:15:05,194 [INFO] Skipping bill 1993770 - already processed (1724/2605)
2025-12-01 13:15:05,194 [INFO] Skipping bill 2007872 - already processed (1725/2605)
2025-12-01 13:15:05,194 [INFO] Skipping bill 1936766 - already processed (1726/2605)
2025-12-01 13:15:05,194 [INFO] Skipping bill 1676049 - already processed (1727/2605)
2025-12-01 13:15:05,194 [INFO] Processing 1728/2605: Bill ID 1704512
2025-12-01 13:15:05,769 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:15:05,770 [ERROR] Failed to generate report for bill 1704512: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 178116 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
           ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 178116 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:15:06,778 [INFO] Skipping bill 1828750 - already processed (1729/2605)
2025-12-01 13:15:06,779 [INFO] Skipping bill 1823594 - already processed (1730/2605)
2025-12-01 13:15:06,779 [INFO] Skipping bill 1820331 - already processed (1731/2605)
2025-12-01 13:15:06,779 [INFO] Skipping bill 1810219 - already processed (1732/2605)
2025-12-01 13:15:06,779 [INFO] Skipping bill 1813477 - already processed (1733/2605)
2025-12-01 13:15:06,779 [INFO] Skipping bill 1858814 - already processed (1734/2605)
2025-12-01 13:15:06,780 [INFO] Skipping bill 1882805 - already processed (1735/2605)
2025-12-01 13:15:06,780 [INFO] Skipping bill 1811586 - already processed (1736/2605)
2025-12-01 13:15:06,780 [INFO] Skipping bill 1794392 - already processed (1737/2605)
2025-12-01 13:15:06,780 [INFO] Processing 1738/2605: Bill ID 1844899
2025-12-01 13:15:07,304 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:15:07,305 [ERROR] Failed to generate report for bill 1844899: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 150202 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:15:08,315 [INFO] Skipping bill 1954171 - already processed (1739/2605)
2025-12-01 13:15:08,316 [INFO] Skipping bill 1911041 - already processed (1740/2605)
2025-12-01 13:15:08,316 [INFO] Skipping bill 1963098 - already processed (1741/2605)
2025-12-01 13:15:08,316 [INFO] Skipping bill 1943827 - already processed (1742/2605)
2025-12-01 13:15:08,316 [INFO] Skipping bill 1968353 - already processed (1743/2605)
2025-12-01 13:15:08,317 [INFO] Skipping bill 1981617 - already processed (1744/2605)
2025-12-01 13:15:08,317 [INFO] Skipping bill 1995499 - already processed (1745/2605)
2025-12-01 13:15:08,317 [INFO] Skipping bill 1954569 - already processed (1746/2605)
2025-12-01 13:15:08,317 [INFO] Skipping bill 1950395 - already processed (1747/2605)
2025-12-01 13:15:08,317 [INFO] Skipping bill 1989323 - already processed (1748/2605)
2025-12-01 13:15:08,317 [INFO] Skipping bill 1904576 - already processed (1749/2605)
2025-12-01 13:15:08,317 [INFO] Skipping bill 1968434 - already processed (1750/2605)
2025-12-01 13:15:08,318 [INFO] Processing 1751/2605: Bill ID 2046115
2025-12-01 13:15:09,249 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:15:09,252 [ERROR] Failed to generate report for bill 2046115: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 321718 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:15:10,262 [INFO] Skipping bill 1912099 - already processed (1752/2605)
2025-12-01 13:15:10,263 [INFO] Skipping bill 1946923 - already processed (1753/2605)
2025-12-01 13:15:10,263 [INFO] Processing 1754/2605: Bill ID 2046119
2025-12-01 13:15:11,018 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:15:11,020 [ERROR] Failed to generate report for bill 2046119: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 259421 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:15:12,032 [INFO] Processing 1755/2605: Bill ID 1897901
2025-12-01 13:15:13,245 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:15:13,248 [ERROR] Failed to generate report for bill 1897901: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 499565 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:15:14,259 [INFO] Processing 1756/2605: Bill ID 1948482
2025-12-01 13:15:15,194 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:15:15,197 [ERROR] Failed to generate report for bill 1948482: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 283315 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:15:16,209 [INFO] Skipping bill 1800317 - already processed (1757/2605)
2025-12-01 13:15:16,210 [INFO] Skipping bill 1800156 - already processed (1758/2605)
2025-12-01 13:15:16,210 [INFO] Skipping bill 1854552 - already processed (1759/2605)
2025-12-01 13:15:16,211 [INFO] Skipping bill 1680053 - already processed (1760/2605)
2025-12-01 13:15:16,211 [INFO] Skipping bill 1682772 - already processed (1761/2605)
2025-12-01 13:15:16,211 [INFO] Skipping bill 1737434 - already processed (1762/2605)
2025-12-01 13:15:16,211 [INFO] Skipping bill 1981655 - already processed (1763/2605)
2025-12-01 13:15:16,211 [INFO] Skipping bill 1982851 - already processed (1764/2605)
2025-12-01 13:15:16,211 [INFO] Skipping bill 1934587 - already processed (1765/2605)
2025-12-01 13:15:16,212 [INFO] Skipping bill 1981303 - already processed (1766/2605)
2025-12-01 13:15:16,212 [INFO] Skipping bill 1983676 - already processed (1767/2605)
2025-12-01 13:15:16,212 [INFO] Skipping bill 1969845 - already processed (1768/2605)
2025-12-01 13:15:16,212 [INFO] Skipping bill
1983355 - already processed (1769/2605) 2025-12-01 13:15:16,212 [INFO] Skipping bill 2009795 - already processed (1770/2605) 2025-12-01 13:15:16,213 [INFO] Skipping bill 1973485 - already processed (1771/2605) 2025-12-01 13:15:16,213 [INFO] Skipping bill 1967494 - already processed (1772/2605) 2025-12-01 13:15:16,213 [INFO] Skipping bill 1973283 - already processed (1773/2605) 2025-12-01 13:15:16,214 [INFO] Skipping bill 1639846 - already processed (1774/2605) 2025-12-01 13:15:16,214 [INFO] Skipping bill 1646426 - already processed (1775/2605) 2025-12-01 13:15:16,214 [INFO] Skipping bill 1673591 - already processed (1776/2605) 2025-12-01 13:15:16,214 [INFO] Skipping bill 1639749 - already processed (1777/2605) 2025-12-01 13:15:16,214 [INFO] Skipping bill 1655379 - already processed (1778/2605) 2025-12-01 13:15:16,214 [INFO] Skipping bill 1630766 - already processed (1779/2605) 2025-12-01 13:15:16,214 [INFO] Skipping bill 1630878 - already processed (1780/2605) 2025-12-01 13:15:16,215 [INFO] Skipping bill 1630898 - already processed (1781/2605) 2025-12-01 13:15:16,215 [INFO] Skipping bill 1645265 - already processed (1782/2605) 2025-12-01 13:15:16,215 [INFO] Skipping bill 1650459 - already processed (1783/2605) 2025-12-01 13:15:16,215 [INFO] Skipping bill 1645172 - already processed (1784/2605) 2025-12-01 13:15:16,215 [INFO] Skipping bill 1630804 - already processed (1785/2605) 2025-12-01 13:15:16,215 [INFO] Skipping bill 1630761 - already processed (1786/2605) 2025-12-01 13:15:16,215 [INFO] Skipping bill 1652712 - already processed (1787/2605) 2025-12-01 13:15:16,216 [INFO] Skipping bill 1633968 - already processed (1788/2605) 2025-12-01 13:15:16,216 [INFO] Skipping bill 1644865 - already processed (1789/2605) 2025-12-01 13:15:16,216 [INFO] Skipping bill 1645061 - already processed (1790/2605) 2025-12-01 13:15:16,216 [INFO] Skipping bill 1809843 - already processed (1791/2605) 2025-12-01 13:15:16,216 [INFO] Skipping bill 1811981 - already processed (1792/2605) 
2025-12-01 13:15:16,216 [INFO] Skipping bill 1812040 - already processed (1793/2605)
2025-12-01 13:15:16,216 [INFO] Skipping bill 1798563 - already processed (1794/2605)
2025-12-01 13:15:16,216 [INFO] Skipping bill 1807894 - already processed (1795/2605)
2025-12-01 13:15:16,216 [INFO] Skipping bill 1798580 - already processed (1796/2605)
2025-12-01 13:15:16,216 [INFO] Skipping bill 1800951 - already processed (1797/2605)
2025-12-01 13:15:16,216 [INFO] Skipping bill 1808295 - already processed (1798/2605)
2025-12-01 13:15:16,216 [INFO] Skipping bill 1799462 - already processed (1799/2605)
2025-12-01 13:15:16,216 [INFO] Skipping bill 1808024 - already processed (1800/2605)
2025-12-01 13:15:16,217 [INFO] Skipping bill 1807991 - already processed (1801/2605)
2025-12-01 13:15:16,217 [INFO] Skipping bill 1812376 - already processed (1802/2605)
2025-12-01 13:15:16,217 [INFO] Skipping bill 1822475 - already processed (1803/2605)
2025-12-01 13:15:16,217 [INFO] Skipping bill 1811644 - already processed (1804/2605)
2025-12-01 13:15:16,217 [INFO] Skipping bill 1794980 - already processed (1805/2605)
2025-12-01 13:15:16,217 [INFO] Skipping bill 1808264 - already processed (1806/2605)
2025-12-01 13:15:16,217 [INFO] Skipping bill 1801793 - already processed (1807/2605)
2025-12-01 13:15:16,217 [INFO] Skipping bill 1799221 - already processed (1808/2605)
2025-12-01 13:15:16,217 [INFO] Skipping bill 1822208 - already processed (1809/2605)
2025-12-01 13:15:16,217 [INFO] Skipping bill 1800673 - already processed (1810/2605)
2025-12-01 13:15:16,217 [INFO] Skipping bill 1809026 - already processed (1811/2605)
2025-12-01 13:15:16,217 [INFO] Skipping bill 1812182 - already processed (1812/2605)
2025-12-01 13:15:16,217 [INFO] Skipping bill 1886330 - already processed (1813/2605)
2025-12-01 13:15:16,217 [INFO] Skipping bill 1904645 - already processed (1814/2605)
2025-12-01 13:15:16,217 [INFO] Skipping bill 1911036 - already processed (1815/2605)
2025-12-01 13:15:16,218 [INFO] Skipping bill 1904674 - already processed (1816/2605)
2025-12-01 13:15:16,218 [INFO] Skipping bill 1901323 - already processed (1817/2605)
2025-12-01 13:15:16,218 [INFO] Skipping bill 1904347 - already processed (1818/2605)
2025-12-01 13:15:16,218 [INFO] Skipping bill 1925485 - already processed (1819/2605)
2025-12-01 13:15:16,218 [INFO] Skipping bill 1886222 - already processed (1820/2605)
2025-12-01 13:15:16,218 [INFO] Skipping bill 1905613 - already processed (1821/2605)
2025-12-01 13:15:16,218 [INFO] Skipping bill 1912330 - already processed (1822/2605)
2025-12-01 13:15:16,218 [INFO] Skipping bill 1914968 - already processed (1823/2605)
2025-12-01 13:15:16,218 [INFO] Skipping bill 1925408 - already processed (1824/2605)
2025-12-01 13:15:16,218 [INFO] Skipping bill 1886065 - already processed (1825/2605)
2025-12-01 13:15:16,218 [INFO] Skipping bill 1905445 - already processed (1826/2605)
2025-12-01 13:15:16,218 [INFO] Skipping bill 1905965 - already processed (1827/2605)
2025-12-01 13:15:16,218 [INFO] Skipping bill 1886188 - already processed (1828/2605)
2025-12-01 13:15:16,219 [INFO] Skipping bill 1905894 - already processed (1829/2605)
2025-12-01 13:15:16,219 [INFO] Skipping bill 1912145 - already processed (1830/2605)
2025-12-01 13:15:16,219 [INFO] Skipping bill 1927784 - already processed (1831/2605)
2025-12-01 13:15:16,219 [INFO] Skipping bill 1941702 - already processed (1832/2605)
2025-12-01 13:15:16,219 [INFO] Skipping bill 1929947 - already processed (1833/2605)
2025-12-01 13:15:16,219 [INFO] Skipping bill 1905942 - already processed (1834/2605)
2025-12-01 13:15:16,219 [INFO] Skipping bill 1912012 - already processed (1835/2605)
2025-12-01 13:15:16,219 [INFO] Skipping bill 1905698 - already processed (1836/2605)
2025-12-01 13:15:16,219 [INFO] Skipping bill 1886051 - already processed (1837/2605)
2025-12-01 13:15:16,219 [INFO] Skipping bill 1932239 - already processed (1838/2605)
2025-12-01 13:15:16,219 [INFO] Skipping bill 1932502 - already processed (1839/2605)
2025-12-01 13:15:16,219 [INFO] Skipping bill 1885937 - already processed (1840/2605)
2025-12-01 13:15:16,219 [INFO] Skipping bill 1900803 - already processed (1841/2605)
2025-12-01 13:15:16,219 [INFO] Skipping bill 1905712 - already processed (1842/2605)
2025-12-01 13:15:16,219 [INFO] Skipping bill 1905995 - already processed (1843/2605)
2025-12-01 13:15:16,219 [INFO] Skipping bill 1902641 - already processed (1844/2605)
2025-12-01 13:15:16,219 [INFO] Skipping bill 1905891 - already processed (1845/2605)
2025-12-01 13:15:16,219 [INFO] Skipping bill 1905860 - already processed (1846/2605)
2025-12-01 13:15:16,219 [INFO] Skipping bill 1908254 - already processed (1847/2605)
2025-12-01 13:15:16,219 [INFO] Skipping bill 1905920 - already processed (1848/2605)
2025-12-01 13:15:16,219 [INFO] Skipping bill 1886241 - already processed (1849/2605)
2025-12-01 13:15:16,219 [INFO] Skipping bill 1886007 - already processed (1850/2605)
2025-12-01 13:15:16,220 [INFO] Skipping bill 1896347 - already processed (1851/2605)
2025-12-01 13:15:16,220 [INFO] Skipping bill 1905982 - already processed (1852/2605)
2025-12-01 13:15:16,220 [INFO] Skipping bill 1898426 - already processed (1853/2605)
2025-12-01 13:15:16,220 [INFO] Skipping bill 1791614 - already processed (1854/2605)
2025-12-01 13:15:16,220 [INFO] Skipping bill 1792210 - already processed (1855/2605)
2025-12-01 13:15:16,220 [INFO] Skipping bill 1825997 - already processed (1856/2605)
2025-12-01 13:15:16,220 [INFO] Skipping bill 1792205 - already processed (1857/2605)
2025-12-01 13:15:16,220 [INFO] Skipping bill 1801141 - already processed (1858/2605)
2025-12-01 13:15:16,220 [INFO] Skipping bill 1796759 - already processed (1859/2605)
2025-12-01 13:15:16,220 [INFO] Skipping bill 1794124 - already processed (1860/2605)
2025-12-01 13:15:16,220 [INFO] Skipping bill 1680711 - already processed (1861/2605)
2025-12-01 13:15:16,220 [INFO] Skipping bill 1686234 - already processed (1862/2605)
2025-12-01 13:15:16,220 [INFO] Skipping bill 1813390 - already processed (1863/2605)
2025-12-01 13:15:16,220 [INFO] Skipping bill 1797745 - already processed (1864/2605)
2025-12-01 13:15:16,220 [INFO] Skipping bill 1810331 - already processed (1865/2605)
2025-12-01 13:15:16,220 [INFO] Skipping bill 1813358 - already processed (1866/2605)
2025-12-01 13:15:16,220 [INFO] Skipping bill 1657734 - already processed (1867/2605)
2025-12-01 13:15:16,220 [INFO] Processing 1868/2605: Bill ID 1644054
2025-12-01 13:15:17,443 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:15:17,445 [ERROR] Failed to generate report for bill 1644054: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 410788 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:15:18,456 [INFO] Processing 1869/2605: Bill ID 1645282
2025-12-01 13:15:19,593 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:15:19,595 [ERROR] Failed to generate report for bill 1645282: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 410770 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:15:20,608 [INFO] Processing 1870/2605: Bill ID 1644063
2025-12-01 13:15:21,207 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:15:21,211 [ERROR] Failed to generate report for bill 1644063: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 224071 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:15:21,269 [INFO] Saved 2605 reports to data/bill_reports.json
2025-12-01 13:15:21,269 [INFO] Progress: 1870/2605 - Processed: 0, Skipped: 1789, Errors: 81
2025-12-01 13:15:22,274 [INFO] Processing 1871/2605: Bill ID 1645384
2025-12-01 13:15:23,590 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:15:23,592 [ERROR] Failed to generate report for bill 1645384: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 224065 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:15:24,602 [INFO] Processing 1872/2605: Bill ID 1645468
2025-12-01 13:15:25,326 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:15:25,330 [ERROR] Failed to generate report for bill 1645468: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 242533 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:15:26,340 [INFO] Processing 1873/2605: Bill ID 1796787
2025-12-01 13:15:27,586 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:15:27,587 [ERROR] Failed to generate report for bill 1796787: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 436514 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:15:28,595 [INFO] Processing 1874/2605: Bill ID 1643905
2025-12-01 13:15:32,904 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:15:32,906 [ERROR] Failed to generate report for bill 1643905: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 242552 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:15:33,916 [INFO] Processing 1875/2605: Bill ID 1796722
2025-12-01 13:15:35,363 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:15:35,365 [ERROR] Failed to generate report for bill 1796722: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 436532 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:15:36,376 [INFO] Skipping bill 1952329 - already processed (1876/2605)
2025-12-01 13:15:36,376 [INFO] Skipping bill 1964254 - already processed (1877/2605)
2025-12-01 13:15:36,377 [INFO] Skipping bill 1904212 - already processed (1878/2605)
2025-12-01 13:15:36,377 [INFO] Skipping bill 1903879 - already processed (1879/2605)
2025-12-01 13:15:36,377 [INFO] Skipping bill 1930459 - already processed (1880/2605)
2025-12-01 13:15:36,377 [INFO] Skipping bill 1938736 - already processed (1881/2605)
2025-12-01 13:15:36,377 [INFO] Skipping bill 1941657 - already processed (1882/2605)
2025-12-01 13:15:36,378 [INFO] Skipping bill 1932498 - already processed (1883/2605)
2025-12-01 13:15:36,378 [INFO] Skipping bill 1898840 - already processed (1884/2605)
2025-12-01 13:15:36,378 [INFO] Skipping bill 1903962 - already processed (1885/2605)
2025-12-01 13:15:36,378 [INFO] Skipping bill 1943677 - already processed (1886/2605)
2025-12-01 13:15:36,378 [INFO] Skipping bill 1911202 - already processed (1887/2605)
2025-12-01 13:15:36,379 [INFO] Skipping bill
1898343 - already processed (1888/2605)
2025-12-01 13:15:36,379 [INFO] Skipping bill 1930701 - already processed (1889/2605)
2025-12-01 13:15:36,379 [INFO] Skipping bill 1911699 - already processed (1890/2605)
2025-12-01 13:15:36,379 [INFO] Skipping bill 1985707 - already processed (1891/2605)
2025-12-01 13:15:36,379 [INFO] Skipping bill 2025140 - already processed (1892/2605)
2025-12-01 13:15:36,379 [INFO] Processing 1893/2605: Bill ID 1916784
2025-12-01 13:15:37,103 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:15:37,106 [ERROR] Failed to generate report for bill 1916784: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 217357 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:15:38,115 [INFO] Processing 1894/2605: Bill ID 1908012
2025-12-01 13:15:39,562 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:15:39,564 [ERROR] Failed to generate report for bill 1908012: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 458968 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:15:40,575 [INFO] Processing 1895/2605: Bill ID 1907961
2025-12-01 13:15:42,123 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:15:42,124 [ERROR] Failed to generate report for bill 1907961: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 458948 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:15:43,130 [INFO] Processing 1896/2605: Bill ID 1907826
2025-12-01 13:15:44,170 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:15:44,172 [ERROR] Failed to generate report for bill 1907826: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 284007 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:15:45,184 [INFO] Processing 1897/2605: Bill ID 2023840
2025-12-01 13:15:47,141 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:15:47,142 [ERROR] Failed to generate report for bill 2023840: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 709732 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:15:48,153 [INFO] Processing 1898/2605: Bill ID 1907778
2025-12-01 13:15:49,188 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:15:49,190 [ERROR] Failed to generate report for bill 1907778: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 284021 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:15:50,200 [INFO] Skipping bill 1691917 - already processed (1899/2605)
2025-12-01 13:15:50,201 [INFO] Skipping bill 1695960 - already processed (1900/2605)
2025-12-01 13:15:50,201 [INFO] Skipping bill 1850601 - already processed (1901/2605)
2025-12-01 13:15:50,201 [INFO] Skipping bill 1838098 - already processed (1902/2605)
2025-12-01 13:15:50,201 [INFO] Skipping bill 1842521 - already processed (1903/2605)
2025-12-01 13:15:50,202 [INFO] Skipping bill 1809518 - already processed (1904/2605)
2025-12-01 13:15:50,202 [INFO] Skipping bill 1839623 - already processed (1905/2605)
2025-12-01 13:15:50,202 [INFO] Skipping bill 1836854 - already processed (1906/2605)
2025-12-01 13:15:50,202 [INFO] Skipping bill 1828203 - already processed (1907/2605)
2025-12-01 13:15:50,202 [INFO] Skipping bill 1823415 - already processed (1908/2605)
2025-12-01 13:15:50,202 [INFO] Processing 1909/2605: Bill ID 1809702
2025-12-01 13:15:51,133 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:15:51,136 [ERROR]
Failed to generate report for bill 1809702: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 287475 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:15:52,142 [INFO] Processing 1910/2605: Bill ID 1812739
2025-12-01 13:15:53,181 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:15:53,184 [ERROR] Failed to generate report for bill 1812739: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 287482 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:15:53,245 [INFO] Saved 2605 reports to data/bill_reports.json
2025-12-01 13:15:53,245 [INFO] Progress: 1910/2605 - Processed: 0, Skipped: 1816, Errors: 94
2025-12-01 13:15:54,250 [INFO] Skipping bill 1993190 - already processed (1911/2605)
2025-12-01 13:15:54,252 [INFO] Skipping bill 2009723 - already processed (1912/2605)
2025-12-01 13:15:54,252 [INFO] Skipping bill 1970932 - already processed (1913/2605)
2025-12-01 13:15:54,252 [INFO] Skipping bill 1990795 - already processed (1914/2605)
2025-12-01 13:15:54,252 [INFO] Skipping bill 1966877 - already processed (1915/2605)
2025-12-01 13:15:54,253 [INFO] Skipping bill 1972008 - already processed (1916/2605)
2025-12-01 13:15:54,253 [INFO] Skipping bill 1994548 - already processed (1917/2605)
2025-12-01 13:15:54,253 [INFO] Skipping bill 1991745 - already processed (1918/2605)
2025-12-01 13:15:54,253 [INFO] Skipping bill 2010818 - already processed (1919/2605)
2025-12-01 13:15:54,253 [INFO] Skipping bill 2003316 - already processed (1920/2605)
2025-12-01 13:15:54,253 [INFO] Skipping bill 2021830 - already processed (1921/2605)
2025-12-01 13:15:54,253 [INFO] Skipping bill 2009667 - already processed (1922/2605)
2025-12-01 13:15:54,253 [INFO] Skipping bill 2011559 - already processed (1923/2605)
2025-12-01 13:15:54,253 [INFO] Skipping bill 1981081 - already processed (1924/2605)
2025-12-01 13:15:54,254 [INFO] Skipping bill 1990559 - already processed (1925/2605)
2025-12-01 13:15:54,254 [INFO] Skipping bill 1968858 - already processed (1926/2605)
2025-12-01 13:15:54,254 [INFO] Skipping bill 1841344 - already processed (1927/2605)
2025-12-01 13:15:54,254 [INFO] Skipping bill 1837111 - already processed (1928/2605)
2025-12-01 13:15:54,254 [INFO] Skipping bill 1783445 - already processed (1929/2605)
2025-12-01 13:15:54,254 [INFO] Skipping bill 1854251 - already processed (1930/2605)
2025-12-01 13:15:54,254 [INFO] Skipping bill 1867071 - already processed (1931/2605)
2025-12-01 13:15:54,254 [INFO] Skipping bill 1782940 - already processed (1932/2605)
2025-12-01 13:15:54,254 [INFO] Skipping bill 1780646 - already processed (1933/2605)
2025-12-01 13:15:54,254 [INFO] Skipping bill 1781005 - already processed (1934/2605)
2025-12-01 13:15:54,254 [INFO] Processing 1935/2605: Bill ID 1709614
2025-12-01 13:15:56,662 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:15:56,665 [ERROR] Failed to generate report for bill 1709614: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 980737 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:15:57,677 [INFO] Processing 1936/2605: Bill ID 1709655
2025-12-01 13:16:00,451 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:16:00,453 [ERROR] Failed to generate report for bill 1709655: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 982574 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:16:01,464 [INFO] Skipping bill 2034598 - already processed (1937/2605)
2025-12-01 13:16:01,464 [INFO] Skipping bill 2034722 - already processed (1938/2605)
2025-12-01 13:16:01,465 [INFO] Skipping bill 2038518 - already processed (1939/2605)
2025-12-01 13:16:01,465 [INFO] Skipping bill 2039752 - already processed (1940/2605)
2025-12-01 13:16:01,465 [INFO] Skipping bill 2044087 - already processed (1941/2605)
2025-12-01 13:16:01,465 [INFO] Skipping bill 2042614 - already processed (1942/2605)
2025-12-01 13:16:01,465 [INFO] Skipping bill 2045155 - already processed (1943/2605)
2025-12-01 13:16:01,465 [INFO] Skipping bill 2045662 - already processed (1944/2605)
2025-12-01 13:16:01,465 [INFO] Processing 1945/2605: Bill ID 1974122
2025-12-01 13:16:04,037 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:16:04,038 [ERROR] Failed to generate report for bill 1974122: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens.
However, your messages resulted in 1009931 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1009931 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:16:05,043 [INFO] Processing 1946/2605: Bill ID 1974279 2025-12-01 13:16:07,516 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:16:07,516 [ERROR] Failed to generate report for bill 1974279: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1009921 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1009921 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:16:08,524 [INFO] Skipping bill 2047792 - already processed (1947/2605) 2025-12-01 13:16:08,524 [INFO] Skipping bill 1842729 - already processed (1948/2605) 2025-12-01 13:16:08,525 [INFO] Skipping bill 1842887 - already processed (1949/2605) 2025-12-01 13:16:08,525 [INFO] Skipping bill 1939111 - already processed (1950/2605) 2025-12-01 13:16:08,525 [INFO] Skipping bill 1895001 - already processed (1951/2605) 2025-12-01 13:16:08,525 [INFO] Skipping bill 1945993 - already processed (1952/2605) 2025-12-01 13:16:08,525 [INFO] Skipping bill 1945813 - already processed (1953/2605) 2025-12-01 13:16:08,525 [INFO] Skipping bill 1774433 - already processed (1954/2605) 2025-12-01 13:16:08,525 [INFO] Skipping bill 1884990 - already processed (1955/2605) 2025-12-01 13:16:08,525 [INFO] Skipping bill 1882572 - already processed (1956/2605) 2025-12-01 13:16:08,525 [INFO] Skipping bill 1784131 - already processed (1957/2605) 2025-12-01 13:16:08,525 [INFO] Skipping bill 1873726 - already processed (1958/2605) 2025-12-01 13:16:08,525 [INFO] Skipping bill 
1882205 - already processed (1959/2605) 2025-12-01 13:16:08,525 [INFO] Skipping bill 1860116 - already processed (1960/2605) 2025-12-01 13:16:08,525 [INFO] Skipping bill 1835790 - already processed (1961/2605) 2025-12-01 13:16:08,525 [INFO] Skipping bill 1835624 - already processed (1962/2605) 2025-12-01 13:16:08,525 [INFO] Skipping bill 1876647 - already processed (1963/2605) 2025-12-01 13:16:08,525 [INFO] Skipping bill 1887447 - already processed (1964/2605) 2025-12-01 13:16:08,525 [INFO] Skipping bill 1898165 - already processed (1965/2605) 2025-12-01 13:16:08,525 [INFO] Skipping bill 1780760 - already processed (1966/2605) 2025-12-01 13:16:08,525 [INFO] Skipping bill 1887744 - already processed (1967/2605) 2025-12-01 13:16:08,525 [INFO] Skipping bill 1782128 - already processed (1968/2605) 2025-12-01 13:16:08,525 [INFO] Skipping bill 1887739 - already processed (1969/2605) 2025-12-01 13:16:08,525 [INFO] Skipping bill 1885322 - already processed (1970/2605) 2025-12-01 13:16:08,525 [INFO] Skipping bill 1887646 - already processed (1971/2605) 2025-12-01 13:16:08,525 [INFO] Skipping bill 1897119 - already processed (1972/2605) 2025-12-01 13:16:08,525 [INFO] Skipping bill 1782539 - already processed (1973/2605) 2025-12-01 13:16:08,525 [INFO] Skipping bill 1880117 - already processed (1974/2605) 2025-12-01 13:16:08,525 [INFO] Skipping bill 1810734 - already processed (1975/2605) 2025-12-01 13:16:08,525 [INFO] Skipping bill 1887671 - already processed (1976/2605) 2025-12-01 13:16:08,525 [INFO] Skipping bill 1883053 - already processed (1977/2605) 2025-12-01 13:16:08,525 [INFO] Skipping bill 1861062 - already processed (1978/2605) 2025-12-01 13:16:08,525 [INFO] Skipping bill 1775461 - already processed (1979/2605) 2025-12-01 13:16:08,525 [INFO] Skipping bill 1792331 - already processed (1980/2605) 2025-12-01 13:16:08,525 [INFO] Skipping bill 1765384 - already processed (1981/2605) 2025-12-01 13:16:08,525 [INFO] Skipping bill 1863023 - already processed (1982/2605) 
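Every failure in this log is the same 400: `create_detailed_report` serializes the full bill JSON into the prompt, and for large bills that exceeds the model's 128,000-token window. A pre-flight guard before `chain.invoke` could trim oversized inputs instead of letting the request fail. The sketch below is only an illustration: `truncate_bill_json`, the 4-characters-per-token heuristic, and the budget constants are hypothetical and not part of generate_reports.py.

```python
import json

MAX_CONTEXT = 128_000   # context window from the error messages above
PROMPT_BUDGET = 8_000   # assumed headroom for the prompt template and completion
CHARS_PER_TOKEN = 4     # rough heuristic; a real tokenizer would be more precise

def truncate_bill_json(bill: dict) -> str:
    """Serialize a bill and trim it so the request stays under the context limit."""
    text = json.dumps(bill)
    max_chars = (MAX_CONTEXT - PROMPT_BUDGET) * CHARS_PER_TOKEN
    if len(text) <= max_chars:
        return text
    # A real fix might first drop bulky fields (e.g. full bill text versions)
    # before resorting to hard truncation of the serialized JSON.
    return text[:max_chars]
```

With a guard like this, a 1,000,000-token bill would still reach the API, at the cost of the model only seeing the leading portion of the document.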
2025-12-01 13:16:08,525 [INFO] Skipping bill 1883034 - already processed (1983/2605)
2025-12-01 13:16:08,525 [INFO] Skipping bill 1886748 - already processed (1984/2605)
2025-12-01 13:16:08,525 [INFO] Skipping bill 1886756 - already processed (1985/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1885278 - already processed (1986/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1784087 - already processed (1987/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1886439 - already processed (1988/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1877586 - already processed (1989/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1888775 - already processed (1990/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1773844 - already processed (1991/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1857956 - already processed (1992/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1775721 - already processed (1993/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1861016 - already processed (1994/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1884504 - already processed (1995/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1892975 - already processed (1996/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1886714 - already processed (1997/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1877214 - already processed (1998/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1779520 - already processed (1999/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1882161 - already processed (2000/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1793734 - already processed (2001/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1885501 - already processed (2002/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1887169 - already processed (2003/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1877680 - already processed (2004/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1887282 - already processed (2005/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1774766 - already processed (2006/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1774961 - already processed (2007/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1866654 - already processed (2008/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1779127 - already processed (2009/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1882224 - already processed (2010/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1892198 - already processed (2011/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1759862 - already processed (2012/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1888377 - already processed (2013/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1894701 - already processed (2014/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1864751 - already processed (2015/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1772453 - already processed (2016/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1885309 - already processed (2017/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1886447 - already processed (2018/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1848736 - already processed (2019/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1884301 - already processed (2020/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1881976 - already processed (2021/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1885426 - already processed (2022/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1775334 - already processed (2023/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1884442 - already processed (2024/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1881980 - already processed (2025/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1893238 - already processed (2026/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1865594 - already processed (2027/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1872732 - already processed (2028/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1885341 - already processed (2029/2605)
2025-12-01 13:16:08,526 [INFO] Skipping bill 1764018 - already processed (2030/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1887315 - already processed (2031/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1751404 - already processed (2032/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1888249 - already processed (2033/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1885249 - already processed (2034/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1881398 - already processed (2035/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1866637 - already processed (2036/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1770194 - already processed (2037/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1775580 - already processed (2038/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1784705 - already processed (2039/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1831382 - already processed (2040/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1885274 - already processed (2041/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1892393 - already processed (2042/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1877691 - already processed (2043/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1776083 - already processed (2044/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1760978 - already processed (2045/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1764682 - already processed (2046/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1880344 - already processed (2047/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1886698 - already processed (2048/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1876488 - already processed (2049/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1765330 - already processed (2050/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1887359 - already processed (2051/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1771744 - already processed (2052/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1831359 - already processed (2053/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1774102 - already processed (2054/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1774479 - already processed (2055/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1794846 - already processed (2056/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1894867 - already processed (2057/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1774859 - already processed (2058/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1884522 - already processed (2059/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1866979 - already processed (2060/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1886705 - already processed (2061/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1898170 - already processed (2062/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1885330 - already processed (2063/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1792286 - already processed (2064/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1892877 - already processed (2065/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1884177 - already processed (2066/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1774713 - already processed (2067/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1774626 - already processed (2068/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1884513 - already processed (2069/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1887362 - already processed (2070/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1893236 - already processed (2071/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1883668 - already processed (2072/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1831371 - already processed (2073/2605)
2025-12-01 13:16:08,527 [INFO] Skipping bill 1885671 - already processed (2074/2605)
2025-12-01 13:16:08,528 [INFO] Skipping bill 1885535 - already processed (2075/2605)
2025-12-01 13:16:08,528 [INFO] Skipping bill 1888766 - already processed (2076/2605)
2025-12-01 13:16:08,528 [INFO] Skipping bill 1892506 - already processed (2077/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1892532 - already processed (2078/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1878820 - already processed (2079/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1884926 - already processed (2080/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1895881 - already processed (2081/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1778284 - already processed (2082/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1770920 - already processed (2083/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1650801 - already processed (2084/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1883378 - already processed (2085/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1683970 - already processed (2086/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1772792 - already processed (2087/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1759623 - already processed (2088/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1760525 - already processed (2089/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1862531 - already processed (2090/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1767461 - already processed (2091/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1776485 - already processed (2092/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1871231 - already processed (2093/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1887711 - already processed (2094/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1893243 - already processed (2095/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1701254 - already processed (2096/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1897456 - already processed (2097/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1775615 - already processed (2098/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1794843 - already processed (2099/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 
1810720 - already processed (2100/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1894308 - already processed (2101/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1894683 - already processed (2102/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1842456 - already processed (2103/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1885281 - already processed (2104/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1759897 - already processed (2105/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1860079 - already processed (2106/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1746098 - already processed (2107/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1897489 - already processed (2108/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1887287 - already processed (2109/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1885252 - already processed (2110/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1892936 - already processed (2111/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1732925 - already processed (2112/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1746069 - already processed (2113/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1774408 - already processed (2114/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1772182 - already processed (2115/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1884422 - already processed (2116/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1687118 - already processed (2117/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1784726 - already processed (2118/2605) 2025-12-01 13:16:08,528 [INFO] Skipping bill 1762912 - already processed (2119/2605) 2025-12-01 13:16:08,529 [INFO] Skipping bill 1898405 - already processed (2120/2605) 2025-12-01 13:16:08,529 [INFO] Processing 2121/2605: Bill ID 1884189 2025-12-01 13:16:10,075 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:16:10,076 [ERROR] Failed to generate report for bill 
1884189: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 553725 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 553725 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:16:11,084 [INFO] Skipping bill 1899847 - already processed (2122/2605) 2025-12-01 13:16:11,084 [INFO] Skipping bill 1732984 - already processed (2123/2605) 2025-12-01 13:16:11,084 [INFO] Skipping bill 1746089 - already processed (2124/2605) 2025-12-01 13:16:11,084 [INFO] Skipping bill 1766726 - already processed (2125/2605) 2025-12-01 13:16:11,084 [INFO] Skipping bill 1769804 - already processed (2126/2605) 2025-12-01 13:16:11,084 [INFO] Skipping bill 1897097 - already processed (2127/2605) 2025-12-01 13:16:11,085 [INFO] Processing 2128/2605: Bill ID 1774177 2025-12-01 13:16:12,532 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:16:12,533 [ERROR] Failed to generate report for bill 1774177: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 563143 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(messages, stop=stop, run_manager=run_manager, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 563143 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:16:13,539 [INFO] Skipping bill 1757049 - already processed (2129/2605)
2025-12-01 13:16:13,539 [INFO] Skipping bill 1784298 - already processed (2130/2605)
2025-12-01 13:16:13,539 [INFO] Skipping bill 1785108 - already processed (2131/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1772128 - already processed (2132/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1879910 - already processed (2133/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1777717 - already processed (2134/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1843401 - already processed (2135/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1774203 - already processed (2136/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1892268 - already processed (2137/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1774216 - already processed (2138/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1868870 - already processed (2139/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1770792 - already processed (2140/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1894823 - already processed (2141/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1885629 - already processed (2142/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1866980 - already processed (2143/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1826236 - already processed (2144/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1860115 - already processed (2145/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1767424 - already processed (2146/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1877069 - already processed (2147/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1865576 - already processed (2148/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1771076 - already processed (2149/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1755580 - already processed (2150/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1885029 - already processed (2151/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1770955 - already processed (2152/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1772617 - already processed (2153/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1760193 - already processed (2154/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1871212 - already processed (2155/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1887934 - already processed (2156/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1879177 - already processed (2157/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1897536 - already processed (2158/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1854133 - already processed (2159/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1761508 - already processed (2160/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1777284 - already processed (2161/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1774079 - already processed (2162/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1896271 - already processed (2163/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1897312 - already processed (2164/2605)
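The `context_length_exceeded` errors in this log come from bills whose serialized JSON alone exceeds the model's 128,000-token window (e.g. 563,143 tokens). A simple mitigation is to cap the prompt size before `chain.invoke` is called. The sketch below is a minimal, assumption-laden illustration of that idea, not the actual `generate_reports.py` code: `PROMPT_BUDGET`, `truncate_bill_json`, and the ~4-characters-per-token heuristic are all hypothetical; an exact count would use a tokenizer such as tiktoken.

```python
# Sketch: shrink a bill record until its JSON fits a rough token budget.
# Assumes the oversized content lives in large string fields (e.g. full bill
# text). All names and constants here are illustrative, not from the project.
import json

PROMPT_BUDGET = 100_000  # tokens; leaves headroom below the 128k context limit
CHARS_PER_TOKEN = 4      # crude heuristic; a real tokenizer would be exact

def estimate_tokens(text: str) -> int:
    """Rough token estimate from character count."""
    return len(text) // CHARS_PER_TOKEN + 1

def truncate_bill_json(bill: dict, budget: int = PROMPT_BUDGET) -> str:
    """Serialize the bill, halving the largest text field until it fits."""
    bill = dict(bill)  # shallow copy; don't mutate the caller's record
    serialized = json.dumps(bill)
    while estimate_tokens(serialized) > budget:
        # Only trim fields that are still large enough to matter.
        candidates = [k for k, v in bill.items()
                      if isinstance(v, str) and len(v) > 200]
        if not candidates:
            break  # nothing left to trim; caller must handle the overflow
        key = max(candidates, key=lambda k: len(bill[k]))
        bill[key] = bill[key][: len(bill[key]) // 2] + "...[truncated]"
        serialized = json.dumps(bill)
    return serialized
```

The truncated string could then be passed as `{"bill_json": truncate_bill_json(bill)}`, trading completeness of the bill text for a request that the API will accept.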
2025-12-01 13:16:13,540 [INFO] Skipping bill 1774750 - already processed (2165/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1873661 - already processed (2166/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1782516 - already processed (2167/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1782446 - already processed (2168/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1866649 - already processed (2169/2605)
2025-12-01 13:16:13,540 [INFO] Skipping bill 1866664 - already processed (2170/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1707867 - already processed (2171/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1872167 - already processed (2172/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1759875 - already processed (2173/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1789214 - already processed (2174/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1872153 - already processed (2175/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1760229 - already processed (2176/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1774942 - already processed (2177/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1694059 - already processed (2178/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1829219 - already processed (2179/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1679271 - already processed (2180/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1883365 - already processed (2181/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1780777 - already processed (2182/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1707919 - already processed (2183/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1860113 - already processed (2184/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1781933 - already processed (2185/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1751388 - already processed (2186/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1754500 - already processed (2187/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1772123 - already processed (2188/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1892924 - already processed (2189/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1778422 - already processed (2190/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1897294 - already processed (2191/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1769557 - already processed (2192/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1747003 - already processed (2193/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1775420 - already processed (2194/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1885460 - already processed (2195/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1778494 - already processed (2196/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1778507 - already processed (2197/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1746072 - already processed (2198/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1747808 - already processed (2199/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1764055 - already processed (2200/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1765960 - already processed (2201/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1766587 - already processed (2202/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1766736 - already processed (2203/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1771518 - already processed (2204/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1772577 - already processed (2205/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1772933 - already processed (2206/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1773303 - already processed (2207/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1775354 - already processed (2208/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1777649 - already processed (2209/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1783786 - already processed (2210/2605)
2025-12-01 13:16:13,541 [INFO] Skipping bill 1783927 - already processed (2211/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1791735 - already processed (2212/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1791984 - already processed (2213/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1860914 - already processed (2214/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1874964 - already processed (2215/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1876702 - already processed (2216/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1878298 - already processed (2217/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1878970 - already processed (2218/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1878883 - already processed (2219/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1880262 - already processed (2220/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1880301 - already processed (2221/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1880312 - already processed (2222/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1882770 - already processed (2223/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1889897 - already processed (2224/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1892711 - already processed (2225/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1897258 - already processed (2226/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1881528 - already processed (2227/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1782893 - already processed (2228/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1834554 - already processed (2229/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1774082 - already processed (2230/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1783631 - already processed (2231/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1879351 - already processed (2232/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1707921 - already processed (2233/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1872751 - already processed (2234/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1848738 - already processed (2235/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1882577 - already processed (2236/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1880072 - already processed (2237/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1880345 - already processed (2238/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1892804 - already processed (2239/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1860940 - already processed (2240/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1766003 - already processed (2241/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1775441 - already processed (2242/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1758619 - already processed (2243/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1894461 - already processed (2244/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1778171 - already processed (2245/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1778004 - already processed (2246/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1832839 - already processed (2247/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1774844 - already processed (2248/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1751449 - already processed (2249/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1751346 - already processed (2250/2605)
2025-12-01 13:16:13,542 [INFO] Skipping bill 1759080 - already processed (2251/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1882756 - already processed (2252/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1882766 - already processed (2253/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1887196 - already processed (2254/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1889949 - already processed (2255/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1887718 - already processed (2256/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1896232 - already processed (2257/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1783562 - already processed (2258/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1681772 - already processed (2259/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1871711 - already processed (2260/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1874986 - already processed (2261/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1772204 - already processed (2262/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1884912 - already processed (2263/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1888175 - already processed (2264/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1832721 - already processed (2265/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1887649 - already processed (2266/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1887704 - already processed (2267/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1881672 - already processed (2268/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1777454 - already processed (2269/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1882397 - already processed (2270/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1766671 - already processed (2271/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1775036 - already processed (2272/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1694305 - already processed (2273/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1863407 - already processed (2274/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1746051 - already processed (2275/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1882537 - already processed (2276/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1873551 - already processed (2277/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1762960 - already processed (2278/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1887303 - already processed (2279/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1887118 - already processed (2280/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1775679 - already processed (2281/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1882373 - already processed (2282/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1862520 - already processed (2283/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1886817 - already processed (2284/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1750558 - already processed (2285/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1750336 - already processed (2286/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1694173 - already processed (2287/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1864746 - already processed (2288/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1887915 - already processed (2289/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1774093 - already processed (2290/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1650659 - already processed (2291/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1694050 - already processed (2292/2605)
2025-12-01 13:16:13,543 [INFO] Skipping bill 1771092 - already processed (2293/2605)
2025-12-01 13:16:13,544 [INFO] Skipping bill 1876599 - already processed (2294/2605)
2025-12-01 13:16:13,544 [INFO] Skipping bill 1835788 - already processed (2295/2605)
2025-12-01 13:16:13,544 [INFO] Skipping bill 1782691 - already processed (2296/2605)
2025-12-01 13:16:13,544 [INFO] Skipping bill 1876668 - already processed (2297/2605)
2025-12-01 13:16:13,544 [INFO] Skipping bill 1729737 - already processed (2298/2605)
2025-12-01 13:16:13,544 [INFO] Skipping bill 1766627 - already processed (2299/2605)
2025-12-01 13:16:13,544 [INFO] Skipping bill 1885388 - already processed (2300/2605)
2025-12-01 13:16:13,544 [INFO] Skipping bill 1887130 - already processed (2301/2605)
2025-12-01 13:16:13,544 [INFO] Skipping bill 1775597 - already processed (2302/2605)
2025-12-01 13:16:13,544 [INFO] Skipping bill 1793999 - already processed (2303/2605)
2025-12-01 13:16:13,544 [INFO] Skipping bill 1789198 - already processed (2304/2605)
2025-12-01 13:16:13,544 [INFO] Skipping bill 1888330 - already processed (2305/2605)
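The long runs of "Skipping bill … already processed" entries in this log reflect a resume-on-restart pattern: existing reports are loaded from `data/bill_reports.json`, already-processed bill IDs are skipped, and failures are logged without aborting the run. A minimal sketch of that idea follows; the function names (`run_with_resume`, `load_reports`) and field names are assumptions for illustration, not the actual `generate_reports.py` implementation.

```python
# Sketch of resume-safe batch processing: skip bills with saved reports,
# catch per-bill failures, and checkpoint to disk after each success.
import json
from pathlib import Path

def load_reports(path: Path) -> dict:
    """Load existing reports keyed by bill ID; empty dict on first run."""
    if path.exists():
        return json.loads(path.read_text())
    return {}

def run_with_resume(bills, path: Path, generate):
    reports = load_reports(path)
    total = len(bills)
    for i, bill in enumerate(bills, start=1):
        bill_id = str(bill["bill_id"])
        if bill_id in reports:
            print(f"Skipping bill {bill_id} - already processed ({i}/{total})")
            continue
        try:
            reports[bill_id] = generate(bill)
        except Exception as exc:
            # One bad bill (e.g. a context-length error) doesn't stop the run.
            print(f"Failed to generate report for bill {bill_id}: {exc}")
            continue
        path.write_text(json.dumps(reports))  # checkpoint after each success
    return reports
```

Checkpointing after every successful bill is what makes the later restarts in this log cheap: a rerun only pays for the bills that previously failed or are new.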
2025-12-01 13:16:13,544 [INFO] Skipping bill 1882746 - already processed (2306/2605)
2025-12-01 13:16:13,544 [INFO] Skipping bill 1694182 - already processed (2307/2605)
2025-12-01 13:16:13,544 [INFO] Skipping bill 1860920 - already processed (2308/2605)
2025-12-01 13:16:13,544 [INFO] Skipping bill 1774448 - already processed (2309/2605)
2025-12-01 13:16:13,544 [INFO] Skipping bill 1774405 - already processed (2310/2605)
2025-12-01 13:16:13,544 [INFO] Skipping bill 1876990 - already processed (2311/2605)
2025-12-01 13:16:13,544 [INFO] Skipping bill 1876679 - already processed (2312/2605)
2025-12-01 13:16:13,544 [INFO] Skipping bill 1881973 - already processed (2313/2605)
2025-12-01 13:16:13,544 [INFO] Skipping bill 1717622 - already processed (2314/2605)
2025-12-01 13:16:13,544 [INFO] Skipping bill 1885510 - already processed (2315/2605)
2025-12-01 13:16:13,544 [INFO] Skipping bill 1871269 - already processed (2316/2605)
2025-12-01 13:16:13,544 [INFO] Skipping bill 1774266 - already processed (2317/2605)
2025-12-01 13:16:13,544 [INFO] Skipping bill 1785924 - already processed (2318/2605)
2025-12-01 13:16:13,544 [INFO] Skipping bill 1779428 - already processed (2319/2605)
2025-12-01 13:16:13,544 [INFO] Skipping bill 1775195 - already processed (2320/2605)
2025-12-01 13:16:13,544 [INFO] Skipping bill 1775134 - already processed (2321/2605)
2025-12-01 13:16:13,544 [INFO] Skipping bill 1743524 - already processed (2322/2605)
2025-12-01 13:16:13,544 [INFO] Skipping bill 1757473 - already processed (2323/2605)
2025-12-01 13:16:13,544 [INFO] Processing 2324/2605: Bill ID 1857970
2025-12-01 13:16:14,274 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:16:14,274 [ERROR] Failed to generate report for bill 1857970: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 267230 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(messages, stop=stop, run_manager=run_manager, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 267230 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:16:15,278 [INFO] Skipping bill 1883678 - already processed (2325/2605)
2025-12-01 13:16:15,278 [INFO] Processing 2326/2605: Bill ID 1897245
2025-12-01 13:16:19,140 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:16:19,141 [ERROR] Failed to generate report for bill 1897245: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 614802 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(messages, stop=stop, run_manager=run_manager, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 614802 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:16:20,148 [INFO] Skipping bill 1894517 - already processed (2327/2605)
2025-12-01 13:16:20,148 [INFO] Processing 2328/2605: Bill ID 1898241
2025-12-01 13:16:21,035 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:16:41,328 [INFO] Loaded 2605 existing reports from data/bill_reports.json
2025-12-01 13:16:41,328 [INFO] Starting report generation for 2605 bills
2025-12-01 13:16:41,328 [INFO] Skipping bill 1769530 - already processed (1/2605)
2025-12-01 13:16:41,328 [INFO] Skipping bill 1765118 - already processed (2/2605)
2025-12-01 13:16:41,328 [INFO] Skipping bill 1745017 - already processed (3/2605)
2025-12-01 13:16:41,328 [INFO] Skipping bill 1745230 - already processed (4/2605)
2025-12-01 13:16:41,328 [INFO] Skipping bill 1847915 - already processed (5/2605)
2025-12-01 13:16:41,328 [INFO] Skipping bill 1847210 - already processed (6/2605)
2025-12-01 13:16:41,328 [INFO] Skipping bill 1847980 - already processed (7/2605)
2025-12-01 13:16:41,328 [INFO] Skipping bill 1840627 - already processed (8/2605)
2025-12-01 13:16:41,328 [INFO] Skipping bill 1840340 - already processed (9/2605)
2025-12-01 13:16:41,328 [INFO] Skipping bill 2019785 - already processed (10/2605)
2025-12-01 13:16:41,328 [INFO] Skipping bill 1983607 - already processed (11/2605)
2025-12-01 13:16:41,328 [INFO] Skipping bill 2019702 - already processed (12/2605)
2025-12-01 13:16:41,328 [INFO] Skipping bill 1987220 - already processed (13/2605)
2025-12-01 13:16:41,329 [INFO] Skipping bill 2022389 - already processed (14/2605)
2025-12-01 13:16:41,329 [INFO] Skipping bill 1959465 - already processed (15/2605)
2025-12-01 13:16:41,329 [INFO] Skipping bill 2023982 - already processed (16/2605)
2025-12-01 13:16:41,329 [INFO] Skipping bill 2019732 - already processed (17/2605)
2025-12-01 13:16:41,329 [INFO] Skipping bill 1969654 - already processed (18/2605)
2025-12-01 13:16:41,329 [INFO] Skipping bill 1956622 - already processed (19/2605)
2025-12-01 13:16:41,329 [INFO] Skipping bill 1957166 - already processed (20/2605)
2025-12-01 13:16:41,329 [INFO] Skipping bill 1869518 - already processed (21/2605)
2025-12-01 13:16:41,329 [INFO] Skipping bill 1813560 - already processed (22/2605)
2025-12-01 13:16:41,329 [INFO] Skipping bill 1836190 - already processed (23/2605)
2025-12-01 13:16:41,329 [INFO] Skipping bill 1851112 - already processed (24/2605)
2025-12-01 13:16:41,329 [INFO] Skipping bill 1745943 - already processed (25/2605)
2025-12-01 13:16:41,329 [INFO] Skipping bill 1737840 - already processed (26/2605)
2025-12-01 13:16:41,329 [INFO] Skipping bill 1814309 - already processed (27/2605)
2025-12-01 13:16:41,329 [INFO] Skipping bill 1851143 - already processed (28/2605)
2025-12-01 13:16:41,329 [INFO] Skipping bill 1984991 - already processed (29/2605)
2025-12-01 13:16:41,329 [INFO] Skipping bill 1912439 - already processed (30/2605)
2025-12-01 13:16:41,329 [INFO] Skipping bill 1912476 - already processed (31/2605)
2025-12-01 13:16:41,329 [INFO] Skipping bill 1940708 - already processed (32/2605)
2025-12-01 13:16:41,329 [INFO] Skipping bill 1935103 - already processed (33/2605)
2025-12-01 13:16:41,329 [INFO] Skipping bill 1685926 - already processed (34/2605)
2025-12-01 13:16:41,329 [INFO] Skipping bill 1657717 - already processed (35/2605)
2025-12-01 13:16:41,329 [INFO] Skipping bill 1683096 - already processed (36/2605)
2025-12-01 13:16:41,329 [INFO] Skipping bill 1828964 - already processed (37/2605)
2025-12-01 13:16:41,329 [INFO] Skipping bill 1830782 - already processed (38/2605)
2025-12-01 13:16:41,329 [INFO] Skipping bill 1829010 - already processed (39/2605)
2025-12-01 13:16:41,329 [INFO] Skipping bill 1810349 - already processed (40/2605)
2025-12-01 13:16:41,329 [INFO] Skipping bill 1810356 - already processed (41/2605)
2025-12-01 13:16:41,329 [INFO] Skipping bill 1804209 - already processed (42/2605)
2025-12-01 13:16:41,329 [INFO] Skipping bill 1830673 - already processed (43/2605)
2025-12-01 13:16:41,329 [INFO] Skipping bill 1923768 - already processed (44/2605)
2025-12-01 13:16:41,329 [INFO] Skipping bill 1935042 - already processed (45/2605)
2025-12-01 13:16:41,329 [INFO] Skipping bill 1948089 - already processed (46/2605)
2025-12-01 13:16:41,329 [INFO] Skipping bill 1917064 - already processed (47/2605)
2025-12-01 13:16:41,330 [INFO] Skipping bill 1964274 - already processed (48/2605)
2025-12-01 13:16:41,330 [INFO] Skipping bill 1949161 - already processed (49/2605)
2025-12-01 13:16:41,330 [INFO] Skipping bill 1938396 - already processed (50/2605)
2025-12-01 13:16:41,330 [INFO] Skipping bill 1955446 - already processed (51/2605)
2025-12-01 13:16:41,330 [INFO] Skipping bill 1946736 - already processed (52/2605)
2025-12-01 13:16:41,330 [INFO] Skipping bill 2037727 - already processed (53/2605)
2025-12-01 13:16:41,330 [INFO] Skipping bill 1730253 - already processed (54/2605)
2025-12-01 13:16:41,330 [INFO] Skipping bill 1721706 - already processed (55/2605)
2025-12-01 13:16:41,330 [INFO] Skipping bill 1975090 - already processed (56/2605)
2025-12-01 13:16:41,330 [INFO] Skipping bill 1946146 - already processed (57/2605)
2025-12-01 13:16:41,330 [INFO] Skipping bill 2018186 - already processed (58/2605)
2025-12-01 13:16:41,330 [INFO] Skipping bill 2011735 - already processed (59/2605)
2025-12-01 13:16:41,330 [INFO] Skipping bill 1897622 - already processed (60/2605)
2025-12-01 13:16:41,330 [INFO] Skipping bill 1973543 - already processed (61/2605)
2025-12-01 13:16:41,330 [INFO] Skipping bill 2009462 - already processed (62/2605)
2025-12-01 13:16:41,330 [INFO] Skipping bill 2011658 - already processed (63/2605)
2025-12-01 13:16:41,330 [INFO] Skipping bill 1944017 - already processed (64/2605)
2025-12-01 13:16:41,330 [INFO] Skipping bill 1892641 - already processed (65/2605)
2025-12-01 13:16:41,330 [INFO] Skipping bill 2010078 - already processed (66/2605)
2025-12-01 13:16:41,330 [INFO] Skipping bill 1915632 - already processed (67/2605)
2025-12-01 13:16:41,330 [INFO] Skipping bill 1996393 - already processed (68/2605)
2025-12-01 13:16:41,330 [INFO] Processing 69/2605: Bill ID 1972479
2025-12-01 13:16:42,821 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:16:42,824 [ERROR] Failed to generate report for bill 1972479: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 512372 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(messages, stop=stop, run_manager=run_manager, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 512372 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:16:43,837 [INFO] Skipping bill 1848589 - already processed (70/2605) 2025-12-01 13:16:43,837 [INFO] Skipping bill 1796695 - already processed (71/2605) 2025-12-01 13:16:43,837 [INFO] Skipping bill 1834299 - already processed (72/2605) 2025-12-01 13:16:43,838 [INFO] Skipping bill 1840453 - already processed (73/2605) 2025-12-01 13:16:43,838 [INFO] Skipping bill 1847401 - already processed (74/2605) 2025-12-01 13:16:43,838 [INFO] Skipping bill 1849339 - already processed (75/2605) 2025-12-01 13:16:43,838 [INFO] Skipping bill 1845122 - already processed (76/2605) 2025-12-01 13:16:43,838 [INFO] Skipping bill 1796692 - already processed (77/2605) 2025-12-01 13:16:43,838 [INFO] Skipping bill 1846289 - already processed (78/2605) 2025-12-01 13:16:43,838 [INFO] Skipping bill 1813231 - already processed (79/2605) 2025-12-01 13:16:43,838 [INFO] Skipping bill 1848433 - already processed (80/2605) 2025-12-01 13:16:43,838 [INFO] Skipping bill 1796691 - already processed (81/2605) 2025-12-01 13:16:43,838 [INFO] Skipping bill 1848536 - already processed 
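Every failure in this run is the same context-window overflow: the serialized bill passed as `bill_json` exceeds the model's 128,000-token limit (512k, 241k, 344k, 270k, 151k tokens). A guard before `chain.invoke` could shrink the payload to fit. The sketch below is a hypothetical, dependency-free approach, not the script's actual code: the ~4-characters-per-token heuristic, the 110k-token headroom, and the strategy of halving the largest string field are all assumptions (an exact count would need a tokenizer such as tiktoken).

```python
import json

# Rough heuristic: ~4 characters per token for English text/JSON.
# (Assumption for this sketch; use a real tokenizer for exact counts.)
CHARS_PER_TOKEN = 4
MAX_INPUT_TOKENS = 110_000  # headroom below the model's 128k context limit


def shrink_bill_json(bill: dict, max_tokens: int = MAX_INPUT_TOKENS) -> str:
    """Serialize a bill dict, repeatedly halving its largest string field
    (in practice the bill's full text) until the estimated token count
    fits the context window."""
    budget = max_tokens * CHARS_PER_TOKEN
    bill = dict(bill)  # shallow copy; leave the caller's data intact
    while len(json.dumps(bill)) > budget:
        strings = [(k, v) for k, v in bill.items() if isinstance(v, str)]
        if not strings:
            break
        key, text = max(strings, key=lambda kv: len(kv[1]))
        if len(text) < 1000:
            break  # nothing big left to trim; stop rather than loop forever
        bill[key] = text[: len(text) // 2] + " ...[truncated]"
    return json.dumps(bill)
```

The truncated JSON would then be passed as `chain.invoke({"bill_json": shrink_bill_json(bill)})`; chunked summarization (e.g. map-reduce over sections) would preserve more content than truncation, at the cost of extra API calls.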
(82/2605)
2025-12-01 13:16:43,838 [INFO] Skipping bill 1819737 - already processed (83/2605)
2025-12-01 13:16:43,838 [INFO] Skipping bill 1829037 - already processed (84/2605)
2025-12-01 13:16:43,838 [INFO] Skipping bill 1712200 - already processed (85/2605)
2025-12-01 13:16:43,838 [INFO] Skipping bill 1848424 - already processed (86/2605)
2025-12-01 13:16:43,838 [INFO] Skipping bill 1814918 - already processed (87/2605)
2025-12-01 13:16:43,839 [INFO] Skipping bill 1686429 - already processed (88/2605)
2025-12-01 13:16:43,839 [INFO] Skipping bill 1848359 - already processed (89/2605)
2025-12-01 13:16:43,839 [INFO] Skipping bill 1697069 - already processed (90/2605)
2025-12-01 13:16:43,839 [INFO] Skipping bill 1848453 - already processed (91/2605)
2025-12-01 13:16:43,839 [INFO] Skipping bill 1849513 - already processed (92/2605)
2025-12-01 13:16:43,839 [INFO] Skipping bill 1848521 - already processed (93/2605)
2025-12-01 13:16:43,839 [INFO] Skipping bill 1848425 - already processed (94/2605)
2025-12-01 13:16:43,839 [INFO] Skipping bill 1702816 - already processed (95/2605)
2025-12-01 13:16:43,840 [INFO] Skipping bill 1849367 - already processed (96/2605)
2025-12-01 13:16:43,840 [INFO] Skipping bill 1849520 - already processed (97/2605)
2025-12-01 13:16:43,840 [INFO] Skipping bill 1848530 - already processed (98/2605)
2025-12-01 13:16:43,840 [INFO] Skipping bill 1712027 - already processed (99/2605)
2025-12-01 13:16:43,840 [INFO] Skipping bill 1849659 - already processed (100/2605)
2025-12-01 13:16:43,840 [INFO] Skipping bill 1848478 - already processed (101/2605)
2025-12-01 13:16:43,840 [INFO] Skipping bill 1848387 - already processed (102/2605)
2025-12-01 13:16:43,841 [INFO] Skipping bill 1845137 - already processed (103/2605)
2025-12-01 13:16:43,841 [INFO] Skipping bill 1812205 - already processed (104/2605)
2025-12-01 13:16:43,841 [INFO] Skipping bill 1798416 - already processed (105/2605)
2025-12-01 13:16:43,841 [INFO] Skipping bill 1847351 - already processed (106/2605)
2025-12-01 13:16:43,841 [INFO] Skipping bill 1693943 - already processed (107/2605)
2025-12-01 13:16:43,841 [INFO] Skipping bill 1686454 - already processed (108/2605)
2025-12-01 13:16:43,841 [INFO] Skipping bill 1847404 - already processed (109/2605)
2025-12-01 13:16:43,841 [INFO] Skipping bill 1683775 - already processed (110/2605)
2025-12-01 13:16:43,841 [INFO] Skipping bill 1835452 - already processed (111/2605)
2025-12-01 13:16:43,842 [INFO] Skipping bill 1709727 - already processed (112/2605)
2025-12-01 13:16:43,842 [INFO] Skipping bill 1849724 - already processed (113/2605)
2025-12-01 13:16:43,842 [INFO] Skipping bill 1761500 - already processed (114/2605)
2025-12-01 13:16:43,842 [INFO] Skipping bill 1697048 - already processed (115/2605)
2025-12-01 13:16:43,842 [INFO] Skipping bill 1860070 - already processed (116/2605)
2025-12-01 13:16:43,842 [INFO] Skipping bill 1771300 - already processed (117/2605)
2025-12-01 13:16:43,842 [INFO] Skipping bill 1709708 - already processed (118/2605)
2025-12-01 13:16:43,842 [INFO] Skipping bill 1848529 - already processed (119/2605)
2025-12-01 13:16:43,842 [INFO] Skipping bill 1845179 - already processed (120/2605)
2025-12-01 13:16:43,842 [INFO] Skipping bill 1849404 - already processed (121/2605)
2025-12-01 13:16:43,842 [INFO] Skipping bill 1714444 - already processed (122/2605)
2025-12-01 13:16:43,842 [INFO] Skipping bill 1824468 - already processed (123/2605)
2025-12-01 13:16:43,842 [INFO] Skipping bill 1882346 - already processed (124/2605)
2025-12-01 13:16:43,843 [INFO] Skipping bill 1885654 - already processed (125/2605)
2025-12-01 13:16:43,843 [INFO] Skipping bill 1849359 - already processed (126/2605)
2025-12-01 13:16:43,843 [INFO] Skipping bill 1840414 - already processed (127/2605)
2025-12-01 13:16:43,843 [INFO] Skipping bill 1846229 - already processed (128/2605)
2025-12-01 13:16:43,843 [INFO] Skipping bill 1707510 - already processed (129/2605)
2025-12-01 13:16:43,843 [INFO] Skipping bill 1845188 - already processed (130/2605)
2025-12-01 13:16:43,843 [INFO] Skipping bill 1848524 - already processed (131/2605)
2025-12-01 13:16:43,843 [INFO] Skipping bill 1847496 - already processed (132/2605)
2025-12-01 13:16:43,843 [INFO] Skipping bill 1883008 - already processed (133/2605)
2025-12-01 13:16:43,843 [INFO] Skipping bill 1649620 - already processed (134/2605)
2025-12-01 13:16:43,843 [INFO] Skipping bill 1667841 - already processed (135/2605)
2025-12-01 13:16:43,843 [INFO] Skipping bill 1848476 - already processed (136/2605)
2025-12-01 13:16:43,843 [INFO] Skipping bill 1649670 - already processed (137/2605)
2025-12-01 13:16:43,843 [INFO] Skipping bill 1667891 - already processed (138/2605)
2025-12-01 13:16:43,843 [INFO] Skipping bill 1649612 - already processed (139/2605)
2025-12-01 13:16:43,843 [INFO] Skipping bill 1649615 - already processed (140/2605)
2025-12-01 13:16:43,843 [INFO] Skipping bill 1667833 - already processed (141/2605)
2025-12-01 13:16:43,843 [INFO] Skipping bill 1667836 - already processed (142/2605)
2025-12-01 13:16:43,843 [INFO] Skipping bill 1649618 - already processed (143/2605)
2025-12-01 13:16:43,843 [INFO] Skipping bill 1667839 - already processed (144/2605)
2025-12-01 13:16:43,843 [INFO] Skipping bill 1649630 - already processed (145/2605)
2025-12-01 13:16:43,843 [INFO] Skipping bill 1649619 - already processed (146/2605)
2025-12-01 13:16:43,843 [INFO] Skipping bill 1667851 - already processed (147/2605)
2025-12-01 13:16:43,843 [INFO] Skipping bill 1667840 - already processed (148/2605)
2025-12-01 13:16:43,843 [INFO] Processing 149/2605: Bill ID 1865211
2025-12-01 13:16:44,893 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:16:44,894 [ERROR] Failed to generate report for bill 1865211: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 241283 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:16:45,903 [INFO] Skipping bill 1667837 - already processed (150/2605)
2025-12-01 13:16:45,903 [INFO] Skipping bill 1667892 - already processed (151/2605)
2025-12-01 13:16:45,903 [INFO] Skipping bill 1649616 - already processed (152/2605)
2025-12-01 13:16:45,903 [INFO] Skipping bill 1649671 - already processed (153/2605)
2025-12-01 13:16:45,903 [INFO] Processing 154/2605: Bill ID 1726105
2025-12-01 13:16:47,175 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:16:47,176 [ERROR] Failed to generate report for bill 1726105: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 343953 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:16:48,185 [INFO] Skipping bill 1978757 - already processed (155/2605)
2025-12-01 13:16:48,185 [INFO] Skipping bill 1980543 - already processed (156/2605)
2025-12-01 13:16:48,185 [INFO] Skipping bill 1893423 - already processed (157/2605)
2025-12-01 13:16:48,185 [INFO] Skipping bill 1964699 - already processed (158/2605)
2025-12-01 13:16:48,185 [INFO] Skipping bill 1978599 - already processed (159/2605)
2025-12-01 13:16:48,185 [INFO] Skipping bill 1980563 - already processed (160/2605)
2025-12-01 13:16:48,186 [INFO] Skipping bill 1976585 - already processed (161/2605)
2025-12-01 13:16:48,186 [INFO] Skipping bill 1904800 - already processed (162/2605)
2025-12-01 13:16:48,186 [INFO] Skipping bill 1974530 - already processed (163/2605)
2025-12-01 13:16:48,186 [INFO] Skipping bill 1964676 - already processed (164/2605)
2025-12-01 13:16:48,186 [INFO] Skipping bill 1955758 - already processed (165/2605)
2025-12-01 13:16:48,186 [INFO] Skipping bill 1941749 - already processed (166/2605)
2025-12-01 13:16:48,186 [INFO] Skipping bill 1976440 - already processed (167/2605)
2025-12-01 13:16:48,186 [INFO] Skipping bill 1978812 - already processed (168/2605)
2025-12-01 13:16:48,186 [INFO] Skipping bill 1978731 - already processed (169/2605)
2025-12-01 13:16:48,187 [INFO] Skipping bill 1949687 - already processed (170/2605)
2025-12-01 13:16:48,187 [INFO] Skipping bill 1980302 - already processed (171/2605)
2025-12-01 13:16:48,187 [INFO] Skipping bill 2032041 - already processed (172/2605)
2025-12-01 13:16:48,187 [INFO] Skipping bill 1978672 - already processed (173/2605)
2025-12-01 13:16:48,187 [INFO] Skipping bill 1955756 - already processed (174/2605)
2025-12-01 13:16:48,187 [INFO] Skipping bill 1970455 - already processed (175/2605)
2025-12-01 13:16:48,187 [INFO] Skipping bill 1978694 - already processed (176/2605)
2025-12-01 13:16:48,187 [INFO] Skipping bill 1976550 - already processed (177/2605)
2025-12-01 13:16:48,187 [INFO] Skipping bill 1908207 - already processed (178/2605)
2025-12-01 13:16:48,187 [INFO] Skipping bill 1971712 - already processed (179/2605)
2025-12-01 13:16:48,187 [INFO] Skipping bill 1919273 - already processed (180/2605)
2025-12-01 13:16:48,187 [INFO] Skipping bill 1893452 - already processed (181/2605)
2025-12-01 13:16:48,187 [INFO] Skipping bill 1971760 - already processed (182/2605)
2025-12-01 13:16:48,188 [INFO] Skipping bill 1978553 - already processed (183/2605)
2025-12-01 13:16:48,188 [INFO] Skipping bill 1980501 - already processed (184/2605)
2025-12-01 13:16:48,188 [INFO] Skipping bill 1980139 - already processed (185/2605)
2025-12-01 13:16:48,188 [INFO] Skipping bill 1908210 - already processed (186/2605)
2025-12-01 13:16:48,188 [INFO] Skipping bill 1980228 - already processed (187/2605)
2025-12-01 13:16:48,188 [INFO] Skipping bill 1947445 - already processed (188/2605)
2025-12-01 13:16:48,188 [INFO] Skipping bill 1971753 - already processed (189/2605)
2025-12-01 13:16:48,188 [INFO] Skipping bill 1943407 - already processed (190/2605)
2025-12-01 13:16:48,188 [INFO] Skipping bill 1896630 - already processed (191/2605)
2025-12-01 13:16:48,188 [INFO] Skipping bill 1953097 - already processed (192/2605)
2025-12-01 13:16:48,188 [INFO] Skipping bill 1961095 - already processed (193/2605)
2025-12-01 13:16:48,188 [INFO] Skipping bill 1953091 - already processed (194/2605)
2025-12-01 13:16:48,188 [INFO] Skipping bill 1953081 - already processed (195/2605)
2025-12-01 13:16:48,189 [INFO] Skipping bill 1978871 - already processed (196/2605)
2025-12-01 13:16:48,189 [INFO] Skipping bill 1990396 - already processed (197/2605)
2025-12-01 13:16:48,189 [INFO] Processing 198/2605: Bill ID 1980067
2025-12-01 13:16:51,447 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:16:51,449 [ERROR] Failed to generate report for bill 1980067: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 270166 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:16:52,457 [INFO] Skipping bill 1970450 - already processed (199/2605)
2025-12-01 13:16:52,458 [INFO] Skipping bill 1904793 - already processed (200/2605)
2025-12-01 13:16:52,458 [INFO] Skipping bill 1964689 - already processed (201/2605)
2025-12-01 13:16:52,458 [INFO] Skipping bill 1933300 - already processed (202/2605)
2025-12-01 13:16:52,458 [INFO] Skipping bill 2036404 - already processed (203/2605)
2025-12-01 13:16:52,458 [INFO] Skipping bill 1949685 - already processed (204/2605)
2025-12-01 13:16:52,459 [INFO] Skipping bill 1976474 - already processed (205/2605)
2025-12-01 13:16:52,459 [INFO] Skipping bill 1898373 - already processed (206/2605)
2025-12-01 13:16:52,459 [INFO] Skipping bill 2042443 - already processed (207/2605)
2025-12-01 13:16:52,459 [INFO] Skipping bill 2005483 - already processed (208/2605)
2025-12-01 13:16:52,459 [INFO] Skipping bill 1968261 - already processed (209/2605)
2025-12-01 13:16:52,459 [INFO] Skipping bill 1980234 - already processed (210/2605)
2025-12-01 13:16:52,459 [INFO] Skipping bill 1978559 - already processed (211/2605)
2025-12-01 13:16:52,460 [INFO] Skipping bill 1974545 - already processed (212/2605)
2025-12-01 13:16:52,460 [INFO] Skipping bill 1908089 - already processed (213/2605)
2025-12-01 13:16:52,460 [INFO] Skipping bill 1939198 - already processed (214/2605)
2025-12-01 13:16:52,460 [INFO] Skipping bill 1939199 - already processed (215/2605)
2025-12-01 13:16:52,460 [INFO] Skipping bill 1908087 - already processed (216/2605)
2025-12-01 13:16:52,460 [INFO] Skipping bill 1908088 - already processed (217/2605)
2025-12-01 13:16:52,461 [INFO] Skipping bill 1939200 - already processed (218/2605)
2025-12-01 13:16:52,461 [INFO] Skipping bill 1939201 - already processed (219/2605)
2025-12-01 13:16:52,461 [INFO] Skipping bill 1908090 - already processed (220/2605)
2025-12-01 13:16:52,461 [INFO] Skipping bill 1939197 - already processed (221/2605)
2025-12-01 13:16:52,461 [INFO] Skipping bill 1908086 - already processed (222/2605)
2025-12-01 13:16:52,461 [INFO] Skipping bill 1651326 - already processed (223/2605)
2025-12-01 13:16:52,461 [INFO] Skipping bill 1747628 - already processed (224/2605)
2025-12-01 13:16:52,461 [INFO] Skipping bill 1871619 - already processed (225/2605)
2025-12-01 13:16:52,461 [INFO] Skipping bill 1874953 - already processed (226/2605)
2025-12-01 13:16:52,461 [INFO] Skipping bill 1831016 - already processed (227/2605)
2025-12-01 13:16:52,461 [INFO] Skipping bill 1846007 - already processed (228/2605)
2025-12-01 13:16:52,462 [INFO] Skipping bill 2026977 - already processed (229/2605)
2025-12-01 13:16:52,462 [INFO] Skipping bill 2042502 - already processed (230/2605)
2025-12-01 13:16:52,462 [INFO] Skipping bill 2042537 - already processed (231/2605)
2025-12-01 13:16:52,462 [INFO] Skipping bill 2042540 - already processed (232/2605)
2025-12-01 13:16:52,462 [INFO] Skipping bill 1907590 - already processed (233/2605)
2025-12-01 13:16:52,462 [INFO] Skipping bill 1907863 - already processed (234/2605)
2025-12-01 13:16:52,462 [INFO] Skipping bill 2022323 - already processed (235/2605)
2025-12-01 13:16:52,462 [INFO] Skipping bill 1947638 - already processed (236/2605)
2025-12-01 13:16:52,462 [INFO] Skipping bill 1965815 - already processed (237/2605)
2025-12-01 13:16:52,462 [INFO] Skipping bill 2042471 - already processed (238/2605)
2025-12-01 13:16:52,462 [INFO] Skipping bill 2017117 - already processed (239/2605)
2025-12-01 13:16:52,462 [INFO] Skipping bill 1973900 - already processed (240/2605)
2025-12-01 13:16:52,462 [INFO] Skipping bill 2020829 - already processed (241/2605)
2025-12-01 13:16:52,462 [INFO] Skipping bill 1718823 - already processed (242/2605)
2025-12-01 13:16:52,462 [INFO] Skipping bill 1709526 - already processed (243/2605)
2025-12-01 13:16:52,462 [INFO] Skipping bill 1709356 - already processed (244/2605)
2025-12-01 13:16:52,462 [INFO] Skipping bill 1839016 - already processed (245/2605)
2025-12-01 13:16:52,462 [INFO] Skipping bill 1859941 - already processed (246/2605)
2025-12-01 13:16:52,462 [INFO] Skipping bill 1839023 - already processed (247/2605)
2025-12-01 13:16:52,462 [INFO] Skipping bill 1860727 - already processed (248/2605)
2025-12-01 13:16:52,462 [INFO] Processing 249/2605: Bill ID 1876979
2025-12-01 13:16:52,958 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:16:52,960 [ERROR] Failed to generate report for bill 1876979: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 150875 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 150875 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:16:53,969 [INFO] Skipping bill 1905069 - already processed (250/2605) 2025-12-01 13:16:53,969 [INFO] Skipping bill 1992824 - already processed (251/2605) 2025-12-01 13:16:53,969 [INFO] Skipping bill 1957876 - already processed (252/2605) 2025-12-01 13:16:53,970 [INFO] Skipping bill 1965500 - already processed (253/2605) 2025-12-01 13:16:53,970 [INFO] Skipping bill 1990151 - already processed (254/2605) 2025-12-01 13:16:53,970 [INFO] Skipping bill 1949174 - already processed (255/2605) 2025-12-01 13:16:53,970 [INFO] Skipping bill 1905038 - already processed (256/2605) 2025-12-01 13:16:53,970 [INFO] Skipping bill 1905159 - already processed (257/2605) 2025-12-01 13:16:53,970 [INFO] Skipping bill 1907650 - already processed (258/2605) 2025-12-01 13:16:53,970 [INFO] Skipping bill 1909616 - already processed (259/2605) 2025-12-01 13:16:53,971 [INFO] Skipping bill 1909665 - already processed (260/2605) 2025-12-01 13:16:53,971 [INFO] Skipping bill 1928585 - already 
processed (261/2605) 2025-12-01 13:16:53,971 [INFO] Skipping bill 1928759 - already processed (262/2605) 2025-12-01 13:16:53,971 [INFO] Skipping bill 1928904 - already processed (263/2605) 2025-12-01 13:16:53,971 [INFO] Skipping bill 1931737 - already processed (264/2605) 2025-12-01 13:16:53,971 [INFO] Skipping bill 1928076 - already processed (265/2605) 2025-12-01 13:16:53,971 [INFO] Skipping bill 1935956 - already processed (266/2605) 2025-12-01 13:16:53,971 [INFO] Skipping bill 1905222 - already processed (267/2605) 2025-12-01 13:16:53,971 [INFO] Skipping bill 1932777 - already processed (268/2605) 2025-12-01 13:16:53,971 [INFO] Skipping bill 1905141 - already processed (269/2605) 2025-12-01 13:16:53,971 [INFO] Processing 270/2605: Bill ID 2034928 2025-12-01 13:16:55,233 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:16:55,235 [ERROR] Failed to generate report for bill 2034928: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 412715 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 412715 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:16:55,282 [INFO] Saved 2605 reports to data/bill_reports.json 2025-12-01 13:16:55,282 [INFO] Progress: 270/2605 - Processed: 0, Skipped: 264, Errors: 6 2025-12-01 13:16:56,287 [INFO] Skipping bill 1820947 - already processed (271/2605) 2025-12-01 13:16:56,288 [INFO] Skipping bill 2038143 - already processed (272/2605) 2025-12-01 13:16:56,289 [INFO] Skipping bill 1946119 - already processed (273/2605) 2025-12-01 13:16:56,289 [INFO] Skipping bill 2038726 - already processed (274/2605) 2025-12-01 13:16:56,289 [INFO] Skipping bill 2015494 - already processed (275/2605) 2025-12-01 13:16:56,289 [INFO] Skipping bill 1754732 - already processed (276/2605) 2025-12-01 13:16:56,289 [INFO] Skipping bill 1716623 - already processed (277/2605) 2025-12-01 13:16:56,289 [INFO] Skipping bill 1723029 - already processed (278/2605) 2025-12-01 13:16:56,289 [INFO] Skipping bill 1749221 - already processed (279/2605) 2025-12-01 13:16:56,289 [INFO] Skipping bill 1756757 - already processed (280/2605) 2025-12-01 13:16:56,291 [INFO] Skipping bill 1722774 - already 
processed (281/2605) 2025-12-01 13:16:56,291 [INFO] Processing 282/2605: Bill ID 1746175 2025-12-01 13:16:57,597 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:16:57,602 [ERROR] Failed to generate report for bill 1746175: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 482085 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... 
**kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... **kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return 
self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 482085 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:16:58,613 [INFO] Skipping bill 1749049 - already processed (283/2605) 2025-12-01 13:16:58,613 [INFO] Skipping bill 1799517 - already processed (284/2605) 2025-12-01 13:16:58,614 [INFO] Skipping bill 1799058 - already processed (285/2605) 2025-12-01 13:16:58,614 [INFO] Skipping bill 1792427 - already processed (286/2605) 2025-12-01 13:16:58,614 [INFO] Skipping bill 1791537 - already processed (287/2605) 2025-12-01 13:16:58,614 [INFO] Skipping bill 1793699 - already processed (288/2605) 2025-12-01 13:16:58,614 [INFO] Skipping bill 1784035 - already processed (289/2605) 2025-12-01 13:16:58,614 [INFO] Skipping bill 1789608 - already processed (290/2605) 2025-12-01 13:16:58,614 [INFO] Skipping bill 1797287 - already processed (291/2605) 2025-12-01 13:16:58,615 [INFO] Skipping bill 1799146 - already processed (292/2605) 2025-12-01 13:16:58,615 [INFO] Skipping bill 1799256 - already processed (293/2605) 2025-12-01 13:16:58,615 [INFO] Skipping bill 1799530 - already 
processed (294/2605) 2025-12-01 13:16:58,615 [INFO] Skipping bill 1799073 - already processed (295/2605) 2025-12-01 13:16:58,615 [INFO] Skipping bill 1798525 - already processed (296/2605) 2025-12-01 13:16:58,615 [INFO] Skipping bill 1812862 - already processed (297/2605) 2025-12-01 13:16:58,615 [INFO] Skipping bill 1799556 - already processed (298/2605) 2025-12-01 13:16:58,615 [INFO] Skipping bill 1793796 - already processed (299/2605) 2025-12-01 13:16:58,616 [INFO] Skipping bill 1840899 - already processed (300/2605) 2025-12-01 13:16:58,616 [INFO] Skipping bill 1849855 - already processed (301/2605) 2025-12-01 13:16:58,616 [INFO] Skipping bill 1796581 - already processed (302/2605) 2025-12-01 13:16:58,616 [INFO] Skipping bill 1785974 - already processed (303/2605) 2025-12-01 13:16:58,616 [INFO] Skipping bill 1799599 - already processed (304/2605) 2025-12-01 13:16:58,616 [INFO] Skipping bill 1799188 - already processed (305/2605) 2025-12-01 13:16:58,616 [INFO] Skipping bill 1834738 - already processed (306/2605) 2025-12-01 13:16:58,616 [INFO] Skipping bill 1799528 - already processed (307/2605) 2025-12-01 13:16:58,616 [INFO] Processing 308/2605: Bill ID 1829539 2025-12-01 13:16:59,920 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:16:59,922 [ERROR] Failed to generate report for bill 1829539: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 487138 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 487138 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:17:00,931 [INFO] Skipping bill 1953506 - already processed (309/2605) 2025-12-01 13:17:00,932 [INFO] Skipping bill 1969171 - already processed (310/2605) 2025-12-01 13:17:00,932 [INFO] Skipping bill 1963529 - already processed (311/2605) 2025-12-01 13:17:00,932 [INFO] Skipping bill 1973172 - already processed (312/2605) 2025-12-01 13:17:00,932 [INFO] Skipping bill 1977164 - already processed (313/2605) 2025-12-01 13:17:00,932 [INFO] Skipping bill 1984764 - already processed (314/2605) 2025-12-01 13:17:00,933 [INFO] Skipping bill 1988421 - already processed (315/2605) 2025-12-01 13:17:00,933 [INFO] Skipping bill 1963407 - already processed (316/2605) 2025-12-01 13:17:00,933 [INFO] Skipping bill 1977647 - already processed (317/2605) 2025-12-01 13:17:00,933 [INFO] Skipping bill 1985537 - already processed (318/2605) 2025-12-01 13:17:00,933 [INFO] Skipping bill 1988809 - already processed (319/2605) 2025-12-01 13:17:00,933 [INFO] Skipping bill 1989241 - already processed (320/2605) 2025-12-01 13:17:00,934 [INFO] Skipping bill 1980688 - already 
processed (321/2605) 2025-12-01 13:17:00,935 [INFO] Skipping bill 1985490 - already processed (322/2605) 2025-12-01 13:17:00,935 [INFO] Skipping bill 1987236 - already processed (323/2605) 2025-12-01 13:17:00,935 [INFO] Skipping bill 2009168 - already processed (324/2605) 2025-12-01 13:17:00,935 [INFO] Skipping bill 1985684 - already processed (325/2605) 2025-12-01 13:17:00,936 [INFO] Skipping bill 1982957 - already processed (326/2605) 2025-12-01 13:17:00,936 [INFO] Skipping bill 2009660 - already processed (327/2605) 2025-12-01 13:17:00,936 [INFO] Skipping bill 1987290 - already processed (328/2605) 2025-12-01 13:17:00,936 [INFO] Skipping bill 2021527 - already processed (329/2605) 2025-12-01 13:17:00,936 [INFO] Skipping bill 1984006 - already processed (330/2605) 2025-12-01 13:17:00,936 [INFO] Skipping bill 1944378 - already processed (331/2605) 2025-12-01 13:17:00,936 [INFO] Processing 332/2605: Bill ID 2016312 2025-12-01 13:17:02,292 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:17:02,294 [ERROR] Failed to generate report for bill 2016312: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 508553 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 508553 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:17:03,303 [INFO] Skipping bill 1975511 - already processed (333/2605) 2025-12-01 13:17:03,304 [INFO] Skipping bill 1807866 - already processed (334/2605) 2025-12-01 13:17:03,304 [INFO] Skipping bill 1825040 - already processed (335/2605) 2025-12-01 13:17:03,304 [INFO] Skipping bill 1824663 - already processed (336/2605) 2025-12-01 13:17:03,304 [INFO] Skipping bill 1827759 - already processed (337/2605) 2025-12-01 13:17:03,304 [INFO] Skipping bill 1807849 - already processed (338/2605) 2025-12-01 13:17:03,305 [INFO] Skipping bill 1852469 - already processed (339/2605) 2025-12-01 13:17:03,305 [INFO] Skipping bill 1724818 - already processed (340/2605) 2025-12-01 13:17:03,305 [INFO] Skipping bill 1827801 - already processed (341/2605) 2025-12-01 13:17:03,305 [INFO] Skipping bill 1842042 - already processed (342/2605) 2025-12-01 13:17:03,305 [INFO] Skipping bill 1800509 - already processed (343/2605) 2025-12-01 13:17:03,305 [INFO] Skipping bill 1829048 - already processed (344/2605) 2025-12-01 13:17:03,306 [INFO] Skipping bill 1691393 - already 
processed (345/2605) 2025-12-01 13:17:03,306 [INFO] Skipping bill 1684843 - already processed (346/2605) 2025-12-01 13:17:03,306 [INFO] Skipping bill 1945161 - already processed (347/2605) 2025-12-01 13:17:03,306 [INFO] Skipping bill 1947679 - already processed (348/2605) 2025-12-01 13:17:03,306 [INFO] Skipping bill 1943273 - already processed (349/2605) 2025-12-01 13:17:03,306 [INFO] Skipping bill 1919150 - already processed (350/2605) 2025-12-01 13:17:03,306 [INFO] Skipping bill 2012228 - already processed (351/2605) 2025-12-01 13:17:03,307 [INFO] Skipping bill 1990355 - already processed (352/2605) 2025-12-01 13:17:03,307 [INFO] Skipping bill 1960995 - already processed (353/2605) 2025-12-01 13:17:03,307 [INFO] Skipping bill 1968119 - already processed (354/2605) 2025-12-01 13:17:03,307 [INFO] Skipping bill 2006978 - already processed (355/2605) 2025-12-01 13:17:03,307 [INFO] Skipping bill 1974144 - already processed (356/2605) 2025-12-01 13:17:03,307 [INFO] Skipping bill 1974243 - already processed (357/2605) 2025-12-01 13:17:03,307 [INFO] Skipping bill 1974425 - already processed (358/2605) 2025-12-01 13:17:03,307 [INFO] Skipping bill 2016144 - already processed (359/2605) 2025-12-01 13:17:03,307 [INFO] Skipping bill 1974177 - already processed (360/2605) 2025-12-01 13:17:03,307 [INFO] Skipping bill 1974222 - already processed (361/2605) 2025-12-01 13:17:03,307 [INFO] Skipping bill 1974239 - already processed (362/2605) 2025-12-01 13:17:03,307 [INFO] Skipping bill 1974292 - already processed (363/2605) 2025-12-01 13:17:03,307 [INFO] Skipping bill 1974356 - already processed (364/2605) 2025-12-01 13:17:03,307 [INFO] Skipping bill 1974381 - already processed (365/2605) 2025-12-01 13:17:03,307 [INFO] Skipping bill 1974418 - already processed (366/2605) 2025-12-01 13:17:03,307 [INFO] Skipping bill 1990318 - already processed (367/2605) 2025-12-01 13:17:03,307 [INFO] Skipping bill 1987837 - already processed (368/2605) 2025-12-01 13:17:03,307 [INFO] Skipping bill 
1974421 - already processed (369/2605) 2025-12-01 13:17:03,307 [INFO] Skipping bill 1982057 - already processed (370/2605) 2025-12-01 13:17:03,307 [INFO] Skipping bill 1968164 - already processed (371/2605) 2025-12-01 13:17:03,307 [INFO] Skipping bill 1979990 - already processed (372/2605) 2025-12-01 13:17:03,307 [INFO] Skipping bill 1961023 - already processed (373/2605) 2025-12-01 13:17:03,307 [INFO] Skipping bill 1970366 - already processed (374/2605) 2025-12-01 13:17:03,307 [INFO] Skipping bill 1976266 - already processed (375/2605) 2025-12-01 13:17:03,307 [INFO] Skipping bill 1735435 - already processed (376/2605) 2025-12-01 13:17:03,307 [INFO] Skipping bill 1735103 - already processed (377/2605) 2025-12-01 13:17:03,307 [INFO] Skipping bill 1735239 - already processed (378/2605) 2025-12-01 13:17:03,307 [INFO] Skipping bill 1676639 - already processed (379/2605) 2025-12-01 13:17:03,307 [INFO] Skipping bill 1822936 - already processed (380/2605) 2025-12-01 13:17:03,307 [INFO] Skipping bill 1824099 - already processed (381/2605) 2025-12-01 13:17:03,307 [INFO] Skipping bill 1823066 - already processed (382/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1821100 - already processed (383/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1821376 - already processed (384/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1861884 - already processed (385/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1862091 - already processed (386/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1824408 - already processed (387/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1823094 - already processed (388/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1859976 - already processed (389/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1860020 - already processed (390/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1822457 - already processed (391/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1823240 - already processed (392/2605) 2025-12-01 13:17:03,308 
[INFO] Skipping bill 1822425 - already processed (393/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1823305 - already processed (394/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1816605 - already processed (395/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1822519 - already processed (396/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1822760 - already processed (397/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1821542 - already processed (398/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1862395 - already processed (399/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1862180 - already processed (400/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1820992 - already processed (401/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1822908 - already processed (402/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1816124 - already processed (403/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1826161 - already processed (404/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1822451 - already processed (405/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1823328 - already processed (406/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1860844 - already processed (407/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1819671 - already processed (408/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1815658 - already processed (409/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1929168 - already processed (410/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1939103 - already processed (411/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1939150 - already processed (412/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1924410 - already processed (413/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1929804 - already processed (414/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1929561 - already processed (415/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1925992 - already processed (416/2605) 
2025-12-01 13:17:03,308 [INFO] Skipping bill 1928926 - already processed (417/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1931961 - already processed (418/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1929636 - already processed (419/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1909994 - already processed (420/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1928408 - already processed (421/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1928598 - already processed (422/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1994243 - already processed (423/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1994303 - already processed (424/2605) 2025-12-01 13:17:03,308 [INFO] Skipping bill 1929659 - already processed (425/2605) 2025-12-01 13:17:03,309 [INFO] Skipping bill 1932766 - already processed (426/2605) 2025-12-01 13:17:03,309 [INFO] Skipping bill 1928570 - already processed (427/2605) 2025-12-01 13:17:03,309 [INFO] Skipping bill 1934608 - already processed (428/2605) 2025-12-01 13:17:03,309 [INFO] Skipping bill 1928364 - already processed (429/2605) 2025-12-01 13:17:03,309 [INFO] Skipping bill 1929760 - already processed (430/2605) 2025-12-01 13:17:03,309 [INFO] Skipping bill 1933272 - already processed (431/2605) 2025-12-01 13:17:03,309 [INFO] Skipping bill 1929496 - already processed (432/2605) 2025-12-01 13:17:03,309 [INFO] Skipping bill 1990347 - already processed (433/2605) 2025-12-01 13:17:03,309 [INFO] Skipping bill 1995251 - already processed (434/2605) 2025-12-01 13:17:03,309 [INFO] Skipping bill 1995449 - already processed (435/2605) 2025-12-01 13:17:03,309 [INFO] Skipping bill 1995259 - already processed (436/2605) 2025-12-01 13:17:03,309 [INFO] Skipping bill 1995271 - already processed (437/2605) 2025-12-01 13:17:03,309 [INFO] Skipping bill 1995747 - already processed (438/2605) 2025-12-01 13:17:03,309 [INFO] Skipping bill 1991557 - already processed (439/2605) 2025-12-01 13:17:03,309 [INFO] Skipping bill 1991563 - already 
processed (440/2605) 2025-12-01 13:17:03,309 [INFO] Skipping bill 1995783 - already processed (441/2605) 2025-12-01 13:17:03,309 [INFO] Skipping bill 1929457 - already processed (442/2605) 2025-12-01 13:17:03,309 [INFO] Skipping bill 1915997 - already processed (443/2605) 2025-12-01 13:17:03,309 [INFO] Skipping bill 1933178 - already processed (444/2605) 2025-12-01 13:17:03,309 [INFO] Skipping bill 1992758 - already processed (445/2605) 2025-12-01 13:17:03,309 [INFO] Skipping bill 1993026 - already processed (446/2605) 2025-12-01 13:17:03,309 [INFO] Skipping bill 1995569 - already processed (447/2605) 2025-12-01 13:17:03,309 [INFO] Skipping bill 1992805 - already processed (448/2605) 2025-12-01 13:17:03,309 [INFO] Skipping bill 1995900 - already processed (449/2605) 2025-12-01 13:17:03,309 [INFO] Skipping bill 1993019 - already processed (450/2605) 2025-12-01 13:17:03,309 [INFO] Skipping bill 1847870 - already processed (451/2605) 2025-12-01 13:17:03,310 [INFO] Skipping bill 1812600 - already processed (452/2605) 2025-12-01 13:17:03,310 [INFO] Skipping bill 1848008 - already processed (453/2605) 2025-12-01 13:17:03,310 [INFO] Skipping bill 1825516 - already processed (454/2605) 2025-12-01 13:17:03,310 [INFO] Processing 455/2605: Bill ID 1845026 2025-12-01 13:17:03,828 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:17:03,831 [ERROR] Failed to generate report for bill 1845026: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 153566 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 153566 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:17:04,840 [INFO] Skipping bill 1962312 - already processed (456/2605) 2025-12-01 13:17:04,841 [INFO] Skipping bill 1954011 - already processed (457/2605) 2025-12-01 13:17:04,841 [INFO] Skipping bill 1991380 - already processed (458/2605) 2025-12-01 13:17:04,841 [INFO] Processing 459/2605: Bill ID 2011846 2025-12-01 13:17:05,230 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:17:05,232 [ERROR] Failed to generate report for bill 2011846: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 147671 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 147671 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:17:06,240 [INFO] Skipping bill 1838778 - already processed (460/2605) 2025-12-01 13:17:06,241 [INFO] Skipping bill 1713666 - already processed (461/2605) 2025-12-01 13:17:06,241 [INFO] Skipping bill 1837146 - already processed (462/2605) 2025-12-01 13:17:06,241 [INFO] Skipping bill 1842401 - already processed (463/2605) 2025-12-01 13:17:06,241 [INFO] Skipping bill 1838992 - already processed (464/2605) 2025-12-01 13:17:06,242 [INFO] Skipping bill 1840748 - already processed (465/2605) 2025-12-01 13:17:06,242 [INFO] Skipping bill 1841780 - already processed (466/2605) 2025-12-01 13:17:06,242 [INFO] Skipping bill 1831504 - already processed (467/2605) 2025-12-01 13:17:06,242 [INFO] Skipping bill 1832905 - already processed (468/2605) 2025-12-01 13:17:06,242 [INFO] Skipping bill 1843072 - already processed (469/2605) 2025-12-01 13:17:06,242 [INFO] Skipping bill 1839869 - already processed (470/2605) 2025-12-01 13:17:06,242 [INFO] Skipping bill 1814012 - already processed (471/2605) 2025-12-01 13:17:06,242 [INFO] Skipping bill 1842520 - already 
processed (472/2605) 2025-12-01 13:17:06,242 [INFO] Skipping bill 1835262 - already processed (473/2605) 2025-12-01 13:17:06,242 [INFO] Skipping bill 1843020 - already processed (474/2605) 2025-12-01 13:17:06,242 [INFO] Skipping bill 1878243 - already processed (475/2605) 2025-12-01 13:17:06,242 [INFO] Skipping bill 1893072 - already processed (476/2605) 2025-12-01 13:17:06,242 [INFO] Skipping bill 1713755 - already processed (477/2605) 2025-12-01 13:17:06,242 [INFO] Skipping bill 1842316 - already processed (478/2605) 2025-12-01 13:17:06,242 [INFO] Skipping bill 1838852 - already processed (479/2605) 2025-12-01 13:17:06,243 [INFO] Skipping bill 1838748 - already processed (480/2605) 2025-12-01 13:17:06,243 [INFO] Skipping bill 1635340 - already processed (481/2605) 2025-12-01 13:17:06,243 [INFO] Skipping bill 1713127 - already processed (482/2605) 2025-12-01 13:17:06,243 [INFO] Skipping bill 1818470 - already processed (483/2605) 2025-12-01 13:17:06,243 [INFO] Skipping bill 1837189 - already processed (484/2605) 2025-12-01 13:17:06,243 [INFO] Skipping bill 1635556 - already processed (485/2605) 2025-12-01 13:17:06,243 [INFO] Skipping bill 1692465 - already processed (486/2605) 2025-12-01 13:17:06,243 [INFO] Skipping bill 1843326 - already processed (487/2605) 2025-12-01 13:17:06,243 [INFO] Skipping bill 1822203 - already processed (488/2605) 2025-12-01 13:17:06,243 [INFO] Skipping bill 1838434 - already processed (489/2605) 2025-12-01 13:17:06,243 [INFO] Skipping bill 1714042 - already processed (490/2605) 2025-12-01 13:17:06,243 [INFO] Skipping bill 1840824 - already processed (491/2605) 2025-12-01 13:17:06,243 [INFO] Skipping bill 1810043 - already processed (492/2605) 2025-12-01 13:17:06,243 [INFO] Skipping bill 1762665 - already processed (493/2605) 2025-12-01 13:17:06,243 [INFO] Skipping bill 1831619 - already processed (494/2605) 2025-12-01 13:17:06,244 [INFO] Skipping bill 1712988 - already processed (495/2605) 2025-12-01 13:17:06,244 [INFO] Skipping bill 
1704077 - already processed (496/2605) 2025-12-01 13:17:06,244 [INFO] Skipping bill 1712903 - already processed (497/2605) 2025-12-01 13:17:06,244 [INFO] Skipping bill 1818714 - already processed (498/2605) 2025-12-01 13:17:06,244 [INFO] Skipping bill 1842743 - already processed (499/2605) 2025-12-01 13:17:06,244 [INFO] Processing 500/2605: Bill ID 1838518 2025-12-01 13:17:08,589 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:17:08,592 [ERROR] Failed to generate report for bill 1838518: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 853564 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... 
**kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... **kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return 
self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 853564 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:17:08,670 [INFO] Saved 2605 reports to data/bill_reports.json 2025-12-01 13:17:08,676 [INFO] Progress: 500/2605 - Processed: 0, Skipped: 488, Errors: 12 2025-12-01 13:17:09,681 [INFO] Processing 501/2605: Bill ID 1794181 2025-12-01 13:17:10,215 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:17:10,218 [ERROR] Failed to generate report for bill 1794181: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 151032 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 151032 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:17:11,228 [INFO] Processing 502/2605: Bill ID 1708593 2025-12-01 13:17:11,718 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:17:11,721 [ERROR] Failed to generate report for bill 1708593: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 139146 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 139146 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:17:12,731 [INFO] Processing 503/2605: Bill ID 1704148
2025-12-01 13:17:15,000 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:17:15,004 [ERROR] Failed to generate report for bill 1704148: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 823023 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 823023 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:17:16,012 [INFO] Processing 504/2605: Bill ID 1704278
2025-12-01 13:17:18,378 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:17:18,381 [ERROR] Failed to generate report for bill 1704278: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 823015 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 823015 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:17:19,390 [INFO] Skipping bill 1714051 - already processed (505/2605)
2025-12-01 13:17:19,391 [INFO] Skipping bill 1951980 - already processed (506/2605)
2025-12-01 13:17:19,391 [INFO] Skipping bill 1942546 - already processed (507/2605)
2025-12-01 13:17:19,391 [INFO] Skipping bill 1954662 - already processed (508/2605)
2025-12-01 13:17:19,392 [INFO] Skipping bill 1962278 - already processed (509/2605)
2025-12-01 13:17:19,392 [INFO] Skipping bill 1959604 - already processed (510/2605)
2025-12-01 13:17:19,392 [INFO] Skipping bill 1961963 - already processed (511/2605)
2025-12-01 13:17:19,392 [INFO] Skipping bill 1906420 - already processed (512/2605)
2025-12-01 13:17:19,392 [INFO] Skipping bill 1959700 - already processed (513/2605)
2025-12-01 13:17:19,392 [INFO] Skipping bill 1960223 - already processed (514/2605)
2025-12-01 13:17:19,392 [INFO] Skipping bill 1955104 - already processed (515/2605)
2025-12-01 13:17:19,393 [INFO] Skipping bill 1962582 - already processed (516/2605)
2025-12-01 13:17:19,393 [INFO] Skipping bill 1945671 - already
processed (517/2605)
2025-12-01 13:17:19,393 [INFO] Skipping bill 1927329 - already processed (518/2605)
2025-12-01 13:17:19,393 [INFO] Skipping bill 1950703 - already processed (519/2605)
2025-12-01 13:17:19,393 [INFO] Skipping bill 1962488 - already processed (520/2605)
2025-12-01 13:17:19,393 [INFO] Skipping bill 1945525 - already processed (521/2605)
2025-12-01 13:17:19,393 [INFO] Skipping bill 1958920 - already processed (522/2605)
2025-12-01 13:17:19,394 [INFO] Skipping bill 1962097 - already processed (523/2605)
2025-12-01 13:17:19,394 [INFO] Skipping bill 1963192 - already processed (524/2605)
2025-12-01 13:17:19,394 [INFO] Skipping bill 1947169 - already processed (525/2605)
2025-12-01 13:17:19,394 [INFO] Skipping bill 1961929 - already processed (526/2605)
2025-12-01 13:17:19,394 [INFO] Skipping bill 1962057 - already processed (527/2605)
2025-12-01 13:17:19,394 [INFO] Skipping bill 1973797 - already processed (528/2605)
2025-12-01 13:17:19,394 [INFO] Skipping bill 1963087 - already processed (529/2605)
2025-12-01 13:17:19,394 [INFO] Skipping bill 1940139 - already processed (530/2605)
2025-12-01 13:17:19,395 [INFO] Skipping bill 1941211 - already processed (531/2605)
2025-12-01 13:17:19,395 [INFO] Skipping bill 1906434 - already processed (532/2605)
2025-12-01 13:17:19,395 [INFO] Skipping bill 1963178 - already processed (533/2605)
2025-12-01 13:17:19,395 [INFO] Skipping bill 1954188 - already processed (534/2605)
2025-12-01 13:17:19,395 [INFO] Skipping bill 1954475 - already processed (535/2605)
2025-12-01 13:17:19,395 [INFO] Skipping bill 1957381 - already processed (536/2605)
2025-12-01 13:17:19,396 [INFO] Skipping bill 1962329 - already processed (537/2605)
2025-12-01 13:17:19,396 [INFO] Skipping bill 1962675 - already processed (538/2605)
2025-12-01 13:17:19,396 [INFO] Skipping bill 1935756 - already processed (539/2605)
2025-12-01 13:17:19,396 [INFO] Skipping bill 1945467 - already processed (540/2605)
2025-12-01 13:17:19,396 [INFO] Skipping bill 1907066 - already processed (541/2605)
2025-12-01 13:17:19,396 [INFO] Skipping bill 1985138 - already processed (542/2605)
2025-12-01 13:17:19,397 [INFO] Skipping bill 1961501 - already processed (543/2605)
2025-12-01 13:17:19,397 [INFO] Skipping bill 1962291 - already processed (544/2605)
2025-12-01 13:17:19,397 [INFO] Skipping bill 2034790 - already processed (545/2605)
2025-12-01 13:17:19,397 [INFO] Skipping bill 2047690 - already processed (546/2605)
2025-12-01 13:17:19,397 [INFO] Skipping bill 2052256 - already processed (547/2605)
2025-12-01 13:17:19,397 [INFO] Skipping bill 1962885 - already processed (548/2605)
2025-12-01 13:17:19,397 [INFO] Skipping bill 1960413 - already processed (549/2605)
2025-12-01 13:17:19,397 [INFO] Skipping bill 1959956 - already processed (550/2605)
2025-12-01 13:17:19,398 [INFO] Processing 551/2605: Bill ID 1962986
2025-12-01 13:17:22,371 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:17:22,374 [ERROR] Failed to generate report for bill 1962986: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1167379 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1167379 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:17:23,384 [INFO] Processing 552/2605: Bill ID 1960510
2025-12-01 13:17:24,013 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:17:24,014 [ERROR] Failed to generate report for bill 1960510: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 156228 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 156228 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:17:25,024 [INFO] Skipping bill 1962952 - already processed (553/2605)
2025-12-01 13:17:25,025 [INFO] Processing 554/2605: Bill ID 1645841
2025-12-01 13:17:25,689 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:17:25,691 [ERROR] Failed to generate report for bill 1645841: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 162324 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 162324 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:17:26,701 [INFO] Skipping bill 1799709 - already processed (555/2605)
2025-12-01 13:17:26,702 [INFO] Skipping bill 1797422 - already processed (556/2605)
2025-12-01 13:17:26,702 [INFO] Skipping bill 1801018 - already processed (557/2605)
2025-12-01 13:17:26,702 [INFO] Skipping bill 1799688 - already processed (558/2605)
2025-12-01 13:17:26,702 [INFO] Skipping bill 1909475 - already processed (559/2605)
2025-12-01 13:17:26,703 [INFO] Skipping bill 1921138 - already processed (560/2605)
2025-12-01 13:17:26,703 [INFO] Skipping bill 1917007 - already processed (561/2605)
2025-12-01 13:17:26,703 [INFO] Skipping bill 1921879 - already processed (562/2605)
2025-12-01 13:17:26,703 [INFO] Skipping bill 1915249 - already processed (563/2605)
2025-12-01 13:17:26,703 [INFO] Skipping bill 1912345 - already processed (564/2605)
2025-12-01 13:17:26,703 [INFO] Processing 565/2605: Bill ID 1897676
2025-12-01 13:17:27,229 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:17:27,232 [ERROR] Failed to
generate report for bill 1897676: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 165130 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 165130 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:17:28,236 [INFO] Skipping bill 1847772 - already processed (566/2605)
2025-12-01 13:17:28,236 [INFO] Skipping bill 1825218 - already processed (567/2605)
2025-12-01 13:17:28,236 [INFO] Skipping bill 1839463 - already processed (568/2605)
2025-12-01 13:17:28,236 [INFO] Skipping bill 1665194 - already processed (569/2605)
2025-12-01 13:17:28,237 [INFO] Skipping bill 1708118 - already processed (570/2605)
2025-12-01 13:17:28,237 [INFO] Skipping bill 1802090 - already processed (571/2605)
2025-12-01 13:17:28,237 [INFO] Skipping bill 1823725 - already processed (572/2605)
2025-12-01 13:17:28,237 [INFO] Skipping bill 1845657 - already processed (573/2605)
2025-12-01 13:17:28,237 [INFO] Skipping bill 1846612 - already processed (574/2605)
2025-12-01 13:17:28,237 [INFO] Skipping bill 1870077 - already processed (575/2605)
2025-12-01 13:17:28,237 [INFO] Skipping bill 1870897 - already processed (576/2605)
2025-12-01 13:17:28,237 [INFO] Skipping bill 1761153 - already processed (577/2605)
2025-12-01 13:17:28,237 [INFO] Skipping bill 1760883 - already
processed (578/2605) 2025-12-01 13:17:28,237 [INFO] Skipping bill 1752922 - already processed (579/2605) 2025-12-01 13:17:28,237 [INFO] Skipping bill 1873484 - already processed (580/2605) 2025-12-01 13:17:28,238 [INFO] Skipping bill 1990915 - already processed (581/2605) 2025-12-01 13:17:28,238 [INFO] Skipping bill 1969038 - already processed (582/2605) 2025-12-01 13:17:28,238 [INFO] Skipping bill 1993838 - already processed (583/2605) 2025-12-01 13:17:28,238 [INFO] Skipping bill 1958795 - already processed (584/2605) 2025-12-01 13:17:28,238 [INFO] Skipping bill 1977734 - already processed (585/2605) 2025-12-01 13:17:28,238 [INFO] Skipping bill 1937592 - already processed (586/2605) 2025-12-01 13:17:28,238 [INFO] Skipping bill 1963811 - already processed (587/2605) 2025-12-01 13:17:28,238 [INFO] Skipping bill 2029033 - already processed (588/2605) 2025-12-01 13:17:28,238 [INFO] Skipping bill 2026836 - already processed (589/2605) 2025-12-01 13:17:28,238 [INFO] Skipping bill 2027180 - already processed (590/2605) 2025-12-01 13:17:28,238 [INFO] Skipping bill 2021349 - already processed (591/2605) 2025-12-01 13:17:28,238 [INFO] Skipping bill 2030059 - already processed (592/2605) 2025-12-01 13:17:28,238 [INFO] Skipping bill 1823829 - already processed (593/2605) 2025-12-01 13:17:28,238 [INFO] Skipping bill 1824037 - already processed (594/2605) 2025-12-01 13:17:28,238 [INFO] Skipping bill 1850989 - already processed (595/2605) 2025-12-01 13:17:28,238 [INFO] Skipping bill 1826921 - already processed (596/2605) 2025-12-01 13:17:28,239 [INFO] Skipping bill 1690087 - already processed (597/2605) 2025-12-01 13:17:28,239 [INFO] Processing 598/2605: Bill ID 1693524 2025-12-01 13:17:28,925 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:17:28,928 [ERROR] Failed to generate report for bill 1693524: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. 
However, your messages resulted in 225348 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 225348 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:17:29,938 [INFO] Skipping bill 1665637 - already processed (599/2605) 2025-12-01 13:17:29,939 [INFO] Skipping bill 1682635 - already processed (600/2605) 2025-12-01 13:17:29,940 [INFO] Processing 601/2605: Bill ID 1692213 2025-12-01 13:17:30,666 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:17:30,669 [ERROR] Failed to generate report for bill 1692213: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 225670 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:17:31,678 [INFO] Processing 602/2605: Bill ID 1846626
2025-12-01 13:17:32,511 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:17:32,514 [ERROR] Failed to generate report for bill 1846626: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 225565 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:17:33,524 [INFO] Processing 603/2605: Bill ID 1846675
2025-12-01 13:17:34,229 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:17:34,230 [ERROR] Failed to generate report for bill 1846675: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 225290 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:17:35,239 [INFO] Skipping bill 1653927 - already processed (604/2605)
2025-12-01 13:17:35,239 [INFO] Skipping bill 1959326 - already processed (605/2605)
2025-12-01 13:17:35,239 [INFO] Skipping bill 1948632 - already processed (606/2605)
2025-12-01 13:17:35,239 [INFO] Skipping bill 1955060 - already processed (607/2605)
2025-12-01 13:17:35,239 [INFO] Skipping bill 1946546 - already processed (608/2605)
2025-12-01 13:17:35,239 [INFO] Processing 609/2605: Bill ID 1916487
2025-12-01 13:17:36,195 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:17:36,196 [ERROR] Failed to generate report for bill 1916487: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 242611 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:17:37,205 [INFO] Skipping bill 1949165 - already processed (610/2605)
2025-12-01 13:17:37,206 [INFO] Processing 611/2605: Bill ID 1938020
2025-12-01 13:17:38,141 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:17:38,144 [ERROR] Failed to generate report for bill 1938020: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 238559 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:17:39,157 [INFO] Processing 612/2605: Bill ID 1937464
2025-12-01 13:17:39,905 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:17:39,908 [ERROR] Failed to generate report for bill 1937464: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 238890 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:17:40,920 [INFO] Processing 613/2605: Bill ID 1713253
2025-12-01 13:17:41,623 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:17:41,626 [ERROR] Failed to generate report for bill 1713253: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 176351 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:17:42,637 [INFO] Skipping bill 1804283 - already processed (614/2605)
2025-12-01 13:17:42,637 [INFO] Skipping bill 1795473 - already processed (615/2605)
2025-12-01 13:17:42,637 [INFO] Skipping bill 1855405 - already processed (616/2605)
2025-12-01 13:17:42,637 [INFO] Skipping bill 1848823 - already processed (617/2605)
2025-12-01 13:17:42,637 [INFO] Skipping bill 1842483 - already processed (618/2605)
2025-12-01 13:17:42,637 [INFO] Skipping bill 1854786 - already processed (619/2605)
2025-12-01 13:17:42,637 [INFO] Skipping bill 1795485 - already processed (620/2605)
2025-12-01 13:17:42,637 [INFO] Skipping bill 1854739 - already processed (621/2605)
2025-12-01 13:17:42,637 [INFO] Skipping bill 1799043 - already processed (622/2605)
2025-12-01 13:17:42,637 [INFO] Skipping bill 1974284 - already processed (623/2605)
2025-12-01 13:17:42,637 [INFO] Skipping bill 1974163 - already processed (624/2605)
2025-12-01 13:17:42,637 [INFO] Skipping bill 1994222 - already processed (625/2605)
2025-12-01 13:17:42,637 [INFO] Skipping bill 1970124 - already processed (626/2605)
2025-12-01 13:17:42,637 [INFO] Skipping bill 1908054 - already processed (627/2605)
2025-12-01 13:17:42,637 [INFO] Skipping bill 1904666 - already processed (628/2605)
2025-12-01 13:17:42,638 [INFO] Skipping bill 1975714 - already processed (629/2605)
2025-12-01 13:17:42,638 [INFO] Skipping bill 1974214 - already processed (630/2605)
2025-12-01 13:17:42,638 [INFO] Skipping bill 1765786 - already processed (631/2605)
2025-12-01 13:17:42,638 [INFO] Skipping bill 1751941 - already processed (632/2605)
2025-12-01 13:17:42,638 [INFO] Skipping bill 1747213 - already processed (633/2605)
2025-12-01 13:17:42,638 [INFO] Skipping bill 1872579 - already processed (634/2605)
2025-12-01 13:17:42,638 [INFO] Skipping bill 1831630 - already processed (635/2605)
2025-12-01 13:17:42,638 [INFO] Skipping bill 1869553 - already processed (636/2605)
2025-12-01 13:17:42,638 [INFO] Skipping bill 1856482 - already processed (637/2605)
2025-12-01 13:17:42,638 [INFO] Skipping bill 1877177 - already processed (638/2605)
2025-12-01 13:17:42,638 [INFO] Skipping bill 1856535 - already processed (639/2605)
2025-12-01 13:17:42,638 [INFO] Processing 640/2605: Bill ID 1856106
2025-12-01 13:17:43,067 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:17:43,069 [ERROR] Failed to generate report for bill 1856106: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 139494 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:17:43,120 [INFO] Saved 2605 reports to data/bill_reports.json
2025-12-01 13:17:43,120 [INFO] Progress: 640/2605 - Processed: 0, Skipped: 611, Errors: 29
2025-12-01 13:17:44,125 [INFO] Skipping bill 2036140 - already processed (641/2605)
2025-12-01 13:17:44,126 [INFO] Skipping bill 2013841 - already processed (642/2605)
2025-12-01 13:17:44,126 [INFO] Skipping bill 2036152 - already processed (643/2605)
2025-12-01 13:17:44,126 [INFO] Skipping bill 2035054 - already processed (644/2605)
2025-12-01 13:17:44,126 [INFO] Skipping bill 2020836 - already processed (645/2605)
2025-12-01 13:17:44,126 [INFO] Skipping bill 2034414 - already processed (646/2605)
2025-12-01 13:17:44,126 [INFO] Skipping bill 2036147 - already processed (647/2605)
2025-12-01 13:17:44,127 [INFO] Skipping bill 2017245 - already processed (648/2605)
2025-12-01 13:17:44,127 [INFO] Processing 649/2605: Bill ID 2020366
2025-12-01 13:17:44,696 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:17:44,699 [ERROR] Failed to
generate report for bill 2020366: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 138834 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:17:45,709 [INFO] Skipping bill 1754734 - already processed (650/2605)
2025-12-01 13:17:45,710 [INFO] Skipping bill 1766525 - already processed (651/2605)
2025-12-01 13:17:45,710 [INFO] Skipping bill 1993701 - already processed (652/2605)
2025-12-01 13:17:45,710 [INFO] Skipping bill 2024454 - already processed (653/2605)
2025-12-01 13:17:45,710 [INFO] Skipping bill 1989654 - already processed (654/2605)
2025-12-01 13:17:45,710 [INFO] Skipping bill 1923257 - already processed (655/2605)
2025-12-01 13:17:45,710 [INFO] Skipping bill 2012930 - already processed (656/2605)
2025-12-01 13:17:45,710 [INFO] Skipping bill 2022043 - already processed (657/2605)
2025-12-01 13:17:45,710 [INFO] Skipping bill 1977885 - already processed (658/2605)
2025-12-01 13:17:45,710 [INFO] Skipping bill 1903898 - already processed (659/2605)
2025-12-01 13:17:45,710 [INFO] Skipping bill 2022085 - already processed (660/2605)
2025-12-01 13:17:45,710 [INFO] Skipping bill 2024471 - already processed (661/2605)
2025-12-01 13:17:45,710 [INFO] Skipping bill 1962449 - already processed (662/2605)
2025-12-01 13:17:45,710 [INFO] Skipping bill 1948585 - already processed (663/2605)
2025-12-01 13:17:45,711 [INFO] Skipping bill 2027763 - already processed (664/2605)
2025-12-01 13:17:45,711 [INFO] Skipping bill 2038183 - already processed (665/2605)
2025-12-01 13:17:45,711 [INFO] Skipping bill 2012908 - already processed (666/2605)
2025-12-01 13:17:45,711 [INFO] Skipping bill 1703457 - already processed (667/2605)
2025-12-01 13:17:45,711 [INFO] Skipping bill 1703326 - already processed (668/2605)
2025-12-01 13:17:45,711 [INFO] Skipping bill 1703583 - already processed (669/2605)
2025-12-01 13:17:45,711 [INFO] Skipping bill 1703488 - already processed (670/2605)
2025-12-01 13:17:45,711 [INFO] Skipping bill 1694229 - already processed (671/2605)
2025-12-01 13:17:45,711 [INFO] Skipping bill 1697293 - already processed (672/2605)
2025-12-01 13:17:45,711 [INFO] Skipping bill 1694179 - already processed (673/2605)
2025-12-01 13:17:45,711 [INFO] Skipping bill 1707790 - already processed (674/2605)
2025-12-01 13:17:45,711 [INFO] Skipping bill 1691409 - already processed (675/2605)
2025-12-01 13:17:45,712 [INFO] Skipping bill 1679149 - already processed (676/2605)
2025-12-01 13:17:45,712 [INFO] Skipping bill 1697468 - already processed (677/2605)
2025-12-01 13:17:45,712 [INFO] Skipping bill 1703148 - already processed (678/2605)
2025-12-01 13:17:45,712 [INFO] Skipping bill 1835739 - already processed (679/2605)
2025-12-01 13:17:45,712 [INFO] Skipping bill 1840482 - already processed (680/2605)
2025-12-01 13:17:45,712 [INFO] Skipping bill 1842215 - already processed (681/2605)
2025-12-01 13:17:45,712 [INFO] Skipping bill 1838035 - already processed (682/2605)
2025-12-01 13:17:45,712 [INFO] Skipping bill 1842106 - already processed (683/2605)
2025-12-01 13:17:45,712 [INFO] Skipping bill 1839236 - already processed (684/2605)
2025-12-01 13:17:45,712 [INFO] Skipping bill 1839142 - already processed (685/2605)
2025-12-01 13:17:45,712 [INFO] Skipping bill 1838028 - already processed (686/2605)
2025-12-01 13:17:45,712 [INFO] Skipping bill 1837867 - already processed (687/2605)
2025-12-01 13:17:45,712 [INFO] Skipping bill 1835606 - already processed (688/2605)
2025-12-01 13:17:45,713 [INFO] Skipping bill 1825025 - already processed (689/2605)
2025-12-01 13:17:45,713 [INFO] Skipping bill 1826297 - already processed (690/2605)
2025-12-01 13:17:45,713 [INFO] Skipping bill 1847549 - already processed (691/2605)
2025-12-01 13:17:45,713 [INFO] Skipping bill 1839307 - already processed (692/2605)
2025-12-01 13:17:45,713 [INFO] Skipping bill 1842129 - already processed (693/2605)
2025-12-01 13:17:45,713 [INFO] Skipping bill 1837909 - already processed (694/2605)
2025-12-01 13:17:45,713 [INFO] Skipping bill 1797714 - already processed (695/2605)
2025-12-01 13:17:45,713 [INFO] Skipping bill 1839204 - already processed (696/2605)
2025-12-01 13:17:45,713 [INFO] Skipping bill 1835710 - already processed (697/2605)
2025-12-01 13:17:45,713 [INFO] Skipping bill 1837838 - already processed (698/2605)
2025-12-01 13:17:45,713 [INFO] Skipping bill 1837893 - already processed (699/2605)
2025-12-01 13:17:45,714 [INFO] Skipping bill 1835695 - already processed (700/2605)
2025-12-01 13:17:45,714 [INFO] Skipping bill 1837995 - already processed (701/2605)
2025-12-01 13:17:45,714 [INFO] Skipping bill 1842172 - already processed (702/2605)
2025-12-01 13:17:45,714 [INFO] Skipping bill 1817737 - already processed (703/2605)
2025-12-01 13:17:45,714 [INFO] Skipping bill 1953268 - already processed (704/2605)
2025-12-01 13:17:45,714 [INFO] Skipping bill 1961326 - already processed (705/2605)
2025-12-01 13:17:45,714 [INFO] Skipping bill 1961123 - already processed (706/2605)
2025-12-01 13:17:45,714 [INFO] Skipping bill 1953218 - already processed (707/2605)
2025-12-01 13:17:45,714 [INFO] Skipping bill 1945231 - already processed (708/2605)
2025-12-01 13:17:45,714 [INFO] Skipping bill 1949851 - already processed (709/2605)
2025-12-01 13:17:45,714 [INFO] Skipping bill 1945281 - already processed (710/2605)
2025-12-01 13:17:45,714 [INFO] Skipping bill 1945285 - already processed (711/2605)
2025-12-01 13:17:45,714 [INFO] Skipping bill 1949794 - already processed (712/2605)
2025-12-01 13:17:45,714 [INFO] Skipping bill 1949746 - already processed (713/2605)
2025-12-01 13:17:45,714 [INFO] Skipping bill 1949835 - already processed (714/2605)
2025-12-01 13:17:45,714 [INFO] Skipping bill 1961190 - already processed (715/2605)
2025-12-01 13:17:45,715 [INFO] Skipping bill 1953113 - already processed (716/2605)
2025-12-01 13:17:45,715 [INFO] Skipping bill 1936713 - already processed (717/2605)
2025-12-01 13:17:45,715 [INFO] Skipping bill 1939378 - already processed (718/2605)
2025-12-01 13:17:45,715 [INFO] Skipping bill 1909925 - already processed (719/2605)
2025-12-01 13:17:45,715 [INFO] Skipping bill 1961341 - already processed (720/2605)
2025-12-01 13:17:45,715 [INFO] Skipping bill 1922403 - already processed (721/2605)
2025-12-01 13:17:45,715 [INFO] Skipping bill 1899660 - already processed (722/2605)
2025-12-01 13:17:45,715 [INFO] Skipping bill 1961327 - already processed (723/2605)
2025-12-01 13:17:45,715 [INFO] Skipping bill 1953223 - already processed (724/2605)
2025-12-01 13:17:45,715 [INFO] Skipping bill 1953246 - already processed (725/2605)
2025-12-01 13:17:45,715 [INFO] Skipping bill 1955835 - already processed (726/2605)
2025-12-01 13:17:45,715 [INFO] Skipping bill 1933617 - already processed (727/2605)
2025-12-01 13:17:45,715 [INFO] Skipping bill 1945335 - already processed (728/2605)
2025-12-01 13:17:45,715 [INFO] Skipping bill 1961410 - already processed (729/2605)
2025-12-01 13:17:45,715 [INFO] Skipping bill 1926508 - already processed (730/2605)
2025-12-01 13:17:45,715 [INFO] Skipping bill 1943426 - already processed (731/2605)
2025-12-01 13:17:45,715 [INFO] Skipping bill 1949808 - already processed (732/2605)
2025-12-01 13:17:45,715 [INFO] Skipping bill 1949848 - already processed (733/2605)
2025-12-01 13:17:45,715 [INFO] Skipping bill 1947517 - already processed (734/2605)
2025-12-01 13:17:45,716 [INFO] Skipping bill 1945267 - already processed (735/2605)
2025-12-01 13:17:45,716 [INFO] Skipping bill 1961205 - already processed (736/2605)
2025-12-01 13:17:45,716 [INFO] Skipping bill 1953214 - already processed (737/2605)
2025-12-01 13:17:45,716 [INFO] Skipping bill 1943446 - already processed (738/2605)
2025-12-01 13:17:45,716 [INFO] Skipping bill 1973042 - already processed (739/2605)
2025-12-01 13:17:45,716 [INFO] Skipping bill 1961299 - already processed (740/2605)
2025-12-01 13:17:45,716 [INFO] Skipping bill 1933601 - already processed (741/2605)
2025-12-01 13:17:45,716 [INFO] Skipping bill 1933621 - already processed (742/2605)
2025-12-01 13:17:45,716 [INFO] Processing 743/2605: Bill ID 1919287
2025-12-01 13:17:46,128 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:17:46,129 [ERROR] Failed to generate report for bill 1919287: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 128427 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:17:47,138 [INFO] Skipping bill 1933460 - already processed (744/2605)
2025-12-01 13:17:47,139 [INFO] Skipping bill 1933670 - already processed (745/2605)
2025-12-01 13:17:47,139 [INFO] Skipping bill 1922377 - already processed (746/2605)
2025-12-01 13:17:47,139 [INFO] Skipping bill 1735361 - already processed (747/2605)
2025-12-01 13:17:47,140 [INFO] Skipping bill 1742559 - already processed (748/2605)
2025-12-01 13:17:47,140 [INFO] Skipping bill 1775856 - already processed (749/2605)
2025-12-01 13:17:47,140 [INFO] Skipping bill 1738097 - already processed (750/2605)
2025-12-01 13:17:47,140 [INFO] Skipping bill 1794760 - already processed (751/2605)
2025-12-01 13:17:47,140 [INFO] Skipping bill 1736131 - already processed (752/2605)
2025-12-01 13:17:47,140 [INFO] Skipping bill 1885778 - already processed (753/2605)
2025-12-01 13:17:47,140 [INFO] Skipping bill 1808592 - already processed (754/2605)
2025-12-01 13:17:47,141 [INFO] Skipping bill 1878825 - already processed (755/2605)
2025-12-01 13:17:47,141 [INFO] Skipping bill 1884638 - already processed (756/2605)
2025-12-01 13:17:47,141 [INFO] Skipping bill 1738996 - already processed (757/2605)
2025-12-01 13:17:47,141 [INFO] Skipping bill 1878228 - already processed (758/2605)
2025-12-01 13:17:47,141 [INFO] Skipping bill 1872865 - already processed (759/2605)
2025-12-01 13:17:47,141 [INFO] Skipping bill 1881167 - already processed (760/2605)
2025-12-01 13:17:47,141 [INFO] Skipping bill 1881743 - already processed (761/2605)
2025-12-01 13:17:47,141 [INFO] Skipping bill 1852772 - already processed (762/2605)
2025-12-01 13:17:47,141 [INFO] Skipping bill 1884104 - already processed (763/2605)
2025-12-01 13:17:47,141 [INFO] Skipping bill 1738794 - already processed (764/2605)
2025-12-01 13:17:47,142 [INFO] Skipping bill 1893080 - already processed (765/2605)
2025-12-01 13:17:47,142 [INFO] Skipping bill 1881922 - already processed (766/2605)
2025-12-01 13:17:47,142 [INFO] Skipping bill 1883178 - already processed (767/2605)
2025-12-01 13:17:47,142 [INFO] Skipping bill 1881587 - already processed (768/2605)
2025-12-01 13:17:47,142 [INFO] Skipping bill 1884487 - already processed (769/2605)
2025-12-01 13:17:47,142 [INFO] Skipping bill 1859182 - already processed (770/2605)
2025-12-01 13:17:47,142 [INFO] Skipping bill 1866861 - already processed (771/2605)
2025-12-01 13:17:47,142 [INFO] Processing 772/2605: Bill ID 1891836
2025-12-01 13:17:47,663 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:17:47,666 [ERROR] Failed to generate report for bill 1891836: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 144997 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 144997 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:17:48,674 [INFO] Skipping bill 1883738 - already processed (773/2605)
2025-12-01 13:17:48,675 [INFO] Skipping bill 1682652 - already processed (774/2605)
2025-12-01 13:17:48,675 [INFO] Skipping bill 1742464 - already processed (775/2605)
2025-12-01 13:17:48,675 [INFO] Skipping bill 1728366 - already processed (776/2605)
2025-12-01 13:17:48,676 [INFO] Skipping bill 1726524 - already processed (777/2605)
2025-12-01 13:17:48,676 [INFO] Skipping bill 1737208 - already processed (778/2605)
2025-12-01 13:17:48,676 [INFO] Skipping bill 1749398 - already processed (779/2605)
2025-12-01 13:17:48,676 [INFO] Skipping bill 1738008 - already processed (780/2605)
2025-12-01 13:17:48,676 [INFO] Skipping bill 1735894 - already processed (781/2605)
2025-12-01 13:17:48,676 [INFO] Skipping bill 1841416 - already processed (782/2605)
2025-12-01 13:17:48,677 [INFO] Skipping bill 1736739 - already processed (783/2605)
2025-12-01 13:17:48,677 [INFO] Skipping bill 1737586 - already processed (784/2605)
2025-12-01 13:17:48,677 [INFO] Skipping bill 1884557 - already
processed (785/2605)
2025-12-01 13:17:48,677 [INFO] Processing 786/2605: Bill ID 1875094
2025-12-01 13:17:49,713 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:17:49,716 [ERROR] Failed to generate report for bill 1875094: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 281291 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:17:50,727 [INFO] Processing 787/2605: Bill ID 1755026
2025-12-01 13:17:51,556 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:17:51,559 [ERROR] Failed to generate report for bill 1755026: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 211752 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:17:52,571 [INFO] Processing 788/2605: Bill ID 1871591
2025-12-01 13:17:53,502 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:17:53,505 [ERROR] Failed to generate report for bill 1871591: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 247438 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:17:54,515 [INFO] Processing 789/2605: Bill ID 1760451
2025-12-01 13:17:55,756 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:17:55,758 [ERROR] Failed to generate report for bill 1760451: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 254452 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:17:56,768 [INFO] Processing 790/2605: Bill ID 1880948
2025-12-01 13:17:57,802 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:17:57,806 [ERROR] Failed to generate report for bill 1880948: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 280764 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:17:57,860 [INFO] Saved 2605 reports to data/bill_reports.json
2025-12-01 13:17:57,860 [INFO] Progress: 790/2605 - Processed: 0, Skipped: 753, Errors: 37
2025-12-01 13:17:58,865 [INFO] Processing 791/2605: Bill ID 1775764
2025-12-01 13:17:59,851 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:17:59,854 [ERROR] Failed to generate report for bill 1775764: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 323686 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:18:00,862 [INFO] Processing 792/2605: Bill ID 1884634
2025-12-01 13:18:01,975 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:18:01,977 [ERROR] Failed to generate report for bill 1884634: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 362014 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
        ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
           ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 362014 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:18:02,989 [INFO] Skipping bill 2000828 - already processed (793/2605)
2025-12-01 13:18:02,990 [INFO] Skipping bill 2001551 - already processed (794/2605)
2025-12-01 13:18:02,990 [INFO] Skipping bill 1997130 - already processed (795/2605)
2025-12-01 13:18:02,990 [INFO] Skipping bill 2046647 - already processed (796/2605)
2025-12-01 13:18:02,990 [INFO] Skipping bill 2004206 - already processed (797/2605)
2025-12-01 13:18:02,990 [INFO] Skipping bill 1998184 - already processed (798/2605)
2025-12-01 13:18:02,990 [INFO] Skipping bill 2002506 - already processed (799/2605)
2025-12-01 13:18:02,991 [INFO] Skipping bill 2002695 - already processed (800/2605)
2025-12-01 13:18:02,991 [INFO] Skipping bill 2047070 - already processed (801/2605)
2025-12-01 13:18:02,991 [INFO] Skipping bill 2002923 - already processed (802/2605)
2025-12-01 13:18:02,991 [INFO] Skipping bill 1998946 - already processed (803/2605)
2025-12-01 13:18:02,991 [INFO] Skipping bill 1997259 - already processed (804/2605)
2025-12-01 13:18:02,992 [INFO] Skipping bill 2001269 - already
processed (805/2605)
2025-12-01 13:18:02,992 [INFO] Skipping bill 2000625 - already processed (806/2605)
2025-12-01 13:18:02,992 [INFO] Skipping bill 2002705 - already processed (807/2605)
2025-12-01 13:18:02,992 [INFO] Skipping bill 2046676 - already processed (808/2605)
2025-12-01 13:18:02,992 [INFO] Skipping bill 2046660 - already processed (809/2605)
2025-12-01 13:18:02,992 [INFO] Skipping bill 2003933 - already processed (810/2605)
2025-12-01 13:18:02,993 [INFO] Skipping bill 1997268 - already processed (811/2605)
2025-12-01 13:18:02,993 [INFO] Skipping bill 2019724 - already processed (812/2605)
2025-12-01 13:18:02,993 [INFO] Skipping bill 1997990 - already processed (813/2605)
2025-12-01 13:18:02,993 [INFO] Skipping bill 1998675 - already processed (814/2605)
2025-12-01 13:18:02,993 [INFO] Skipping bill 2002243 - already processed (815/2605)
2025-12-01 13:18:02,993 [INFO] Skipping bill 1997584 - already processed (816/2605)
2025-12-01 13:18:02,993 [INFO] Skipping bill 2002929 - already processed (817/2605)
2025-12-01 13:18:02,994 [INFO] Skipping bill 2001175 - already processed (818/2605)
2025-12-01 13:18:02,994 [INFO] Skipping bill 1998815 - already processed (819/2605)
2025-12-01 13:18:02,994 [INFO] Skipping bill 1998575 - already processed (820/2605)
2025-12-01 13:18:02,994 [INFO] Skipping bill 1999210 - already processed (821/2605)
2025-12-01 13:18:02,994 [INFO] Skipping bill 2001320 - already processed (822/2605)
2025-12-01 13:18:02,995 [INFO] Skipping bill 2053304 - already processed (823/2605)
2025-12-01 13:18:02,995 [INFO] Skipping bill 2001993 - already processed (824/2605)
2025-12-01 13:18:02,995 [INFO] Skipping bill 1999288 - already processed (825/2605)
2025-12-01 13:18:02,995 [INFO] Skipping bill 1998331 - already processed (826/2605)
2025-12-01 13:18:02,995 [INFO] Skipping bill 2003746 - already processed (827/2605)
2025-12-01 13:18:02,995 [INFO] Skipping bill 1927181 - already processed (828/2605)
2025-12-01 13:18:02,995 [INFO] Skipping bill 2030259 - already processed (829/2605)
2025-12-01 13:18:02,995 [INFO] Skipping bill 1997622 - already processed (830/2605)
2025-12-01 13:18:02,995 [INFO] Processing 831/2605: Bill ID 2028594
2025-12-01 13:18:03,844 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:18:03,846 [ERROR] Failed to generate report for bill 2028594: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 252856 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:18:04,856 [INFO] Processing 832/2605: Bill ID 2038620
2025-12-01 13:18:05,893 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:18:05,895 [ERROR] Failed to generate report for bill 2038620: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 311445 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:18:06,904 [INFO] Processing 833/2605: Bill ID 2024637
2025-12-01 13:18:07,735 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:18:07,737 [ERROR] Failed to generate report for bill 2024637: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 218599 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:18:08,747 [INFO] Skipping bill 1780182 - already processed (834/2605)
2025-12-01 13:18:08,748 [INFO] Skipping bill 1895692 - already processed (835/2605)
2025-12-01 13:18:08,748 [INFO] Skipping bill 1780190 - already processed (836/2605)
2025-12-01 13:18:08,749 [INFO] Skipping bill 1780196 - already processed (837/2605)
2025-12-01 13:18:08,749 [INFO] Skipping bill 1780166 - already processed (838/2605)
2025-12-01 13:18:08,749 [INFO] Skipping bill 1888099 - already processed (839/2605)
2025-12-01 13:18:08,749 [INFO] Skipping bill 1852983 - already processed (840/2605)
2025-12-01 13:18:08,749 [INFO] Skipping bill 1852813 - already processed (841/2605)
2025-12-01 13:18:08,749 [INFO] Skipping bill 2037995 - already processed (842/2605)
2025-12-01 13:18:08,749 [INFO] Skipping bill 2043787 - already processed (843/2605)
2025-12-01 13:18:08,750 [INFO] Skipping bill 2035241 - already processed (844/2605)
2025-12-01 13:18:08,750 [INFO] Skipping bill 2035278 - already processed (845/2605)
2025-12-01 13:18:08,750 [INFO] Skipping bill 2038014 - already
processed (846/2605)
2025-12-01 13:18:08,750 [INFO] Skipping bill 2009885 - already processed (847/2605)
2025-12-01 13:18:08,750 [INFO] Skipping bill 2035768 - already processed (848/2605)
2025-12-01 13:18:08,750 [INFO] Skipping bill 2025453 - already processed (849/2605)
2025-12-01 13:18:08,750 [INFO] Skipping bill 2038856 - already processed (850/2605)
2025-12-01 13:18:08,751 [INFO] Skipping bill 2009892 - already processed (851/2605)
2025-12-01 13:18:08,751 [INFO] Skipping bill 1861260 - already processed (852/2605)
2025-12-01 13:18:08,751 [INFO] Skipping bill 1856334 - already processed (853/2605)
2025-12-01 13:18:08,752 [INFO] Skipping bill 1856821 - already processed (854/2605)
2025-12-01 13:18:08,752 [INFO] Skipping bill 1864646 - already processed (855/2605)
2025-12-01 13:18:08,752 [INFO] Skipping bill 1860647 - already processed (856/2605)
2025-12-01 13:18:08,752 [INFO] Skipping bill 1707979 - already processed (857/2605)
2025-12-01 13:18:08,752 [INFO] Skipping bill 1643078 - already processed (858/2605)
2025-12-01 13:18:08,752 [INFO] Skipping bill 1651590 - already processed (859/2605)
2025-12-01 13:18:08,752 [INFO] Skipping bill 1852405 - already processed (860/2605)
2025-12-01 13:18:08,752 [INFO] Skipping bill 1852812 - already processed (861/2605)
2025-12-01 13:18:08,752 [INFO] Skipping bill 1858711 - already processed (862/2605)
2025-12-01 13:18:08,752 [INFO] Skipping bill 1853103 - already processed (863/2605)
2025-12-01 13:18:08,752 [INFO] Skipping bill 1851979 - already processed (864/2605)
2025-12-01 13:18:08,752 [INFO] Skipping bill 1859186 - already processed (865/2605)
2025-12-01 13:18:08,752 [INFO] Skipping bill 1740589 - already processed (866/2605)
2025-12-01 13:18:08,752 [INFO] Skipping bill 1741802 - already processed (867/2605)
2025-12-01 13:18:08,752 [INFO] Skipping bill 1860410 - already processed (868/2605)
2025-12-01 13:18:08,752 [INFO] Skipping bill 1957720 - already processed (869/2605)
2025-12-01 13:18:08,752 [INFO] Skipping bill 1974786 - already processed (870/2605)
2025-12-01 13:18:08,752 [INFO] Skipping bill 1989670 - already processed (871/2605)
2025-12-01 13:18:08,752 [INFO] Skipping bill 1979597 - already processed (872/2605)
2025-12-01 13:18:08,752 [INFO] Skipping bill 1984757 - already processed (873/2605)
2025-12-01 13:18:08,752 [INFO] Skipping bill 2009204 - already processed (874/2605)
2025-12-01 13:18:08,752 [INFO] Skipping bill 2015254 - already processed (875/2605)
2025-12-01 13:18:08,753 [INFO] Skipping bill 1974962 - already processed (876/2605)
2025-12-01 13:18:08,753 [INFO] Skipping bill 2009276 - already processed (877/2605)
2025-12-01 13:18:08,753 [INFO] Skipping bill 1989103 - already processed (878/2605)
2025-12-01 13:18:08,753 [INFO] Skipping bill 1984950 - already processed (879/2605)
2025-12-01 13:18:08,753 [INFO] Skipping bill 1975975 - already processed (880/2605)
2025-12-01 13:18:08,753 [INFO] Skipping bill 2004610 - already processed (881/2605)
2025-12-01 13:18:08,753 [INFO] Skipping bill 2004938 - already processed (882/2605)
2025-12-01 13:18:08,753 [INFO] Skipping bill 1992603 - already processed (883/2605)
2025-12-01 13:18:08,753 [INFO] Skipping bill 1992640 - already processed (884/2605)
2025-12-01 13:18:08,753 [INFO] Skipping bill 1996293 - already processed (885/2605)
2025-12-01 13:18:08,753 [INFO] Skipping bill 2011831 - already processed (886/2605)
2025-12-01 13:18:08,753 [INFO] Skipping bill 2012661 - already processed (887/2605)
2025-12-01 13:18:08,753 [INFO] Skipping bill 1950967 - already processed (888/2605)
2025-12-01 13:18:08,753 [INFO] Skipping bill 1994787 - already processed (889/2605)
2025-12-01 13:18:08,753 [INFO] Skipping bill 2011159 - already processed (890/2605)
2025-12-01 13:18:08,753 [INFO] Skipping bill 2006411 - already processed (891/2605)
2025-12-01 13:18:08,753 [INFO] Skipping bill 2011256 - already processed (892/2605)
2025-12-01 13:18:08,753 [INFO] Skipping bill 2004789 - already processed (893/2605)
2025-12-01 13:18:08,753 [INFO] Skipping bill 1981280 - already processed (894/2605)
2025-12-01 13:18:08,753 [INFO] Skipping bill 2009071 - already processed (895/2605)
2025-12-01 13:18:08,753 [INFO] Skipping bill 1967748 - already processed (896/2605)
2025-12-01 13:18:08,753 [INFO] Skipping bill 1707150 - already processed (897/2605)
2025-12-01 13:18:08,753 [INFO] Skipping bill 1669781 - already processed (898/2605)
2025-12-01 13:18:08,753 [INFO] Skipping bill 1643012 - already processed (899/2605)
2025-12-01 13:18:08,754 [INFO] Skipping bill 1848903 - already processed (900/2605)
2025-12-01 13:18:08,754 [INFO] Skipping bill 1848260 - already processed (901/2605)
2025-12-01 13:18:08,754 [INFO] Skipping bill 1820844 - already processed (902/2605)
2025-12-01 13:18:08,754 [INFO] Skipping bill 1851922 - already processed (903/2605)
2025-12-01 13:18:08,754 [INFO] Skipping bill 1850740 - already processed (904/2605)
2025-12-01 13:18:08,754 [INFO] Skipping bill 1838535 - already processed (905/2605)
2025-12-01 13:18:08,754 [INFO] Skipping bill 1851828 - already processed (906/2605)
2025-12-01 13:18:08,754 [INFO] Skipping bill 1863177 - already processed (907/2605)
2025-12-01 13:18:08,754 [INFO] Skipping bill 1852015 - already processed (908/2605)
2025-12-01 13:18:08,754 [INFO] Skipping bill 1818886 - already processed (909/2605)
2025-12-01 13:18:08,754 [INFO] Skipping bill 1852513 - already processed (910/2605)
2025-12-01 13:18:08,754 [INFO] Processing 911/2605: Bill ID 1851836
2025-12-01 13:18:09,377 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:18:09,380 [ERROR] Failed to generate report for bill 1851836: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 185865 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:18:10,390 [INFO] Skipping bill 1933975 - already processed (912/2605)
2025-12-01 13:18:10,390 [INFO] Skipping bill 1935092 - already processed (913/2605)
2025-12-01 13:18:10,391 [INFO] Skipping bill 1937681 - already processed (914/2605)
2025-12-01 13:18:10,391 [INFO] Skipping bill 1927333 - already processed (915/2605)
2025-12-01 13:18:10,391 [INFO] Skipping bill 1936069 - already processed (916/2605)
2025-12-01 13:18:10,391 [INFO] Skipping bill 1940299 - already processed (917/2605)
2025-12-01 13:18:10,391 [INFO] Skipping bill 1911677 - already processed (918/2605)
2025-12-01 13:18:10,391 [INFO] Skipping bill 1929973 - already processed (919/2605)
2025-12-01 13:18:10,392 [INFO] Skipping bill 1910359 - already processed (920/2605)
2025-12-01 13:18:10,392 [INFO] Skipping bill 1934687 - already processed (921/2605)
2025-12-01 13:18:10,392 [INFO] Skipping bill 1930038 - already processed (922/2605)
2025-12-01 13:18:10,392 [INFO] Skipping bill 1925325 - already processed (923/2605)
2025-12-01 13:18:10,392 [INFO] Skipping bill 1933890 - already
processed (924/2605) 2025-12-01 13:18:10,393 [INFO] Skipping bill 1934898 - already processed (925/2605) 2025-12-01 13:18:10,393 [INFO] Skipping bill 2034194 - already processed (926/2605) 2025-12-01 13:18:10,393 [INFO] Skipping bill 1972440 - already processed (927/2605) 2025-12-01 13:18:10,393 [INFO] Skipping bill 1934020 - already processed (928/2605) 2025-12-01 13:18:10,393 [INFO] Skipping bill 1912210 - already processed (929/2605) 2025-12-01 13:18:10,393 [INFO] Skipping bill 1634819 - already processed (930/2605) 2025-12-01 13:18:10,393 [INFO] Skipping bill 1634779 - already processed (931/2605) 2025-12-01 13:18:10,394 [INFO] Skipping bill 1836873 - already processed (932/2605) 2025-12-01 13:18:10,394 [INFO] Skipping bill 1834678 - already processed (933/2605) 2025-12-01 13:18:10,394 [INFO] Skipping bill 1790707 - already processed (934/2605) 2025-12-01 13:18:10,394 [INFO] Skipping bill 1852775 - already processed (935/2605) 2025-12-01 13:18:10,394 [INFO] Skipping bill 1897040 - already processed (936/2605) 2025-12-01 13:18:10,394 [INFO] Skipping bill 1898466 - already processed (937/2605) 2025-12-01 13:18:10,394 [INFO] Skipping bill 1893847 - already processed (938/2605) 2025-12-01 13:18:10,395 [INFO] Skipping bill 1983834 - already processed (939/2605) 2025-12-01 13:18:10,395 [INFO] Skipping bill 1988287 - already processed (940/2605) 2025-12-01 13:18:10,395 [INFO] Skipping bill 1894415 - already processed (941/2605) 2025-12-01 13:18:10,395 [INFO] Skipping bill 1917533 - already processed (942/2605) 2025-12-01 13:18:10,395 [INFO] Skipping bill 1900966 - already processed (943/2605) 2025-12-01 13:18:10,395 [INFO] Skipping bill 1972401 - already processed (944/2605) 2025-12-01 13:18:10,395 [INFO] Skipping bill 1988699 - already processed (945/2605) 2025-12-01 13:18:10,395 [INFO] Skipping bill 1988844 - already processed (946/2605) 2025-12-01 13:18:10,395 [INFO] Skipping bill 1894126 - already processed (947/2605) 2025-12-01 13:18:10,395 [INFO] Skipping bill 
1974757 - already processed (948/2605) 2025-12-01 13:18:10,395 [INFO] Skipping bill 1717719 - already processed (949/2605) 2025-12-01 13:18:10,396 [INFO] Skipping bill 1912107 - already processed (950/2605) 2025-12-01 13:18:10,396 [INFO] Skipping bill 1941091 - already processed (951/2605) 2025-12-01 13:18:10,396 [INFO] Skipping bill 1916250 - already processed (952/2605) 2025-12-01 13:18:10,396 [INFO] Skipping bill 1974033 - already processed (953/2605) 2025-12-01 13:18:10,396 [INFO] Skipping bill 1895954 - already processed (954/2605) 2025-12-01 13:18:10,396 [INFO] Skipping bill 1974042 - already processed (955/2605) 2025-12-01 13:18:10,396 [INFO] Skipping bill 1981849 - already processed (956/2605) 2025-12-01 13:18:10,396 [INFO] Skipping bill 1979780 - already processed (957/2605) 2025-12-01 13:18:10,396 [INFO] Skipping bill 1896111 - already processed (958/2605) 2025-12-01 13:18:10,396 [INFO] Skipping bill 1971592 - already processed (959/2605) 2025-12-01 13:18:10,396 [INFO] Skipping bill 1971640 - already processed (960/2605) 2025-12-01 13:18:10,396 [INFO] Skipping bill 1896588 - already processed (961/2605) 2025-12-01 13:18:10,396 [INFO] Skipping bill 1981663 - already processed (962/2605) 2025-12-01 13:18:10,396 [INFO] Skipping bill 1867796 - already processed (963/2605) 2025-12-01 13:18:10,396 [INFO] Skipping bill 1867828 - already processed (964/2605) 2025-12-01 13:18:10,396 [INFO] Skipping bill 1813907 - already processed (965/2605) 2025-12-01 13:18:10,396 [INFO] Skipping bill 1814493 - already processed (966/2605) 2025-12-01 13:18:10,396 [INFO] Skipping bill 1867439 - already processed (967/2605) 2025-12-01 13:18:10,396 [INFO] Skipping bill 1814241 - already processed (968/2605) 2025-12-01 13:18:10,397 [INFO] Skipping bill 1935238 - already processed (969/2605) 2025-12-01 13:18:10,397 [INFO] Skipping bill 1908945 - already processed (970/2605) 2025-12-01 13:18:10,397 [INFO] Skipping bill 1980982 - already processed (971/2605) 2025-12-01 13:18:10,397 
[INFO] Skipping bill 1934094 - already processed (972/2605) 2025-12-01 13:18:10,397 [INFO] Skipping bill 1931194 - already processed (973/2605) 2025-12-01 13:18:10,397 [INFO] Skipping bill 1915534 - already processed (974/2605) 2025-12-01 13:18:10,397 [INFO] Skipping bill 1927914 - already processed (975/2605) 2025-12-01 13:18:10,397 [INFO] Skipping bill 1710815 - already processed (976/2605) 2025-12-01 13:18:10,397 [INFO] Skipping bill 1748189 - already processed (977/2605) 2025-12-01 13:18:10,397 [INFO] Skipping bill 1746365 - already processed (978/2605) 2025-12-01 13:18:10,397 [INFO] Skipping bill 1965229 - already processed (979/2605) 2025-12-01 13:18:10,397 [INFO] Skipping bill 1999738 - already processed (980/2605) 2025-12-01 13:18:10,397 [INFO] Skipping bill 1989648 - already processed (981/2605) 2025-12-01 13:18:10,397 [INFO] Skipping bill 1946188 - already processed (982/2605) 2025-12-01 13:18:10,397 [INFO] Skipping bill 1892638 - already processed (983/2605) 2025-12-01 13:18:10,397 [INFO] Skipping bill 1944647 - already processed (984/2605) 2025-12-01 13:18:10,397 [INFO] Skipping bill 1983017 - already processed (985/2605) 2025-12-01 13:18:10,397 [INFO] Skipping bill 1954626 - already processed (986/2605) 2025-12-01 13:18:10,397 [INFO] Skipping bill 1977147 - already processed (987/2605) 2025-12-01 13:18:10,398 [INFO] Skipping bill 2013424 - already processed (988/2605) 2025-12-01 13:18:10,398 [INFO] Skipping bill 2013451 - already processed (989/2605) 2025-12-01 13:18:10,398 [INFO] Skipping bill 1953001 - already processed (990/2605) 2025-12-01 13:18:10,398 [INFO] Skipping bill 1982880 - already processed (991/2605) 2025-12-01 13:18:10,398 [INFO] Skipping bill 1989793 - already processed (992/2605) 2025-12-01 13:18:10,398 [INFO] Skipping bill 1954479 - already processed (993/2605) 2025-12-01 13:18:10,398 [INFO] Skipping bill 2031601 - already processed (994/2605) 2025-12-01 13:18:10,398 [INFO] Skipping bill 2009433 - already processed (995/2605) 
2025-12-01 13:18:10,398 [INFO] Skipping bill 1901514 - already processed (996/2605)
2025-12-01 13:18:10,398 [INFO] Skipping bill 1651925 - already processed (997/2605)
2025-12-01 13:18:10,398 [INFO] Skipping bill 1793373 - already processed (998/2605)
2025-12-01 13:18:10,398 [INFO] Skipping bill 1793039 - already processed (999/2605)
2025-12-01 13:18:10,398 [INFO] Skipping bill 1792971 - already processed (1000/2605)
2025-12-01 13:18:10,398 [INFO] Skipping bill 1793409 - already processed (1001/2605)
2025-12-01 13:18:10,398 [INFO] Skipping bill 1793958 - already processed (1002/2605)
2025-12-01 13:18:10,398 [INFO] Skipping bill 1793284 - already processed (1003/2605)
2025-12-01 13:18:10,398 [INFO] Skipping bill 1938552 - already processed (1004/2605)
2025-12-01 13:18:10,398 [INFO] Skipping bill 1922870 - already processed (1005/2605)
2025-12-01 13:18:10,398 [INFO] Skipping bill 1803710 - already processed (1006/2605)
2025-12-01 13:18:10,398 [INFO] Skipping bill 1889722 - already processed (1007/2605)
2025-12-01 13:18:10,399 [INFO] Skipping bill 1892083 - already processed (1008/2605)
2025-12-01 13:18:10,399 [INFO] Skipping bill 1889346 - already processed (1009/2605)
2025-12-01 13:18:10,399 [INFO] Skipping bill 1889719 - already processed (1010/2605)
2025-12-01 13:18:10,399 [INFO] Skipping bill 1889335 - already processed (1011/2605)
2025-12-01 13:18:10,399 [INFO] Skipping bill 1897572 - already processed (1012/2605)
2025-12-01 13:18:10,399 [INFO] Skipping bill 1887538 - already processed (1013/2605)
2025-12-01 13:18:10,399 [INFO] Skipping bill 1887101 - already processed (1014/2605)
2025-12-01 13:18:10,399 [INFO] Skipping bill 1888624 - already processed (1015/2605)
2025-12-01 13:18:10,399 [INFO] Skipping bill 1877673 - already processed (1016/2605)
2025-12-01 13:18:10,399 [INFO] Skipping bill 1897803 - already processed (1017/2605)
2025-12-01 13:18:10,399 [INFO] Skipping bill 1889758 - already processed (1018/2605)
2025-12-01 13:18:10,399 [INFO] Skipping bill 1897565 - already processed (1019/2605)
2025-12-01 13:18:10,399 [INFO] Skipping bill 1853521 - already processed (1020/2605)
2025-12-01 13:18:10,399 [INFO] Skipping bill 1864839 - already processed (1021/2605)
2025-12-01 13:18:10,399 [INFO] Skipping bill 1879513 - already processed (1022/2605)
2025-12-01 13:18:10,399 [INFO] Skipping bill 1878078 - already processed (1023/2605)
2025-12-01 13:18:10,399 [INFO] Skipping bill 2013662 - already processed (1024/2605)
2025-12-01 13:18:10,399 [INFO] Skipping bill 1897603 - already processed (1025/2605)
2025-12-01 13:18:10,399 [INFO] Skipping bill 1881186 - already processed (1026/2605)
2025-12-01 13:18:10,399 [INFO] Skipping bill 1983797 - already processed (1027/2605)
2025-12-01 13:18:10,399 [INFO] Skipping bill 2023789 - already processed (1028/2605)
2025-12-01 13:18:10,399 [INFO] Skipping bill 1878049 - already processed (1029/2605)
2025-12-01 13:18:10,399 [INFO] Skipping bill 2052496 - already processed (1030/2605)
2025-12-01 13:18:10,400 [INFO] Skipping bill 1807241 - already processed (1031/2605)
2025-12-01 13:18:10,400 [INFO] Skipping bill 1881870 - already processed (1032/2605)
2025-12-01 13:18:10,400 [INFO] Skipping bill 1881843 - already processed (1033/2605)
2025-12-01 13:18:10,400 [INFO] Skipping bill 2030230 - already processed (1034/2605)
2025-12-01 13:18:10,400 [INFO] Skipping bill 2022901 - already processed (1035/2605)
2025-12-01 13:18:10,400 [INFO] Skipping bill 1896879 - already processed (1036/2605)
2025-12-01 13:18:10,400 [INFO] Skipping bill 1889701 - already processed (1037/2605)
2025-12-01 13:18:10,400 [INFO] Skipping bill 1970250 - already processed (1038/2605)
2025-12-01 13:18:10,400 [INFO] Skipping bill 2037153 - already processed (1039/2605)
2025-12-01 13:18:10,400 [INFO] Skipping bill 2013635 - already processed (1040/2605)
2025-12-01 13:18:10,400 [INFO] Skipping bill 1883140 - already processed (1041/2605)
2025-12-01 13:18:10,400 [INFO] Skipping bill 1853367 - already processed (1042/2605)
2025-12-01 13:18:10,400 [INFO] Skipping bill 1801284 - already processed (1043/2605)
2025-12-01 13:18:10,400 [INFO] Skipping bill 1889518 - already processed (1044/2605)
2025-12-01 13:18:10,400 [INFO] Skipping bill 1888073 - already processed (1045/2605)
2025-12-01 13:18:10,400 [INFO] Skipping bill 2052173 - already processed (1046/2605)
2025-12-01 13:18:10,400 [INFO] Skipping bill 2047520 - already processed (1047/2605)
2025-12-01 13:18:10,400 [INFO] Skipping bill 1889754 - already processed (1048/2605)
2025-12-01 13:18:10,400 [INFO] Skipping bill 1835303 - already processed (1049/2605)
2025-12-01 13:18:10,400 [INFO] Skipping bill 1949479 - already processed (1050/2605)
2025-12-01 13:18:10,400 [INFO] Skipping bill 2022816 - already processed (1051/2605)
2025-12-01 13:18:10,400 [INFO] Skipping bill 1872559 - already processed (1052/2605)
2025-12-01 13:18:10,400 [INFO] Skipping bill 1875857 - already processed (1053/2605)
2025-12-01 13:18:10,401 [INFO] Skipping bill 1876467 - already processed (1054/2605)
2025-12-01 13:18:10,401 [INFO] Skipping bill 1876586 - already processed (1055/2605)
2025-12-01 13:18:10,401 [INFO] Skipping bill 2038328 - already processed (1056/2605)
2025-12-01 13:18:10,401 [INFO] Skipping bill 1878887 - already processed (1057/2605)
2025-12-01 13:18:10,401 [INFO] Skipping bill 1853095 - already processed (1058/2605)
2025-12-01 13:18:10,401 [INFO] Skipping bill 1805407 - already processed (1059/2605)
2025-12-01 13:18:10,401 [INFO] Skipping bill 2022907 - already processed (1060/2605)
2025-12-01 13:18:10,401 [INFO] Skipping bill 1949574 - already processed (1061/2605)
2025-12-01 13:18:10,401 [INFO] Skipping bill 1844841 - already processed (1062/2605)
2025-12-01 13:18:10,401 [INFO] Skipping bill 1864295 - already processed (1063/2605)
2025-12-01 13:18:10,401 [INFO] Skipping bill 1881176 - already processed (1064/2605)
2025-12-01 13:18:10,401 [INFO] Skipping bill 1837365 - already processed (1065/2605)
2025-12-01 13:18:10,401 [INFO] Skipping bill 1837180 - already processed (1066/2605)
2025-12-01 13:18:10,401 [INFO] Skipping bill 1887099 - already processed (1067/2605)
2025-12-01 13:18:10,401 [INFO] Skipping bill 2028679 - already processed (1068/2605)
2025-12-01 13:18:10,401 [INFO] Skipping bill 2030354 - already processed (1069/2605)
2025-12-01 13:18:10,401 [INFO] Skipping bill 1882474 - already processed (1070/2605)
2025-12-01 13:18:10,401 [INFO] Skipping bill 1964010 - already processed (1071/2605)
2025-12-01 13:18:10,401 [INFO] Skipping bill 2008967 - already processed (1072/2605)
2025-12-01 13:18:10,401 [INFO] Skipping bill 1881178 - already processed (1073/2605)
2025-12-01 13:18:10,401 [INFO] Skipping bill 2037324 - already processed (1074/2605)
2025-12-01 13:18:10,401 [INFO] Skipping bill 1806224 - already processed (1075/2605)
2025-12-01 13:18:10,401 [INFO] Skipping bill 1837135 - already processed (1076/2605)
2025-12-01 13:18:10,401 [INFO] Skipping bill 1805930 - already processed (1077/2605)
2025-12-01 13:18:10,402 [INFO] Skipping bill 1803406 - already processed (1078/2605)
2025-12-01 13:18:10,402 [INFO] Skipping bill 1883773 - already processed (1079/2605)
2025-12-01 13:18:10,402 [INFO] Skipping bill 1994137 - already processed (1080/2605)
2025-12-01 13:18:10,402 [INFO] Skipping bill 1881306 - already processed (1081/2605)
2025-12-01 13:18:10,402 [INFO] Skipping bill 1889726 - already processed (1082/2605)
2025-12-01 13:18:10,402 [INFO] Skipping bill 1889593 - already processed (1083/2605)
2025-12-01 13:18:10,402 [INFO] Processing 1084/2605: Bill ID 1883494
2025-12-01 13:18:11,129 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:18:11,131 [ERROR] Failed to generate report for bill 1883494: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 245791 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
           ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 245791 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:18:12,141 [INFO] Processing 1085/2605: Bill ID 1883535
2025-12-01 13:18:12,958 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:18:12,961 [ERROR] Failed to generate report for bill 1883535: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 244625 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
           ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 244625 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:18:13,971 [INFO] Processing 1086/2605: Bill ID 2038569
2025-12-01 13:18:14,880 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:18:14,882 [ERROR] Failed to generate report for bill 2038569: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 248177 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
           ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 248177 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:18:15,892 [INFO] Processing 1087/2605: Bill ID 2038571
2025-12-01 13:18:16,747 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:18:16,750 [ERROR] Failed to generate report for bill 2038571: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 248161 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
           ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 248161 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:18:17,760 [INFO] Skipping bill 1666814 - already processed (1088/2605)
2025-12-01 13:18:17,761 [INFO] Skipping bill 1722011 - already processed (1089/2605)
2025-12-01 13:18:17,761 [INFO] Skipping bill 1724398 - already processed (1090/2605)
2025-12-01 13:18:17,761 [INFO] Skipping bill 1676083 - already processed (1091/2605)
2025-12-01 13:18:17,762 [INFO] Skipping bill 1824011 - already processed (1092/2605)
2025-12-01 13:18:17,762 [INFO] Skipping bill 1824228 - already processed (1093/2605)
2025-12-01 13:18:17,762 [INFO] Skipping bill 1824028 - already processed (1094/2605)
2025-12-01 13:18:17,762 [INFO] Skipping bill 1834441 - already processed (1095/2605)
2025-12-01 13:18:17,762 [INFO] Skipping bill 1908238 - already processed (1096/2605)
2025-12-01 13:18:17,762 [INFO] Skipping bill 1967640 - already processed (1097/2605)
2025-12-01 13:18:17,762 [INFO] Skipping bill 1935448 - already processed (1098/2605)
2025-12-01 13:18:17,763 [INFO] Skipping bill 1987611 - already processed (1099/2605)
2025-12-01 13:18:17,763 [INFO] Skipping bill 1964156 - already processed (1100/2605)
2025-12-01 13:18:17,763 [INFO] Skipping bill 1947221 - already processed (1101/2605)
2025-12-01 13:18:17,763 [INFO] Skipping bill 1943110 - already processed (1102/2605)
2025-12-01 13:18:17,763 [INFO] Skipping bill 1964415 - already processed (1103/2605)
2025-12-01 13:18:17,763 [INFO] Skipping bill 1996731 - already processed (1104/2605)
2025-12-01 13:18:17,763 [INFO] Skipping bill 1944685 - already processed (1105/2605)
2025-12-01 13:18:17,764 [INFO] Skipping bill 1936020 - already processed (1106/2605)
2025-12-01 13:18:17,764 [INFO] Skipping bill 1947285 - already processed (1107/2605)
2025-12-01 13:18:17,764 [INFO] Skipping bill 1949498 - already processed (1108/2605)
2025-12-01 13:18:17,764 [INFO] Skipping bill 1933085 - already processed (1109/2605)
2025-12-01 13:18:17,764 [INFO] Skipping bill 1881403 - already processed (1110/2605)
2025-12-01 13:18:17,764 [INFO] Skipping bill 1878440 - already processed (1111/2605)
2025-12-01 13:18:17,764 [INFO] Skipping bill 1874641 - already processed (1112/2605)
2025-12-01 13:18:17,765 [INFO] Skipping bill 1780447 - already processed (1113/2605)
2025-12-01 13:18:17,765 [INFO] Skipping bill 1829313 - already processed (1114/2605)
2025-12-01 13:18:17,765 [INFO] Skipping bill 1876168 - already processed (1115/2605)
2025-12-01 13:18:17,765 [INFO] Skipping bill 1878357 - already processed (1116/2605)
2025-12-01 13:18:17,765 [INFO] Skipping bill 1801087 - already processed (1117/2605)
2025-12-01 13:18:17,765 [INFO] Skipping bill 1878533 - already processed (1118/2605)
2025-12-01 13:18:17,765 [INFO] Skipping bill 1781971 - already processed (1119/2605)
2025-12-01 13:18:17,765 [INFO] Skipping bill 1836944 - already processed (1120/2605)
2025-12-01 13:18:17,766 [INFO] Skipping bill 1773855 - already processed (1121/2605)
2025-12-01 13:18:17,766 [INFO] Skipping bill 1774758 - already processed (1122/2605)
2025-12-01 13:18:17,766 [INFO] Skipping bill 1779189 - already processed (1123/2605)
2025-12-01 13:18:17,766 [INFO] Skipping bill 1780403 - already processed (1124/2605)
2025-12-01 13:18:17,766 [INFO] Skipping bill 1882902 - already processed (1125/2605)
2025-12-01 13:18:17,766 [INFO] Skipping bill 1761023 - already processed (1126/2605)
2025-12-01 13:18:17,766 [INFO] Skipping bill 1763282 - already processed (1127/2605)
2025-12-01 13:18:17,767 [INFO] Skipping bill 1756406 - already processed (1128/2605)
2025-12-01 13:18:17,767 [INFO] Skipping bill 1721336 - already processed (1129/2605)
2025-12-01 13:18:17,767 [INFO] Skipping bill 1865663 - already processed (1130/2605)
2025-12-01 13:18:17,767 [INFO] Skipping bill 1884682 - already processed (1131/2605)
2025-12-01 13:18:17,767 [INFO] Skipping bill 1879124 - already processed (1132/2605)
2025-12-01 13:18:17,767 [INFO] Skipping bill 1813023 - already processed (1133/2605)
2025-12-01 13:18:17,767 [INFO] Skipping bill 1780572 - already processed (1134/2605)
2025-12-01 13:18:17,768 [INFO] Skipping bill 1796023 - already processed (1135/2605)
2025-12-01 13:18:17,768 [INFO] Skipping bill 1796213 - already processed (1136/2605)
2025-12-01 13:18:17,768 [INFO] Skipping bill 1841005 - already processed (1137/2605)
2025-12-01 13:18:17,768 [INFO] Skipping bill 1861287 - already processed (1138/2605)
2025-12-01 13:18:17,768 [INFO] Skipping bill 1878752 - already processed (1139/2605)
2025-12-01 13:18:17,768 [INFO] Skipping bill 1813101 - already processed (1140/2605)
2025-12-01 13:18:17,768 [INFO] Skipping bill 1768635 - already processed (1141/2605)
2025-12-01 13:18:17,768 [INFO] Skipping bill 1767924 - already processed (1142/2605)
2025-12-01 13:18:17,769 [INFO] Skipping bill 1641754 - already processed (1143/2605)
2025-12-01 13:18:17,769 [INFO] Skipping bill 1882889 - already processed (1144/2605)
2025-12-01 13:18:17,769 [INFO] Skipping bill 1729291 - already processed (1145/2605)
2025-12-01 13:18:17,769 [INFO] Skipping bill 1773906 - already processed (1146/2605)
2025-12-01 13:18:17,770 [INFO] Skipping bill 1839957 - already processed (1147/2605)
2025-12-01 13:18:17,770 [INFO] Skipping bill 1843965 - already processed (1148/2605)
2025-12-01 13:18:17,770 [INFO] Skipping bill 1879710 - already processed (1149/2605)
2025-12-01 13:18:17,770 [INFO] Skipping bill 1763606 - already processed (1150/2605)
2025-12-01 13:18:17,770 [INFO] Skipping bill 1780432 - already processed (1151/2605)
2025-12-01 13:18:17,770 [INFO] Skipping bill 1812765 - already processed (1152/2605)
2025-12-01 13:18:17,770 [INFO] Skipping bill 1836858 - already processed (1153/2605)
2025-12-01 13:18:17,770 [INFO] Skipping bill 1864293 - already processed (1154/2605)
2025-12-01 13:18:17,770 [INFO] Skipping bill 1770114 - already processed (1155/2605)
2025-12-01 13:18:17,770 [INFO] Skipping bill 1733127 - already processed (1156/2605)
2025-12-01 13:18:17,770 [INFO] Skipping bill 1762026 - already processed (1157/2605)
2025-12-01 13:18:17,770 [INFO] Skipping bill 1829537 - already processed (1158/2605)
2025-12-01 13:18:17,770 [INFO] Skipping bill 1878142 - already processed (1159/2605)
2025-12-01 13:18:17,770 [INFO] Skipping bill 1880765 - already processed (1160/2605)
2025-12-01 13:18:17,771 [INFO] Skipping bill 1762041 - already processed (1161/2605)
2025-12-01 13:18:17,771 [INFO] Skipping bill 1646230 - already processed (1162/2605)
2025-12-01 13:18:17,771 [INFO] Skipping bill 1762213 - already processed (1163/2605)
2025-12-01 13:18:17,771 [INFO] Skipping bill 1779393 - already processed (1164/2605)
2025-12-01 13:18:17,771 [INFO] Skipping bill 1878544 - already processed (1165/2605)
2025-12-01 13:18:17,771 [INFO] Skipping bill 1780459 - already processed (1166/2605)
2025-12-01 13:18:17,771 [INFO] Skipping bill 1781963 - already processed (1167/2605)
2025-12-01 13:18:17,771 [INFO] Skipping bill 1758293 - already processed (1168/2605)
2025-12-01 13:18:17,771 [INFO] Skipping bill 1768495 - already processed (1169/2605)
2025-12-01 13:18:17,771 [INFO] Skipping bill 1773860 - already processed (1170/2605)
2025-12-01 13:18:17,771 [INFO] Skipping bill 1864226 - already processed (1171/2605)
2025-12-01 13:18:17,772 [INFO] Skipping bill 1878400 - already processed (1172/2605)
2025-12-01 13:18:17,772 [INFO] Skipping bill 1879652 - already processed (1173/2605)
2025-12-01 13:18:17,772 [INFO] Skipping bill 1865798 - already processed (1174/2605)
2025-12-01 13:18:17,772 [INFO] Skipping bill 1862795 - already processed (1175/2605)
2025-12-01 13:18:17,772 [INFO] Skipping bill 1710243 - already processed (1176/2605)
2025-12-01 13:18:17,772 [INFO] Skipping bill 1818495 - already processed (1177/2605)
2025-12-01 13:18:17,772 [INFO] Skipping bill 1775864 - already processed (1178/2605)
2025-12-01 13:18:17,772 [INFO] Skipping bill 1856196 - already processed (1179/2605)
2025-12-01 13:18:17,772 [INFO] Skipping bill 1791835 - already processed (1180/2605)
2025-12-01 13:18:17,772 [INFO] Skipping bill 1658709 - already processed (1181/2605)
2025-12-01 13:18:17,772 [INFO] Skipping bill 1695187 - already processed (1182/2605)
2025-12-01 13:18:17,772 [INFO] Processing 1183/2605: Bill ID 1818780
2025-12-01 13:18:18,282 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:18:18,283 [ERROR] Failed to generate report for bill 1818780: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 137401 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 137401 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:18:19,288 [INFO] Processing 1184/2605: Bill ID 1818766
2025-12-01 13:18:19,768 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:18:19,772 [ERROR] Failed to generate report for bill 1818766: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 137403 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:18:20,782 [INFO] Skipping bill 1752559 - already processed (1185/2605)
2025-12-01 13:18:20,783 [INFO] Skipping bill 1882942 - already processed (1186/2605)
2025-12-01 13:18:20,783 [INFO] Skipping bill 1766908 - already processed (1187/2605)
2025-12-01 13:18:20,783 [INFO] Skipping bill 1691064 - already processed (1188/2605)
2025-12-01 13:18:20,783 [INFO] Processing 1189/2605: Bill ID 1690030
2025-12-01 13:18:22,379 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:18:22,381 [ERROR] Failed to generate report for bill 1690030: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 566694 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:18:23,392 [INFO] Processing 1190/2605: Bill ID 1690727
2025-12-01 13:18:24,938 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:18:24,941 [ERROR] Failed to generate report for bill 1690727: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 566696 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:18:25,002 [INFO] Saved 2605 reports to data/bill_reports.json
2025-12-01 13:18:25,002 [INFO] Progress: 1190/2605 - Processed: 0, Skipped: 1139, Errors: 51
2025-12-01 13:18:26,008 [INFO] Processing 1191/2605: Bill ID 1875409
2025-12-01 13:18:29,447 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:18:29,450 [ERROR] Failed to generate report for bill 1875409: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1351641 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:18:30,459 [INFO] Processing 1192/2605: Bill ID 1835820
2025-12-01 13:18:33,800 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:18:33,802 [ERROR] Failed to generate report for bill 1835820: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1351620 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:18:34,814 [INFO] Processing 1193/2605: Bill ID 1818459
2025-12-01 13:18:37,428 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:18:37,430 [ERROR] Failed to generate report for bill 1818459: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1029309 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1029309 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:18:38,440 [INFO] Skipping bill 2009915 - already processed (1194/2605) 2025-12-01 13:18:38,441 [INFO] Skipping bill 1917775 - already processed (1195/2605) 2025-12-01 13:18:38,441 [INFO] Skipping bill 1902981 - already processed (1196/2605) 2025-12-01 13:18:38,441 [INFO] Skipping bill 1908626 - already processed (1197/2605) 2025-12-01 13:18:38,441 [INFO] Skipping bill 1903647 - already processed (1198/2605) 2025-12-01 13:18:38,441 [INFO] Skipping bill 1993863 - already processed (1199/2605) 2025-12-01 13:18:38,441 [INFO] Skipping bill 2015656 - already processed (1200/2605) 2025-12-01 13:18:38,441 [INFO] Skipping bill 1909120 - already processed (1201/2605) 2025-12-01 13:18:38,441 [INFO] Skipping bill 2032707 - already processed (1202/2605) 2025-12-01 13:18:38,441 [INFO] Skipping bill 2030838 - already processed (1203/2605) 2025-12-01 13:18:38,441 [INFO] Skipping bill 2033110 - already processed (1204/2605) 2025-12-01 13:18:38,441 [INFO] Skipping bill 1992712 - already processed (1205/2605) 2025-12-01 13:18:38,441 [INFO] Skipping bill 
2010112 - already processed (1206/2605) 2025-12-01 13:18:38,441 [INFO] Skipping bill 2035218 - already processed (1207/2605) 2025-12-01 13:18:38,441 [INFO] Skipping bill 1970759 - already processed (1208/2605) 2025-12-01 13:18:38,442 [INFO] Skipping bill 1917262 - already processed (1209/2605) 2025-12-01 13:18:38,442 [INFO] Skipping bill 2015645 - already processed (1210/2605) 2025-12-01 13:18:38,442 [INFO] Skipping bill 1941920 - already processed (1211/2605) 2025-12-01 13:18:38,442 [INFO] Skipping bill 2041695 - already processed (1212/2605) 2025-12-01 13:18:38,442 [INFO] Skipping bill 2038940 - already processed (1213/2605) 2025-12-01 13:18:38,442 [INFO] Skipping bill 2043998 - already processed (1214/2605) 2025-12-01 13:18:38,442 [INFO] Skipping bill 1903496 - already processed (1215/2605) 2025-12-01 13:18:38,442 [INFO] Skipping bill 1942114 - already processed (1216/2605) 2025-12-01 13:18:38,442 [INFO] Skipping bill 1948978 - already processed (1217/2605) 2025-12-01 13:18:38,442 [INFO] Skipping bill 2025948 - already processed (1218/2605) 2025-12-01 13:18:38,442 [INFO] Skipping bill 2030449 - already processed (1219/2605) 2025-12-01 13:18:38,442 [INFO] Skipping bill 2012463 - already processed (1220/2605) 2025-12-01 13:18:38,442 [INFO] Skipping bill 2036382 - already processed (1221/2605) 2025-12-01 13:18:38,442 [INFO] Skipping bill 1901571 - already processed (1222/2605) 2025-12-01 13:18:38,443 [INFO] Skipping bill 1902589 - already processed (1223/2605) 2025-12-01 13:18:38,443 [INFO] Skipping bill 2045075 - already processed (1224/2605) 2025-12-01 13:18:38,443 [INFO] Skipping bill 2042397 - already processed (1225/2605) 2025-12-01 13:18:38,443 [INFO] Skipping bill 2005892 - already processed (1226/2605) 2025-12-01 13:18:38,443 [INFO] Skipping bill 1995988 - already processed (1227/2605) 2025-12-01 13:18:38,443 [INFO] Skipping bill 1941987 - already processed (1228/2605) 2025-12-01 13:18:38,443 [INFO] Skipping bill 2051432 - already processed (1229/2605) 
2025-12-01 13:18:38,443 [INFO] Skipping bill 2030765 - already processed (1230/2605)
2025-12-01 13:18:38,443 [INFO] Skipping bill 1900450 - already processed (1231/2605)
2025-12-01 13:18:38,443 [INFO] Skipping bill 2032658 - already processed (1232/2605)
2025-12-01 13:18:38,443 [INFO] Skipping bill 1934862 - already processed (1233/2605)
2025-12-01 13:18:38,443 [INFO] Skipping bill 1954914 - already processed (1234/2605)
2025-12-01 13:18:38,443 [INFO] Skipping bill 1908970 - already processed (1235/2605)
2025-12-01 13:18:38,444 [INFO] Skipping bill 2046810 - already processed (1236/2605)
2025-12-01 13:18:38,444 [INFO] Skipping bill 1911503 - already processed (1237/2605)
2025-12-01 13:18:38,444 [INFO] Skipping bill 1917449 - already processed (1238/2605)
2025-12-01 13:18:38,444 [INFO] Skipping bill 2012421 - already processed (1239/2605)
2025-12-01 13:18:38,444 [INFO] Skipping bill 2036409 - already processed (1240/2605)
2025-12-01 13:18:38,444 [INFO] Skipping bill 1930912 - already processed (1241/2605)
2025-12-01 13:18:38,444 [INFO] Skipping bill 2015571 - already processed (1242/2605)
2025-12-01 13:18:38,444 [INFO] Skipping bill 1991849 - already processed (1243/2605)
2025-12-01 13:18:38,444 [INFO] Skipping bill 1909237 - already processed (1244/2605)
2025-12-01 13:18:38,444 [INFO] Skipping bill 1907396 - already processed (1245/2605)
2025-12-01 13:18:38,445 [INFO] Skipping bill 2032681 - already processed (1246/2605)
2025-12-01 13:18:38,445 [INFO] Skipping bill 2031449 - already processed (1247/2605)
2025-12-01 13:18:38,445 [INFO] Skipping bill 2036417 - already processed (1248/2605)
2025-12-01 13:18:38,445 [INFO] Skipping bill 2010242 - already processed (1249/2605)
2025-12-01 13:18:38,445 [INFO] Skipping bill 1902485 - already processed (1250/2605)
2025-12-01 13:18:38,445 [INFO] Skipping bill 2044029 - already processed (1251/2605)
2025-12-01 13:18:38,445 [INFO] Skipping bill 2039479 - already processed (1252/2605)
2025-12-01 13:18:38,445 [INFO] Skipping bill 1993679 - already processed (1253/2605)
2025-12-01 13:18:38,445 [INFO] Skipping bill 1927014 - already processed (1254/2605)
2025-12-01 13:18:38,445 [INFO] Skipping bill 2053531 - already processed (1255/2605)
2025-12-01 13:18:38,445 [INFO] Skipping bill 2012390 - already processed (1256/2605)
2025-12-01 13:18:38,445 [INFO] Skipping bill 2051443 - already processed (1257/2605)
2025-12-01 13:18:38,445 [INFO] Skipping bill 1967476 - already processed (1258/2605)
2025-12-01 13:18:38,446 [INFO] Skipping bill 2039584 - already processed (1259/2605)
2025-12-01 13:18:38,446 [INFO] Skipping bill 1941925 - already processed (1260/2605)
2025-12-01 13:18:38,446 [INFO] Skipping bill 2039602 - already processed (1261/2605)
2025-12-01 13:18:38,446 [INFO] Skipping bill 2021091 - already processed (1262/2605)
2025-12-01 13:18:38,446 [INFO] Skipping bill 2053730 - already processed (1263/2605)
2025-12-01 13:18:38,446 [INFO] Skipping bill 1993748 - already processed (1264/2605)
2025-12-01 13:18:38,446 [INFO] Skipping bill 1907408 - already processed (1265/2605)
2025-12-01 13:18:38,446 [INFO] Skipping bill 2043429 - already processed (1266/2605)
2025-12-01 13:18:38,446 [INFO] Skipping bill 2036445 - already processed (1267/2605)
2025-12-01 13:18:38,446 [INFO] Skipping bill 1948575 - already processed (1268/2605)
2025-12-01 13:18:38,446 [INFO] Skipping bill 2020539 - already processed (1269/2605)
2025-12-01 13:18:38,446 [INFO] Skipping bill 1941981 - already processed (1270/2605)
2025-12-01 13:18:38,446 [INFO] Skipping bill 1985057 - already processed (1271/2605)
2025-12-01 13:18:38,446 [INFO] Skipping bill 2012554 - already processed (1272/2605)
2025-12-01 13:18:38,446 [INFO] Skipping bill 1900469 - already processed (1273/2605)
2025-12-01 13:18:38,446 [INFO] Skipping bill 1949091 - already processed (1274/2605)
2025-12-01 13:18:38,446 [INFO] Skipping bill 1903302 - already processed (1275/2605)
2025-12-01 13:18:38,446 [INFO] Skipping bill 2031820 - already processed (1276/2605)
2025-12-01 13:18:38,446 [INFO] Skipping bill 1986509 - already processed (1277/2605)
2025-12-01 13:18:38,446 [INFO] Skipping bill 1992147 - already processed (1278/2605)
2025-12-01 13:18:38,447 [INFO] Skipping bill 1908565 - already processed (1279/2605)
2025-12-01 13:18:38,447 [INFO] Skipping bill 2018195 - already processed (1280/2605)
2025-12-01 13:18:38,447 [INFO] Skipping bill 1948655 - already processed (1281/2605)
2025-12-01 13:18:38,447 [INFO] Skipping bill 1926957 - already processed (1282/2605)
2025-12-01 13:18:38,447 [INFO] Skipping bill 2007650 - already processed (1283/2605)
2025-12-01 13:18:38,447 [INFO] Skipping bill 1938062 - already processed (1284/2605)
2025-12-01 13:18:38,447 [INFO] Skipping bill 1909167 - already processed (1285/2605)
2025-12-01 13:18:38,447 [INFO] Skipping bill 1910683 - already processed (1286/2605)
2025-12-01 13:18:38,447 [INFO] Skipping bill 1918276 - already processed (1287/2605)
2025-12-01 13:18:38,447 [INFO] Skipping bill 1942634 - already processed (1288/2605)
2025-12-01 13:18:38,447 [INFO] Skipping bill 1947885 - already processed (1289/2605)
2025-12-01 13:18:38,447 [INFO] Skipping bill 2034828 - already processed (1290/2605)
2025-12-01 13:18:38,447 [INFO] Skipping bill 2035534 - already processed (1291/2605)
2025-12-01 13:18:38,447 [INFO] Skipping bill 1937370 - already processed (1292/2605)
2025-12-01 13:18:38,447 [INFO] Skipping bill 2036328 - already processed (1293/2605)
2025-12-01 13:18:38,447 [INFO] Skipping bill 1940048 - already processed (1294/2605)
2025-12-01 13:18:38,447 [INFO] Skipping bill 1990212 - already processed (1295/2605)
2025-12-01 13:18:38,447 [INFO] Skipping bill 1995017 - already processed (1296/2605)
2025-12-01 13:18:38,447 [INFO] Skipping bill 1937257 - already processed (1297/2605)
2025-12-01 13:18:38,447 [INFO] Skipping bill 1900853 - already processed (1298/2605)
2025-12-01 13:18:38,448 [INFO] Skipping bill 1947971 - already processed (1299/2605)
2025-12-01 13:18:38,448 [INFO] Skipping bill 1920984 - already processed (1300/2605)
2025-12-01 13:18:38,448 [INFO] Skipping bill 1902725 - already processed (1301/2605)
2025-12-01 13:18:38,448 [INFO] Skipping bill 1964016 - already processed (1302/2605)
2025-12-01 13:18:38,448 [INFO] Processing 1303/2605: Bill ID 1934576
2025-12-01 13:18:38,966 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:18:38,967 [ERROR] Failed to generate report for bill 1934576: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 132147 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 132147 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:18:39,977 [INFO] Skipping bill 1898800 - already processed (1304/2605)
2025-12-01 13:18:39,979 [INFO] Skipping bill 1971511 - already processed (1305/2605)
2025-12-01 13:18:39,979 [INFO] Processing 1306/2605: Bill ID 1935197
2025-12-01 13:18:40,607 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:18:40,609 [ERROR] Failed to generate report for bill 1935197: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 142845 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 142845 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:18:41,618 [INFO] Processing 1307/2605: Bill ID 1935040
2025-12-01 13:18:42,347 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:18:42,350 [ERROR] Failed to generate report for bill 1935040: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 142844 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 142844 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:18:43,359 [INFO] Skipping bill 1948521 - already processed (1308/2605)
2025-12-01 13:18:43,359 [INFO] Skipping bill 1977652 - already processed (1309/2605)
2025-12-01 13:18:43,359 [INFO] Processing 1310/2605: Bill ID 1934805
2025-12-01 13:18:43,883 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:18:43,884 [ERROR] Failed to generate report for bill 1934805: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 132143 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 132143 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:18:43,932 [INFO] Saved 2605 reports to data/bill_reports.json
2025-12-01 13:18:43,932 [INFO] Progress: 1310/2605 - Processed: 0, Skipped: 1252, Errors: 58
2025-12-01 13:18:44,938 [INFO] Skipping bill 1934970 - already processed (1311/2605)
2025-12-01 13:18:44,938 [INFO] Skipping bill 1934701 - already processed (1312/2605)
2025-12-01 13:18:44,938 [INFO] Skipping bill 1942260 - already processed (1313/2605)
2025-12-01 13:18:44,939 [INFO] Skipping bill 1917391 - already processed (1314/2605)
2025-12-01 13:18:44,939 [INFO] Processing 1315/2605: Bill ID 1935190
2025-12-01 13:18:47,979 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:18:47,983 [ERROR] Failed to generate report for bill 1935190: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1143342 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:18:48,992 [INFO] Processing 1316/2605: Bill ID 1934636
2025-12-01 13:18:50,743 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:18:50,744 [ERROR] Failed to generate report for bill 1934636: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 671567 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:18:51,753 [INFO] Processing 1317/2605: Bill ID 1935223
2025-12-01 13:18:53,714 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:18:53,716 [ERROR] Failed to generate report for bill 1935223: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 671570 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:18:54,727 [INFO] Processing 1318/2605: Bill ID 1934824
2025-12-01 13:18:57,708 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:18:57,712 [ERROR] Failed to generate report for bill 1934824: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1143344 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:18:58,722 [INFO] Processing 1319/2605: Bill ID 2052596
2025-12-01 13:19:02,520 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:19:02,523 [ERROR] Failed to generate report for bill 2052596: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1446920 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:19:03,531 [INFO] Skipping bill 1879932 - already processed (1320/2605)
2025-12-01 13:19:03,531 [INFO] Skipping bill 1875738 - already processed (1321/2605)
2025-12-01 13:19:03,533 [INFO] Skipping bill 1875815 - already processed (1322/2605)
2025-12-01 13:19:03,533 [INFO] Skipping bill 1701253 - already processed (1323/2605)
2025-12-01 13:19:03,533 [INFO] Skipping bill 1875615 - already processed (1324/2605)
2025-12-01 13:19:03,533 [INFO] Skipping bill 1754315 - already processed (1325/2605)
2025-12-01 13:19:03,533 [INFO] Skipping bill 1751005 - already processed (1326/2605)
2025-12-01 13:19:03,533 [INFO] Skipping bill 1875642 - already processed (1327/2605)
2025-12-01 13:19:03,533 [INFO] Skipping bill 1753811 - already processed (1328/2605)
2025-12-01 13:19:03,534 [INFO] Skipping bill 1752050 - already processed (1329/2605)
2025-12-01 13:19:03,534 [INFO] Skipping bill 1704591 - already processed (1330/2605)
2025-12-01 13:19:03,534 [INFO] Skipping bill 1748551 - already processed (1331/2605)
2025-12-01 13:19:03,535 [INFO] Skipping bill 1725321 - already processed (1332/2605)
2025-12-01 13:19:03,536 [INFO] Skipping bill 1725195 - already processed (1333/2605)
2025-12-01 13:19:03,536 [INFO] Skipping bill 2014434 - already processed (1334/2605)
2025-12-01 13:19:03,536 [INFO] Skipping bill 2014277 - already processed (1335/2605)
2025-12-01 13:19:03,536 [INFO] Skipping bill 2000124 - already processed (1336/2605)
2025-12-01 13:19:03,536 [INFO] Skipping bill 2022736 - already processed (1337/2605)
2025-12-01 13:19:03,536 [INFO] Skipping bill 2022881 - already processed (1338/2605)
2025-12-01 13:19:03,536 [INFO] Skipping bill 2014322 - already processed (1339/2605)
2025-12-01 13:19:03,536 [INFO] Skipping bill 2014068 - already processed (1340/2605)
2025-12-01 13:19:03,536 [INFO] Skipping bill 2005730 - already processed (1341/2605)
2025-12-01 13:19:03,536 [INFO] Skipping bill 2014594 - already processed (1342/2605)
2025-12-01 13:19:03,536 [INFO] Skipping bill 2013131 - already processed (1343/2605)
2025-12-01 13:19:03,536 [INFO] Skipping bill 2022220 - already processed (1344/2605)
2025-12-01 13:19:03,537 [INFO] Skipping bill 2008986 - already processed (1345/2605)
2025-12-01 13:19:03,537 [INFO] Skipping bill 2013796 - already processed (1346/2605)
2025-12-01 13:19:03,537 [INFO] Skipping bill 2014312 - already processed (1347/2605)
2025-12-01 13:19:03,537 [INFO] Skipping bill 2013903 - already processed (1348/2605)
2025-12-01 13:19:03,537 [INFO] Skipping bill 2013936 - already processed (1349/2605)
2025-12-01 13:19:03,537 [INFO] Skipping bill 2013868 - already processed (1350/2605)
2025-12-01 13:19:03,537 [INFO] Skipping bill 2014024 - already processed (1351/2605)
2025-12-01 13:19:03,537 [INFO] Skipping bill 2014377 - already processed (1352/2605)
2025-12-01 13:19:03,537 [INFO] Skipping bill 2017695 - already processed (1353/2605)
2025-12-01 13:19:03,537 [INFO] Skipping bill 2018632 - already processed (1354/2605)
2025-12-01 13:19:03,537 [INFO] Skipping bill 2022666 - already processed (1355/2605)
2025-12-01 13:19:03,538 [INFO] Skipping bill 2022828 - already processed (1356/2605)
2025-12-01 13:19:03,538 [INFO] Skipping bill 2015551 - already processed (1357/2605)
2025-12-01 13:19:03,538 [INFO] Skipping bill 2009244 - already processed (1358/2605)
2025-12-01 13:19:03,538 [INFO] Skipping bill 1969116 - already processed (1359/2605)
2025-12-01 13:19:03,538 [INFO] Skipping bill 2009761 - already processed (1360/2605)
2025-12-01 13:19:03,538 [INFO] Processing 1361/2605: Bill ID 2012916
2025-12-01 13:19:04,034 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:19:04,039 [ERROR] Failed to generate report for bill 2012916: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 131894 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 131894 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:19:05,047 [INFO] Skipping bill 1996111 - already processed (1362/2605)
2025-12-01 13:19:05,047 [INFO] Skipping bill 1656324 - already processed (1363/2605)
2025-12-01 13:19:05,047 [INFO] Skipping bill 1640560 - already processed (1364/2605)
2025-12-01 13:19:05,047 [INFO] Skipping bill 1644790 - already processed (1365/2605)
2025-12-01 13:19:05,047 [INFO] Skipping bill 1908973 - already processed (1366/2605)
2025-12-01 13:19:05,047 [INFO] Skipping bill 1930471 - already processed (1367/2605)
2025-12-01 13:19:05,047 [INFO] Skipping bill 1916131 - already processed (1368/2605)
2025-12-01 13:19:05,047 [INFO] Skipping bill 1916897 - already processed (1369/2605)
2025-12-01 13:19:05,048 [INFO] Skipping bill 1930219 - already processed (1370/2605)
2025-12-01 13:19:05,048 [INFO] Skipping bill 1916725 - already processed (1371/2605)
2025-12-01 13:19:05,048 [INFO] Skipping bill 1916697 - already processed (1372/2605)
2025-12-01 13:19:05,048 [INFO] Skipping bill 1921549 - already processed (1373/2605)
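Every failure in this log has the same shape: the serialized bill JSON handed to `chain.invoke` at `generate_reports.py:109` exceeds the model's 128,000-token context window, sometimes by more than an order of magnitude (2,157,293 tokens for bill 1710984). A minimal guard would clip the serialized bill before it reaches the chain. The sketch below is illustrative only: the `truncate_bill_json` helper and the ~4-characters-per-token ratio are assumptions, not part of `generate_reports.py`, and a real tokenizer (e.g. tiktoken) would count tokens exactly.

```python
import json

# Assumed budget: stay well under the 128,000-token limit seen in the log,
# leaving headroom for the prompt template and the model's reply.
PROMPT_BUDGET_TOKENS = 100_000
CHARS_PER_TOKEN = 4  # crude approximation; a tokenizer would be exact

def truncate_bill_json(bill: dict) -> str:
    """Serialize a bill and clip it to an approximate token budget.

    Clipping the serialized string keeps the request under budget at the
    cost of an invalid JSON tail; acceptable for a summarization prompt.
    """
    bill_json = json.dumps(bill)
    max_chars = PROMPT_BUDGET_TOKENS * CHARS_PER_TOKEN
    if len(bill_json) <= max_chars:
        return bill_json
    return bill_json[:max_chars]

# A bill with ~10 MB of text (like the oversized ones in the log) gets clipped:
long_bill = {"bill_id": 1710984, "texts": "x" * 10_000_000}
print(len(truncate_bill_json(long_bill)))  # 400000
```

A tighter variant would drop or summarize the largest fields (e.g. full bill texts) instead of clipping blindly, so the head of the prompt is not all that survives.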
2025-12-01 13:19:05,048 [INFO] Skipping bill 1916032 - already processed (1374/2605)
2025-12-01 13:19:05,048 [INFO] Skipping bill 1915939 - already processed (1375/2605)
2025-12-01 13:19:05,052 [INFO] Skipping bill 1899315 - already processed (1376/2605)
2025-12-01 13:19:05,052 [INFO] Skipping bill 1930747 - already processed (1377/2605)
2025-12-01 13:19:05,052 [INFO] Skipping bill 1898936 - already processed (1378/2605)
2025-12-01 13:19:05,052 [INFO] Skipping bill 1828241 - already processed (1379/2605)
2025-12-01 13:19:05,052 [INFO] Skipping bill 1784887 - already processed (1380/2605)
2025-12-01 13:19:05,052 [INFO] Processing 1381/2605: Bill ID 1710984
2025-12-01 13:19:10,405 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:19:10,408 [ERROR] Failed to generate report for bill 1710984: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 2157293 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:19:11,420 [INFO] Processing 1382/2605: Bill ID 1710996
2025-12-01 13:19:14,296 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:19:14,298 [ERROR] Failed to generate report for bill 1710996: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1053567 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:19:15,308 [INFO] Processing 1383/2605: Bill ID 1659671
2025-12-01 13:19:18,497 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:19:18,501 [ERROR] Failed to generate report for bill 1659671: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1053812 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:19:19,510 [INFO] Skipping bill 2046561 - already processed (1384/2605)
2025-12-01 13:19:19,511 [INFO] Skipping bill 2018937 - already processed (1385/2605)
2025-12-01 13:19:19,511 [INFO] Skipping bill 2046538 - already processed (1386/2605)
2025-12-01 13:19:19,511 [INFO] Skipping bill 2038933 - already processed (1387/2605)
2025-12-01 13:19:19,511 [INFO] Skipping bill 2019064 - already processed (1388/2605)
2025-12-01 13:19:19,511 [INFO] Skipping bill 2051853 - already processed (1389/2605)
2025-12-01 13:19:19,511 [INFO] Skipping bill 1973495 - already processed (1390/2605)
2025-12-01 13:19:19,512 [INFO] Skipping bill 2044900 - already processed (1391/2605)
2025-12-01 13:19:19,512 [INFO] Skipping bill 2036911 - already processed (1392/2605)
2025-12-01 13:19:19,512 [INFO] Skipping bill 1956347 - already processed (1393/2605)
2025-12-01 13:19:19,512 [INFO] Skipping bill 2015680 - already processed (1394/2605)
2025-12-01 13:19:19,512 [INFO] Skipping bill 2035837 - already processed (1395/2605)
2025-12-01 13:19:19,512 [INFO] Skipping bill 2052361 - already processed (1396/2605)
2025-12-01 13:19:19,513 [INFO] Skipping bill 2053186 - already processed (1397/2605)
2025-12-01 13:19:19,513 [INFO] Skipping bill 1956501 - already processed (1398/2605)
2025-12-01 13:19:19,513 [INFO] Processing 1399/2605: Bill ID 1966320
2025-12-01 13:19:24,445 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:19:24,448 [ERROR] Failed to generate report for bill 1966320: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1949605 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:19:25,458 [INFO] Processing 1400/2605: Bill ID 2044413
2025-12-01 13:19:26,280 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:19:26,282 [ERROR] Failed to generate report for bill 2044413: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 281182 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:19:26,336 [INFO] Saved 2605 reports to data/bill_reports.json
2025-12-01 13:19:26,337 [INFO] Progress: 1400/2605 - Processed: 0, Skipped: 1331, Errors: 69
2025-12-01 13:19:27,342 [INFO] Processing 1401/2605: Bill ID 2031116
2025-12-01 13:19:28,326 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:19:28,329 [ERROR] Failed to generate report for bill 2031116: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 344621 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:19:29,339 [INFO] Skipping bill 1820171 - already processed (1402/2605)
2025-12-01 13:19:29,340 [INFO] Skipping bill 1820684 - already processed (1403/2605)
2025-12-01 13:19:29,340 [INFO] Skipping bill 1820075 - already processed (1404/2605)
2025-12-01 13:19:29,340 [INFO] Skipping bill 1820478 - already processed (1405/2605)
2025-12-01 13:19:29,340 [INFO] Skipping bill 1820697 - already processed (1406/2605)
2025-12-01 13:19:29,341 [INFO] Skipping bill 1821348 - already processed (1407/2605)
2025-12-01 13:19:29,341 [INFO] Skipping bill 1819421 - already processed (1408/2605)
2025-12-01 13:19:29,341 [INFO] Skipping bill 1820795 - already processed (1409/2605)
2025-12-01 13:19:29,341 [INFO] Skipping bill 1814318 - already processed (1410/2605)
2025-12-01 13:19:29,341 [INFO] Skipping bill 1814441 - already processed (1411/2605)
2025-12-01 13:19:29,341 [INFO] Skipping bill 1791289 - already processed (1412/2605)
2025-12-01 13:19:29,342 [INFO] Skipping bill 1789468 - already processed (1413/2605)
2025-12-01 13:19:29,342 [INFO] Skipping bill 1924199 - already processed (1414/2605)
1924199 - already processed (1414/2605) 2025-12-01 13:19:29,342 [INFO] Skipping bill 1920208 - already processed (1415/2605) 2025-12-01 13:19:29,342 [INFO] Skipping bill 1920320 - already processed (1416/2605) 2025-12-01 13:19:29,342 [INFO] Skipping bill 1923586 - already processed (1417/2605) 2025-12-01 13:19:29,342 [INFO] Skipping bill 1918327 - already processed (1418/2605) 2025-12-01 13:19:29,342 [INFO] Skipping bill 1922702 - already processed (1419/2605) 2025-12-01 13:19:29,343 [INFO] Skipping bill 1923122 - already processed (1420/2605) 2025-12-01 13:19:29,343 [INFO] Skipping bill 1924269 - already processed (1421/2605) 2025-12-01 13:19:29,343 [INFO] Skipping bill 1925220 - already processed (1422/2605) 2025-12-01 13:19:29,343 [INFO] Skipping bill 1924640 - already processed (1423/2605) 2025-12-01 13:19:29,343 [INFO] Skipping bill 1924912 - already processed (1424/2605) 2025-12-01 13:19:29,343 [INFO] Skipping bill 1900252 - already processed (1425/2605) 2025-12-01 13:19:29,343 [INFO] Skipping bill 2018241 - already processed (1426/2605) 2025-12-01 13:19:29,343 [INFO] Skipping bill 1920876 - already processed (1427/2605) 2025-12-01 13:19:29,344 [INFO] Skipping bill 1920720 - already processed (1428/2605) 2025-12-01 13:19:29,344 [INFO] Skipping bill 1925546 - already processed (1429/2605) 2025-12-01 13:19:29,344 [INFO] Skipping bill 1903378 - already processed (1430/2605) 2025-12-01 13:19:29,344 [INFO] Skipping bill 1921990 - already processed (1431/2605) 2025-12-01 13:19:29,345 [INFO] Skipping bill 1922805 - already processed (1432/2605) 2025-12-01 13:19:29,346 [INFO] Skipping bill 1922842 - already processed (1433/2605) 2025-12-01 13:19:29,346 [INFO] Skipping bill 1836006 - already processed (1434/2605) 2025-12-01 13:19:29,346 [INFO] Skipping bill 1836109 - already processed (1435/2605) 2025-12-01 13:19:29,346 [INFO] Skipping bill 1843504 - already processed (1436/2605) 2025-12-01 13:19:29,346 [INFO] Skipping bill 1973003 - already processed (1437/2605) 
2025-12-01 13:19:29,346 [INFO] Skipping bill 2009609 - already processed (1438/2605)
2025-12-01 13:19:29,346 [INFO] Skipping bill 1986214 - already processed (1439/2605)
2025-12-01 13:19:29,347 [INFO] Skipping bill 1912749 - already processed (1440/2605)
2025-12-01 13:19:29,347 [INFO] Skipping bill 1914095 - already processed (1441/2605)
2025-12-01 13:19:29,347 [INFO] Skipping bill 1914598 - already processed (1442/2605)
2025-12-01 13:19:29,347 [INFO] Skipping bill 1913104 - already processed (1443/2605)
2025-12-01 13:19:29,347 [INFO] Skipping bill 1914569 - already processed (1444/2605)
2025-12-01 13:19:29,347 [INFO] Skipping bill 1930373 - already processed (1445/2605)
2025-12-01 13:19:29,347 [INFO] Skipping bill 1982090 - already processed (1446/2605)
2025-12-01 13:19:29,348 [INFO] Skipping bill 1914274 - already processed (1447/2605)
2025-12-01 13:19:29,348 [INFO] Skipping bill 1982120 - already processed (1448/2605)
2025-12-01 13:19:29,348 [INFO] Skipping bill 1773806 - already processed (1449/2605)
2025-12-01 13:19:29,348 [INFO] Skipping bill 1880673 - already processed (1450/2605)
2025-12-01 13:19:29,348 [INFO] Skipping bill 1724997 - already processed (1451/2605)
2025-12-01 13:19:29,348 [INFO] Skipping bill 1775230 - already processed (1452/2605)
2025-12-01 13:19:29,348 [INFO] Skipping bill 1889846 - already processed (1453/2605)
2025-12-01 13:19:29,349 [INFO] Skipping bill 1773451 - already processed (1454/2605)
2025-12-01 13:19:29,349 [INFO] Skipping bill 1759469 - already processed (1455/2605)
2025-12-01 13:19:29,349 [INFO] Skipping bill 1777407 - already processed (1456/2605)
2025-12-01 13:19:29,349 [INFO] Skipping bill 1880554 - already processed (1457/2605)
2025-12-01 13:19:29,349 [INFO] Skipping bill 1854268 - already processed (1458/2605)
2025-12-01 13:19:29,349 [INFO] Skipping bill 1771135 - already processed (1459/2605)
2025-12-01 13:19:29,349 [INFO] Skipping bill 1830478 - already processed (1460/2605)
2025-12-01 13:19:29,350 [INFO] Skipping bill 1780085 - already processed (1461/2605)
2025-12-01 13:19:29,350 [INFO] Skipping bill 1858003 - already processed (1462/2605)
2025-12-01 13:19:29,350 [INFO] Skipping bill 1880735 - already processed (1463/2605)
2025-12-01 13:19:29,350 [INFO] Skipping bill 1882950 - already processed (1464/2605)
2025-12-01 13:19:29,350 [INFO] Skipping bill 1878925 - already processed (1465/2605)
2025-12-01 13:19:29,350 [INFO] Skipping bill 1878252 - already processed (1466/2605)
2025-12-01 13:19:29,350 [INFO] Skipping bill 1884263 - already processed (1467/2605)
2025-12-01 13:19:29,351 [INFO] Skipping bill 1873862 - already processed (1468/2605)
2025-12-01 13:19:29,351 [INFO] Skipping bill 1882265 - already processed (1469/2605)
2025-12-01 13:19:29,351 [INFO] Skipping bill 1771247 - already processed (1470/2605)
2025-12-01 13:19:29,351 [INFO] Skipping bill 1836612 - already processed (1471/2605)
2025-12-01 13:19:29,351 [INFO] Skipping bill 1820748 - already processed (1472/2605)
2025-12-01 13:19:29,351 [INFO] Skipping bill 1886418 - already processed (1473/2605)
2025-12-01 13:19:29,351 [INFO] Skipping bill 1769931 - already processed (1474/2605)
2025-12-01 13:19:29,351 [INFO] Skipping bill 1740020 - already processed (1475/2605)
2025-12-01 13:19:29,351 [INFO] Skipping bill 1878961 - already processed (1476/2605)
2025-12-01 13:19:29,351 [INFO] Skipping bill 1768592 - already processed (1477/2605)
2025-12-01 13:19:29,351 [INFO] Skipping bill 2045757 - already processed (1478/2605)
2025-12-01 13:19:29,351 [INFO] Skipping bill 2030536 - already processed (1479/2605)
2025-12-01 13:19:29,351 [INFO] Skipping bill 2047301 - already processed (1480/2605)
2025-12-01 13:19:29,351 [INFO] Skipping bill 2039357 - already processed (1481/2605)
2025-12-01 13:19:29,351 [INFO] Skipping bill 2034685 - already processed (1482/2605)
2025-12-01 13:19:29,351 [INFO] Skipping bill 2037642 - already processed (1483/2605)
2025-12-01 13:19:29,351 [INFO] Skipping bill 2022168 - already processed (1484/2605)
2025-12-01 13:19:29,351 [INFO] Skipping bill 2052644 - already processed (1485/2605)
2025-12-01 13:19:29,351 [INFO] Skipping bill 2051282 - already processed (1486/2605)
2025-12-01 13:19:29,351 [INFO] Skipping bill 1937863 - already processed (1487/2605)
2025-12-01 13:19:29,351 [INFO] Skipping bill 2043639 - already processed (1488/2605)
2025-12-01 13:19:29,351 [INFO] Skipping bill 2012593 - already processed (1489/2605)
2025-12-01 13:19:29,351 [INFO] Skipping bill 1991206 - already processed (1490/2605)
2025-12-01 13:19:29,351 [INFO] Skipping bill 1947924 - already processed (1491/2605)
2025-12-01 13:19:29,351 [INFO] Skipping bill 2012408 - already processed (1492/2605)
2025-12-01 13:19:29,351 [INFO] Skipping bill 2021116 - already processed (1493/2605)
2025-12-01 13:19:29,351 [INFO] Skipping bill 1973751 - already processed (1494/2605)
2025-12-01 13:19:29,351 [INFO] Skipping bill 2045246 - already processed (1495/2605)
2025-12-01 13:19:29,351 [INFO] Skipping bill 1910852 - already processed (1496/2605)
2025-12-01 13:19:29,352 [INFO] Skipping bill 1956391 - already processed (1497/2605)
2025-12-01 13:19:29,352 [INFO] Skipping bill 2023404 - already processed (1498/2605)
2025-12-01 13:19:29,352 [INFO] Skipping bill 2035307 - already processed (1499/2605)
2025-12-01 13:19:29,352 [INFO] Skipping bill 1944456 - already processed (1500/2605)
2025-12-01 13:19:29,352 [INFO] Skipping bill 2041064 - already processed (1501/2605)
2025-12-01 13:19:29,352 [INFO] Skipping bill 2039278 - already processed (1502/2605)
2025-12-01 13:19:29,352 [INFO] Skipping bill 2041823 - already processed (1503/2605)
2025-12-01 13:19:29,352 [INFO] Skipping bill 1946034 - already processed (1504/2605)
2025-12-01 13:19:29,352 [INFO] Skipping bill 2038442 - already processed (1505/2605)
2025-12-01 13:19:29,352 [INFO] Skipping bill 1905925 - already processed (1506/2605)
2025-12-01 13:19:29,352 [INFO] Processing 1507/2605: Bill ID 2041076
2025-12-01 13:19:29,861 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:19:29,863 [ERROR] Failed to generate report for bill 2041076: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 136745 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 136745 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:19:30,869 [INFO] Processing 1508/2605: Bill ID 2037948
2025-12-01 13:19:31,603 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:19:31,605 [ERROR] Failed to generate report for bill 2037948: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 136856 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 136856 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:19:32,618 [INFO] Skipping bill 1757100 - already processed (1509/2605)
2025-12-01 13:19:32,618 [INFO] Skipping bill 1766918 - already processed (1510/2605)
2025-12-01 13:19:32,619 [INFO] Skipping bill 1691606 - already processed (1511/2605)
2025-12-01 13:19:32,619 [INFO] Skipping bill 1757087 - already processed (1512/2605)
2025-12-01 13:19:32,619 [INFO] Skipping bill 1691984 - already processed (1513/2605)
2025-12-01 13:19:32,619 [INFO] Skipping bill 1724146 - already processed (1514/2605)
2025-12-01 13:19:32,619 [INFO] Skipping bill 1811367 - already processed (1515/2605)
2025-12-01 13:19:32,620 [INFO] Skipping bill 1864559 - already processed (1516/2605)
2025-12-01 13:19:32,620 [INFO] Skipping bill 1833383 - already processed (1517/2605)
2025-12-01 13:19:32,620 [INFO] Skipping bill 1839979 - already processed (1518/2605)
2025-12-01 13:19:32,620 [INFO] Skipping bill 1863636 - already processed (1519/2605)
2025-12-01 13:19:32,620 [INFO] Skipping bill 1866932 - already processed (1520/2605)
2025-12-01 13:19:32,620 [INFO] Skipping bill 1829566 - already processed (1521/2605)
2025-12-01 13:19:32,620 [INFO] Skipping bill 1858179 - already processed (1522/2605)
2025-12-01 13:19:32,621 [INFO] Skipping bill 1857154 - already processed (1523/2605)
2025-12-01 13:19:32,621 [INFO] Skipping bill 1866872 - already processed (1524/2605)
2025-12-01 13:19:32,621 [INFO] Skipping bill 1844272 - already processed (1525/2605)
2025-12-01 13:19:32,621 [INFO] Skipping bill 1875576 - already processed (1526/2605)
2025-12-01 13:19:32,621 [INFO] Skipping bill 1875933 - already processed (1527/2605)
2025-12-01 13:19:32,621 [INFO] Skipping bill 1844730 - already processed (1528/2605)
2025-12-01 13:19:32,621 [INFO] Skipping bill 1858971 - already processed (1529/2605)
2025-12-01 13:19:32,621 [INFO] Skipping bill 1870027 - already processed (1530/2605)
2025-12-01 13:19:32,622 [INFO] Skipping bill 1994761 - already processed (1531/2605)
2025-12-01 13:19:32,622 [INFO] Skipping bill 1935080 - already processed (1532/2605)
2025-12-01 13:19:32,622 [INFO] Skipping bill 1945535 - already processed (1533/2605)
2025-12-01 13:19:32,622 [INFO] Skipping bill 1979504 - already processed (1534/2605)
2025-12-01 13:19:32,622 [INFO] Skipping bill 1937835 - already processed (1535/2605)
2025-12-01 13:19:32,622 [INFO] Skipping bill 1918971 - already processed (1536/2605)
2025-12-01 13:19:32,622 [INFO] Skipping bill 1986390 - already processed (1537/2605)
2025-12-01 13:19:32,623 [INFO] Skipping bill 1945988 - already processed (1538/2605)
2025-12-01 13:19:32,623 [INFO] Skipping bill 1940828 - already processed (1539/2605)
2025-12-01 13:19:32,623 [INFO] Skipping bill 1986602 - already processed (1540/2605)
2025-12-01 13:19:32,623 [INFO] Skipping bill 1988979 - already processed (1541/2605)
2025-12-01 13:19:32,623 [INFO] Skipping bill 2008057 - already processed (1542/2605)
2025-12-01 13:19:32,623 [INFO] Skipping bill 1986556 - already processed (1543/2605)
2025-12-01 13:19:32,623 [INFO] Skipping bill 1986569 - already processed (1544/2605)
2025-12-01 13:19:32,624 [INFO] Skipping bill 1988788 - already processed (1545/2605)
2025-12-01 13:19:32,624 [INFO] Skipping bill 2028551 - already processed (1546/2605)
2025-12-01 13:19:32,624 [INFO] Skipping bill 1937524 - already processed (1547/2605)
2025-12-01 13:19:32,624 [INFO] Skipping bill 1966994 - already processed (1548/2605)
2025-12-01 13:19:32,624 [INFO] Skipping bill 2030023 - already processed (1549/2605)
2025-12-01 13:19:32,624 [INFO] Skipping bill 1988713 - already processed (1550/2605)
2025-12-01 13:19:32,624 [INFO] Skipping bill 1988914 - already processed (1551/2605)
2025-12-01 13:19:32,624 [INFO] Skipping bill 2030055 - already processed (1552/2605)
2025-12-01 13:19:32,625 [INFO] Skipping bill 1666116 - already processed (1553/2605)
2025-12-01 13:19:32,625 [INFO] Skipping bill 1792231 - already processed (1554/2605)
2025-12-01 13:19:32,625 [INFO] Skipping bill 1802681 - already processed (1555/2605)
2025-12-01 13:19:32,625 [INFO] Skipping bill 1921522 - already processed (1556/2605)
2025-12-01 13:19:32,625 [INFO] Skipping bill 1999928 - already processed (1557/2605)
2025-12-01 13:19:32,625 [INFO] Skipping bill 2022730 - already processed (1558/2605)
2025-12-01 13:19:32,625 [INFO] Skipping bill 2024009 - already processed (1559/2605)
2025-12-01 13:19:32,626 [INFO] Skipping bill 1895318 - already processed (1560/2605)
2025-12-01 13:19:32,626 [INFO] Skipping bill 1944028 - already processed (1561/2605)
2025-12-01 13:19:32,626 [INFO] Skipping bill 1954350 - already processed (1562/2605)
2025-12-01 13:19:32,626 [INFO] Skipping bill 1954733 - already processed (1563/2605)
2025-12-01 13:19:32,626 [INFO] Skipping bill 2029172 - already processed (1564/2605)
2025-12-01 13:19:32,626 [INFO] Skipping bill 1944096 - already processed (1565/2605)
2025-12-01 13:19:32,626 [INFO] Skipping bill 1895182 - already processed (1566/2605)
2025-12-01 13:19:32,626 [INFO] Skipping bill 1919972 - already processed (1567/2605)
2025-12-01 13:19:32,627 [INFO] Skipping bill 1895637 - already processed (1568/2605)
2025-12-01 13:19:32,627 [INFO] Skipping bill 1819620 - already processed (1569/2605)
2025-12-01 13:19:32,627 [INFO] Skipping bill 1811138 - already processed (1570/2605)
2025-12-01 13:19:32,627 [INFO] Skipping bill 1948251 - already processed (1571/2605)
2025-12-01 13:19:32,627 [INFO] Skipping bill 1901594 - already processed (1572/2605)
2025-12-01 13:19:32,627 [INFO] Skipping bill 1833554 - already processed (1573/2605)
2025-12-01 13:19:32,627 [INFO] Skipping bill 1833050 - already processed (1574/2605)
2025-12-01 13:19:32,628 [INFO] Skipping bill 1830912 - already processed (1575/2605)
2025-12-01 13:19:32,628 [INFO] Skipping bill 1834207 - already processed (1576/2605)
2025-12-01 13:19:32,628 [INFO] Skipping bill 1795187 - already processed (1577/2605)
2025-12-01 13:19:32,628 [INFO] Skipping bill 1828458 - already processed (1578/2605)
2025-12-01 13:19:32,628 [INFO] Skipping bill 1808304 - already processed (1579/2605)
2025-12-01 13:19:32,628 [INFO] Skipping bill 1834240 - already processed (1580/2605)
2025-12-01 13:19:32,628 [INFO] Skipping bill 1831671 - already processed (1581/2605)
2025-12-01 13:19:32,628 [INFO] Skipping bill 1832378 - already processed (1582/2605)
2025-12-01 13:19:32,628 [INFO] Skipping bill 1828742 - already processed (1583/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1833429 - already processed (1584/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1828784 - already processed (1585/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1825620 - already processed (1586/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1799785 - already processed (1587/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1832466 - already processed (1588/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1831669 - already processed (1589/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1832147 - already processed (1590/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1831971 - already processed (1591/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1832437 - already processed (1592/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1828244 - already processed (1593/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1833731 - already processed (1594/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1833264 - already processed (1595/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1833393 - already processed (1596/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1825869 - already processed (1597/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1825916 - already processed (1598/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1873399 - already processed (1599/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1826595 - already processed (1600/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1832185 - already processed (1601/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1832434 - already processed (1602/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1831535 - already processed (1603/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1834179 - already processed (1604/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1834106 - already processed (1605/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1946381 - already processed (1606/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1953992 - already processed (1607/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1948149 - already processed (1608/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1959470 - already processed (1609/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1946783 - already processed (1610/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1955110 - already processed (1611/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1959302 - already processed (1612/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1959458 - already processed (1613/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1960722 - already processed (1614/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1951003 - already processed (1615/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1954702 - already processed (1616/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1954311 - already processed (1617/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1959312 - already processed (1618/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1959377 - already processed (1619/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1954015 - already processed (1620/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1954357 - already processed (1621/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1944274 - already processed (1622/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1944487 - already processed (1623/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1959723 - already processed (1624/2605)
2025-12-01 13:19:32,629 [INFO] Skipping bill 1960832 - already processed (1625/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1971015 - already processed (1626/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1971366 - already processed (1627/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1733375 - already processed (1628/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1700527 - already processed (1629/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1719413 - already processed (1630/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1694457 - already processed (1631/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1744060 - already processed (1632/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1727826 - already processed (1633/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1743424 - already processed (1634/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1732248 - already processed (1635/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1731629 - already processed (1636/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1769317 - already processed (1637/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1747471 - already processed (1638/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1747557 - already processed (1639/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1710763 - already processed (1640/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1782999 - already processed (1641/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1781207 - already processed (1642/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1726065 - already processed (1643/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1898826 - already processed (1644/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1992725 - already processed (1645/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1988473 - already processed (1646/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1970030 - already processed (1647/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 2007109 - already processed (1648/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1891805 - already processed (1649/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1949957 - already processed (1650/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1990181 - already processed (1651/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1991711 - already processed (1652/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1897779 - already processed (1653/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 2006851 - already processed (1654/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1975361 - already processed (1655/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1987235 - already processed (1656/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 2007736 - already processed (1657/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 2000200 - already processed (1658/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1923991 - already processed (1659/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1892858 - already processed (1660/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 2000248 - already processed (1661/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1971072 - already processed (1662/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 2008077 - already processed (1663/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1907668 - already processed (1664/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1962916 - already processed (1665/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 2005286 - already processed (1666/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 2005181 - already processed (1667/2605)
2025-12-01 13:19:32,630 [INFO] Skipping bill 1891063 - already processed (1668/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 1900186 - already processed (1669/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 1994657 - already processed (1670/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 2008307 - already processed (1671/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 1991260 - already processed (1672/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 2006384 - already processed (1673/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 2002051 - already processed (1674/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 1973236 - already processed (1675/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 2007316 - already processed (1676/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 1890894 - already processed (1677/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 2000178 - already processed (1678/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 1982970 - already processed (1679/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 2006497 - already processed (1680/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 1890775 - already processed (1681/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 1892224 - already processed (1682/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 1954141 - already processed (1683/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 2006579 - already processed (1684/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 2006128 - already processed (1685/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 2024097 - already processed (1686/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 2034878 - already processed (1687/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 1891396 - already processed (1688/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 2040103 - already processed (1689/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 2041986 - already processed (1690/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 1987712 - already processed (1691/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 2005998 - already processed (1692/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 2008318 - already processed (1693/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 1892843 - already processed (1694/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 1946392 - already processed (1695/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 1971169 - already processed (1696/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 1890786 - already processed (1697/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 1891256 - already processed (1698/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 1942882 - already processed (1699/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 2031981 - already processed (1700/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 2033602 - already processed (1701/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 2034279 - already processed (1702/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 1974704 - already processed (1703/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 1950849 - already processed (1704/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 1975022 - already processed (1705/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 1981850 - already processed (1706/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 1890492 - already processed (1707/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 2020803 - already processed (1708/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 2005343 - already processed (1709/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 1890466 - already processed (1710/2605)
2025-12-01 13:19:32,631 [INFO] Skipping bill 1975612 - already processed (1711/2605)
2025-12-01 13:19:32,632 [INFO] Skipping bill 1994176 - already processed (1712/2605)
2025-12-01 13:19:32,632 [INFO] Skipping bill 1990550 - already processed (1713/2605)
2025-12-01 13:19:32,632 [INFO] Skipping bill 1891411 - already processed (1714/2605)
2025-12-01 13:19:32,632 [INFO] Skipping bill 1983542 - already processed (1715/2605)
2025-12-01 13:19:32,632 [INFO] Skipping bill 1999872 - already processed (1716/2605)
2025-12-01 13:19:32,632 [INFO] Skipping bill 2007449 - already processed (1717/2605)
2025-12-01 13:19:32,632 [INFO] Skipping bill 2039972 - already processed (1718/2605)
2025-12-01 13:19:32,632 [INFO] Skipping bill 1892428 - already processed (1719/2605)
2025-12-01 13:19:32,632 [INFO] Skipping bill 1891501 - already processed (1720/2605)
2025-12-01 13:19:32,632 [INFO] Skipping bill 2007840 - already processed (1721/2605)
2025-12-01 13:19:32,632 [INFO] Skipping bill 1976041 - already processed (1722/2605)
2025-12-01 13:19:32,632 [INFO] Skipping bill 1992763 - already processed (1723/2605)
2025-12-01 13:19:32,632 [INFO] Skipping bill 1993770 - already processed (1724/2605)
2025-12-01 13:19:32,632 [INFO] Skipping bill 2007872 - already processed (1725/2605)
2025-12-01 13:19:32,632 [INFO] Skipping bill 1936766 - already processed (1726/2605)
2025-12-01 13:19:32,632 [INFO] Skipping bill 1676049 - already processed (1727/2605)
2025-12-01 13:19:32,632 [INFO] Processing 1728/2605: Bill ID 1704512
2025-12-01 13:19:33,239 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:19:33,241 [ERROR] Failed to generate report for bill 1704512: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 178116 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 178116 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:19:34,248 [INFO] Skipping bill 1828750 - already processed (1729/2605) 2025-12-01 13:19:34,249 [INFO] Skipping bill 1823594 - already processed (1730/2605) 2025-12-01 13:19:34,249 [INFO] Skipping bill 1820331 - already processed (1731/2605) 2025-12-01 13:19:34,249 [INFO] Skipping bill 1810219 - already processed (1732/2605) 2025-12-01 13:19:34,249 [INFO] Skipping bill 1813477 - already processed (1733/2605) 2025-12-01 13:19:34,249 [INFO] Skipping bill 1858814 - already processed (1734/2605) 2025-12-01 13:19:34,249 [INFO] Skipping bill 1882805 - already processed (1735/2605) 2025-12-01 13:19:34,249 [INFO] Skipping bill 1811586 - already processed (1736/2605) 2025-12-01 13:19:34,249 [INFO] Skipping bill 1794392 - already processed (1737/2605) 2025-12-01 13:19:34,249 [INFO] Processing 1738/2605: Bill ID 1844899 2025-12-01 13:19:34,749 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:19:34,751 [ERROR] Failed to generate report for bill 1844899: Error code: 400 - {'error': {'message': 
"This model's maximum context length is 128000 tokens. However, your messages resulted in 150202 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 150202 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:19:35,759 [INFO] Skipping bill 1954171 - already processed (1739/2605) 2025-12-01 13:19:35,760 [INFO] Skipping bill 1911041 - already processed (1740/2605) 2025-12-01 13:19:35,760 [INFO] Skipping bill 1963098 - already processed (1741/2605) 2025-12-01 13:19:35,760 [INFO] Skipping bill 1943827 - already processed (1742/2605) 2025-12-01 13:19:35,761 [INFO] Skipping bill 1968353 - already processed (1743/2605) 2025-12-01 13:19:35,761 [INFO] Skipping bill 1981617 - already processed (1744/2605) 2025-12-01 13:19:35,761 [INFO] Skipping bill 1995499 - already processed (1745/2605) 2025-12-01 13:19:35,761 [INFO] Skipping bill 1954569 - already processed (1746/2605) 2025-12-01 13:19:35,762 [INFO] Skipping bill 1950395 - already processed (1747/2605) 2025-12-01 13:19:35,762 [INFO] Skipping bill 1989323 - already processed (1748/2605) 2025-12-01 13:19:35,762 [INFO] Skipping bill 1904576 - already processed (1749/2605) 2025-12-01 13:19:35,762 [INFO] Skipping bill 1968434 - already processed (1750/2605) 2025-12-01 13:19:35,762 [INFO] Processing 
1751/2605: Bill ID 2046115 2025-12-01 13:19:36,826 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:19:36,829 [ERROR] Failed to generate report for bill 2046115: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 321718 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... 
**kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... **kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return 
self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 321718 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:19:37,839 [INFO] Skipping bill 1912099 - already processed (1752/2605) 2025-12-01 13:19:37,840 [INFO] Skipping bill 1946923 - already processed (1753/2605) 2025-12-01 13:19:37,840 [INFO] Processing 1754/2605: Bill ID 2046119 2025-12-01 13:19:38,639 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:19:38,642 [ERROR] Failed to generate report for bill 2046119: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 259421 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 259421 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:19:39,651 [INFO] Processing 1755/2605: Bill ID 1897901 2025-12-01 13:19:40,994 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:19:40,997 [ERROR] Failed to generate report for bill 1897901: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 499565 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 499565 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:19:42,007 [INFO] Processing 1756/2605: Bill ID 1948482 2025-12-01 13:19:42,970 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:19:42,972 [ERROR] Failed to generate report for bill 1948482: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 283315 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 283315 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:19:43,978 [INFO] Skipping bill 1800317 - already processed (1757/2605) 2025-12-01 13:19:43,979 [INFO] Skipping bill 1800156 - already processed (1758/2605) 2025-12-01 13:19:43,979 [INFO] Skipping bill 1854552 - already processed (1759/2605) 2025-12-01 13:19:43,979 [INFO] Skipping bill 1680053 - already processed (1760/2605) 2025-12-01 13:19:43,979 [INFO] Skipping bill 1682772 - already processed (1761/2605) 2025-12-01 13:19:43,979 [INFO] Skipping bill 1737434 - already processed (1762/2605) 2025-12-01 13:19:43,979 [INFO] Skipping bill 1981655 - already processed (1763/2605) 2025-12-01 13:19:43,980 [INFO] Skipping bill 1982851 - already processed (1764/2605) 2025-12-01 13:19:43,980 [INFO] Skipping bill 1934587 - already processed (1765/2605) 2025-12-01 13:19:43,980 [INFO] Skipping bill 1981303 - already processed (1766/2605) 2025-12-01 13:19:43,980 [INFO] Skipping bill 1983676 - already processed (1767/2605) 2025-12-01 13:19:43,980 [INFO] Skipping bill 1969845 - already processed (1768/2605) 2025-12-01 13:19:43,980 [INFO] Skipping bill 
1983355 - already processed (1769/2605) 2025-12-01 13:19:43,980 [INFO] Skipping bill 2009795 - already processed (1770/2605) 2025-12-01 13:19:43,981 [INFO] Skipping bill 1973485 - already processed (1771/2605) 2025-12-01 13:19:43,981 [INFO] Skipping bill 1967494 - already processed (1772/2605) 2025-12-01 13:19:43,981 [INFO] Skipping bill 1973283 - already processed (1773/2605) 2025-12-01 13:19:43,981 [INFO] Skipping bill 1639846 - already processed (1774/2605) 2025-12-01 13:19:43,981 [INFO] Skipping bill 1646426 - already processed (1775/2605) 2025-12-01 13:19:43,981 [INFO] Skipping bill 1673591 - already processed (1776/2605) 2025-12-01 13:19:43,981 [INFO] Skipping bill 1639749 - already processed (1777/2605) 2025-12-01 13:19:43,981 [INFO] Skipping bill 1655379 - already processed (1778/2605) 2025-12-01 13:19:43,982 [INFO] Skipping bill 1630766 - already processed (1779/2605) 2025-12-01 13:19:43,982 [INFO] Skipping bill 1630878 - already processed (1780/2605) 2025-12-01 13:19:43,982 [INFO] Skipping bill 1630898 - already processed (1781/2605) 2025-12-01 13:19:43,982 [INFO] Skipping bill 1645265 - already processed (1782/2605) 2025-12-01 13:19:43,982 [INFO] Skipping bill 1650459 - already processed (1783/2605) 2025-12-01 13:19:43,982 [INFO] Skipping bill 1645172 - already processed (1784/2605) 2025-12-01 13:19:43,982 [INFO] Skipping bill 1630804 - already processed (1785/2605) 2025-12-01 13:19:43,982 [INFO] Skipping bill 1630761 - already processed (1786/2605) 2025-12-01 13:19:43,982 [INFO] Skipping bill 1652712 - already processed (1787/2605) 2025-12-01 13:19:43,982 [INFO] Skipping bill 1633968 - already processed (1788/2605) 2025-12-01 13:19:43,982 [INFO] Skipping bill 1644865 - already processed (1789/2605) 2025-12-01 13:19:43,982 [INFO] Skipping bill 1645061 - already processed (1790/2605) 2025-12-01 13:19:43,982 [INFO] Skipping bill 1809843 - already processed (1791/2605) 2025-12-01 13:19:43,982 [INFO] Skipping bill 1811981 - already processed (1792/2605) 
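Every failure above is the same 400 error: the serialized bill JSON alone exceeds the model's 128,000-token context window (by as much as ~500k tokens for bill 1897901). A pre-flight size check before `chain.invoke` would turn these API round-trips into a local decision. The sketch below is illustrative only: it uses a crude ~4-characters-per-token heuristic instead of a real tokenizer (a production version would count with `tiktoken`), and the `full_text` field name, constants, and function names are hypothetical, not taken from `generate_reports.py`.

```python
import json

MAX_CONTEXT_TOKENS = 128_000       # model limit reported in the 400 errors
PROMPT_OVERHEAD_TOKENS = 2_000     # assumed budget for instructions + response

def estimate_tokens(text: str) -> int:
    """Very rough estimate: ~4 characters per token for English text."""
    return len(text) // 4

def truncate_bill_json(bill: dict, text_field: str = "full_text") -> dict:
    """Trim the largest free-text field until the payload fits the window.

    Structured metadata is kept intact; only the (hypothetical) big text
    field is shortened, and only when the whole payload is over budget.
    """
    budget = MAX_CONTEXT_TOKENS - PROMPT_OVERHEAD_TOKENS
    payload = json.dumps(bill)
    if estimate_tokens(payload) <= budget:
        return bill
    overflow_tokens = estimate_tokens(payload) - budget
    text = bill.get(text_field, "")
    keep_chars = max(0, len(text) - overflow_tokens * 4)
    trimmed = dict(bill)
    trimmed[text_field] = text[:keep_chars]
    return trimmed
```

Calling `truncate_bill_json(bill)` before building the prompt would let oversized bills like 2046115 and 1897901 produce (partial-text) reports instead of erroring out; an alternative under the same assumptions is a map-reduce summarization pass over chunks of the bill text.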
2025-12-01 13:19:43,982 [INFO] Skipping bill 1812040 - already processed (1793/2605)
2025-12-01 13:19:43,983 [INFO] Skipping bill 1798563 - already processed (1794/2605)
2025-12-01 13:19:43,983 [INFO] Skipping bill 1807894 - already processed (1795/2605)
2025-12-01 13:19:43,983 [INFO] Skipping bill 1798580 - already processed (1796/2605)
2025-12-01 13:19:43,983 [INFO] Skipping bill 1800951 - already processed (1797/2605)
2025-12-01 13:19:43,983 [INFO] Skipping bill 1808295 - already processed (1798/2605)
2025-12-01 13:19:43,983 [INFO] Skipping bill 1799462 - already processed (1799/2605)
2025-12-01 13:19:43,983 [INFO] Skipping bill 1808024 - already processed (1800/2605)
2025-12-01 13:19:43,983 [INFO] Skipping bill 1807991 - already processed (1801/2605)
2025-12-01 13:19:43,983 [INFO] Skipping bill 1812376 - already processed (1802/2605)
2025-12-01 13:19:43,983 [INFO] Skipping bill 1822475 - already processed (1803/2605)
2025-12-01 13:19:43,983 [INFO] Skipping bill 1811644 - already processed (1804/2605)
2025-12-01 13:19:43,983 [INFO] Skipping bill 1794980 - already processed (1805/2605)
2025-12-01 13:19:43,983 [INFO] Skipping bill 1808264 - already processed (1806/2605)
2025-12-01 13:19:43,983 [INFO] Skipping bill 1801793 - already processed (1807/2605)
2025-12-01 13:19:43,984 [INFO] Skipping bill 1799221 - already processed (1808/2605)
2025-12-01 13:19:43,984 [INFO] Skipping bill 1822208 - already processed (1809/2605)
2025-12-01 13:19:43,984 [INFO] Skipping bill 1800673 - already processed (1810/2605)
2025-12-01 13:19:43,984 [INFO] Skipping bill 1809026 - already processed (1811/2605)
2025-12-01 13:19:43,984 [INFO] Skipping bill 1812182 - already processed (1812/2605)
2025-12-01 13:19:43,984 [INFO] Skipping bill 1886330 - already processed (1813/2605)
2025-12-01 13:19:43,984 [INFO] Skipping bill 1904645 - already processed (1814/2605)
2025-12-01 13:19:43,984 [INFO] Skipping bill 1911036 - already processed (1815/2605)
2025-12-01 13:19:43,984 [INFO] Skipping bill 1904674 - already processed (1816/2605)
2025-12-01 13:19:43,984 [INFO] Skipping bill 1901323 - already processed (1817/2605)
2025-12-01 13:19:43,984 [INFO] Skipping bill 1904347 - already processed (1818/2605)
2025-12-01 13:19:43,984 [INFO] Skipping bill 1925485 - already processed (1819/2605)
2025-12-01 13:19:43,984 [INFO] Skipping bill 1886222 - already processed (1820/2605)
2025-12-01 13:19:43,984 [INFO] Skipping bill 1905613 - already processed (1821/2605)
2025-12-01 13:19:43,984 [INFO] Skipping bill 1912330 - already processed (1822/2605)
2025-12-01 13:19:43,984 [INFO] Skipping bill 1914968 - already processed (1823/2605)
2025-12-01 13:19:43,984 [INFO] Skipping bill 1925408 - already processed (1824/2605)
2025-12-01 13:19:43,984 [INFO] Skipping bill 1886065 - already processed (1825/2605)
2025-12-01 13:19:43,984 [INFO] Skipping bill 1905445 - already processed (1826/2605)
2025-12-01 13:19:43,984 [INFO] Skipping bill 1905965 - already processed (1827/2605)
2025-12-01 13:19:43,984 [INFO] Skipping bill 1886188 - already processed (1828/2605)
2025-12-01 13:19:43,984 [INFO] Skipping bill 1905894 - already processed (1829/2605)
2025-12-01 13:19:43,985 [INFO] Skipping bill 1912145 - already processed (1830/2605)
2025-12-01 13:19:43,985 [INFO] Skipping bill 1927784 - already processed (1831/2605)
2025-12-01 13:19:43,985 [INFO] Skipping bill 1941702 - already processed (1832/2605)
2025-12-01 13:19:43,985 [INFO] Skipping bill 1929947 - already processed (1833/2605)
2025-12-01 13:19:43,985 [INFO] Skipping bill 1905942 - already processed (1834/2605)
2025-12-01 13:19:43,985 [INFO] Skipping bill 1912012 - already processed (1835/2605)
2025-12-01 13:19:43,985 [INFO] Skipping bill 1905698 - already processed (1836/2605)
2025-12-01 13:19:43,985 [INFO] Skipping bill 1886051 - already processed (1837/2605)
2025-12-01 13:19:43,985 [INFO] Skipping bill 1932239 - already processed (1838/2605)
2025-12-01 13:19:43,985 [INFO] Skipping bill 1932502 - already processed (1839/2605)
2025-12-01 13:19:43,985 [INFO] Skipping bill 1885937 - already processed (1840/2605)
2025-12-01 13:19:43,985 [INFO] Skipping bill 1900803 - already processed (1841/2605)
2025-12-01 13:19:43,985 [INFO] Skipping bill 1905712 - already processed (1842/2605)
2025-12-01 13:19:43,985 [INFO] Skipping bill 1905995 - already processed (1843/2605)
2025-12-01 13:19:43,985 [INFO] Skipping bill 1902641 - already processed (1844/2605)
2025-12-01 13:19:43,985 [INFO] Skipping bill 1905891 - already processed (1845/2605)
2025-12-01 13:19:43,985 [INFO] Skipping bill 1905860 - already processed (1846/2605)
2025-12-01 13:19:43,985 [INFO] Skipping bill 1908254 - already processed (1847/2605)
2025-12-01 13:19:43,985 [INFO] Skipping bill 1905920 - already processed (1848/2605)
2025-12-01 13:19:43,985 [INFO] Skipping bill 1886241 - already processed (1849/2605)
2025-12-01 13:19:43,985 [INFO] Skipping bill 1886007 - already processed (1850/2605)
2025-12-01 13:19:43,985 [INFO] Skipping bill 1896347 - already processed (1851/2605)
2025-12-01 13:19:43,985 [INFO] Skipping bill 1905982 - already processed (1852/2605)
2025-12-01 13:19:43,986 [INFO] Skipping bill 1898426 - already processed (1853/2605)
2025-12-01 13:19:43,986 [INFO] Skipping bill 1791614 - already processed (1854/2605)
2025-12-01 13:19:43,986 [INFO] Skipping bill 1792210 - already processed (1855/2605)
2025-12-01 13:19:43,986 [INFO] Skipping bill 1825997 - already processed (1856/2605)
2025-12-01 13:19:43,986 [INFO] Skipping bill 1792205 - already processed (1857/2605)
2025-12-01 13:19:43,986 [INFO] Skipping bill 1801141 - already processed (1858/2605)
2025-12-01 13:19:43,986 [INFO] Skipping bill 1796759 - already processed (1859/2605)
2025-12-01 13:19:43,986 [INFO] Skipping bill 1794124 - already processed (1860/2605)
2025-12-01 13:19:43,986 [INFO] Skipping bill 1680711 - already processed (1861/2605)
2025-12-01 13:19:43,986 [INFO] Skipping bill 1686234 - already processed (1862/2605)
2025-12-01 13:19:43,986 [INFO] Skipping bill 1813390 - already processed (1863/2605)
2025-12-01 13:19:43,986 [INFO] Skipping bill 1797745 - already processed (1864/2605)
2025-12-01 13:19:43,986 [INFO] Skipping bill 1810331 - already processed (1865/2605)
2025-12-01 13:19:43,986 [INFO] Skipping bill 1813358 - already processed (1866/2605)
2025-12-01 13:19:43,986 [INFO] Skipping bill 1657734 - already processed (1867/2605)
2025-12-01 13:19:43,986 [INFO] Processing 1868/2605: Bill ID 1644054
2025-12-01 13:19:45,221 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:19:45,222 [ERROR] Failed to generate report for bill 1644054: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 410788 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 410788 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:19:46,231 [INFO] Processing 1869/2605: Bill ID 1645282
2025-12-01 13:19:47,475 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:19:47,477 [ERROR] Failed to generate report for bill 1645282: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 410770 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:19:48,485 [INFO] Processing 1870/2605: Bill ID 1644063
2025-12-01 13:19:49,146 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:19:49,148 [ERROR] Failed to generate report for bill 1644063: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 224071 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:19:49,205 [INFO] Saved 2605 reports to data/bill_reports.json
2025-12-01 13:19:49,205 [INFO] Progress: 1870/2605 - Processed: 0, Skipped: 1789, Errors: 81
2025-12-01 13:19:50,210 [INFO] Processing 1871/2605: Bill ID 1645384
2025-12-01 13:19:50,854 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:19:50,856 [ERROR] Failed to generate report for bill 1645384: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 224065 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:19:51,866 [INFO] Processing 1872/2605: Bill ID 1645468
2025-12-01 13:19:54,028 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:19:54,029 [ERROR] Failed to generate report for bill 1645468: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 242533 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:19:55,039 [INFO] Processing 1873/2605: Bill ID 1796787
2025-12-01 13:19:56,282 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:19:56,284 [ERROR] Failed to generate report for bill 1796787: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 436514 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:19:57,293 [INFO] Processing 1874/2605: Bill ID 1643905
2025-12-01 13:19:58,022 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:19:58,024 [ERROR] Failed to generate report for bill 1643905: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 242552 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:19:59,036 [INFO] Processing 1875/2605: Bill ID 1796722
2025-12-01 13:20:00,377 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:20:00,379 [ERROR] Failed to generate report for bill 1796722: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 436532 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:20:01,386 [INFO] Skipping bill 1952329 - already processed (1876/2605)
2025-12-01 13:20:01,387 [INFO] Skipping bill 1964254 - already processed (1877/2605)
2025-12-01 13:20:01,387 [INFO] Skipping bill 1904212 - already processed (1878/2605)
2025-12-01 13:20:01,387 [INFO] Skipping bill 1903879 - already processed (1879/2605)
2025-12-01 13:20:01,387 [INFO] Skipping bill 1930459 - already processed (1880/2605)
2025-12-01 13:20:01,387 [INFO] Skipping bill 1938736 - already processed (1881/2605)
2025-12-01 13:20:01,387 [INFO] Skipping bill 1941657 - already processed (1882/2605)
2025-12-01 13:20:01,387 [INFO] Skipping bill 1932498 - already processed (1883/2605)
2025-12-01 13:20:01,388 [INFO] Skipping bill 1898840 - already processed (1884/2605)
2025-12-01 13:20:01,388 [INFO] Skipping bill 1903962 - already processed (1885/2605)
2025-12-01 13:20:01,388 [INFO] Skipping bill 1943677 - already processed (1886/2605)
2025-12-01 13:20:01,388 [INFO] Skipping bill 1911202 - already processed (1887/2605)
2025-12-01 13:20:01,388 [INFO] Skipping bill
1898343 - already processed (1888/2605)
2025-12-01 13:20:01,388 [INFO] Skipping bill 1930701 - already processed (1889/2605)
2025-12-01 13:20:01,388 [INFO] Skipping bill 1911699 - already processed (1890/2605)
2025-12-01 13:20:01,389 [INFO] Skipping bill 1985707 - already processed (1891/2605)
2025-12-01 13:20:01,389 [INFO] Skipping bill 2025140 - already processed (1892/2605)
2025-12-01 13:20:01,389 [INFO] Processing 1893/2605: Bill ID 1916784
2025-12-01 13:20:02,117 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:20:02,119 [ERROR] Failed to generate report for bill 1916784: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 217357 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:20:03,127 [INFO] Processing 1894/2605: Bill ID 1908012
2025-12-01 13:20:04,371 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:20:04,373 [ERROR] Failed to generate report for bill 1908012: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 458968 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:20:05,377 [INFO] Processing 1895/2605: Bill ID 1907961
2025-12-01 13:20:06,729 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:20:06,730 [ERROR] Failed to generate report for bill 1907961: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 458948 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:20:07,737 [INFO] Processing 1896/2605: Bill ID 1907826
2025-12-01 13:20:08,672 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:20:08,674 [ERROR] Failed to generate report for bill 1907826: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 284007 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:20:09,681 [INFO] Processing 1897/2605: Bill ID 2023840
2025-12-01 13:20:11,641 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:20:11,643 [ERROR] Failed to generate report for bill 2023840: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 709732 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:20:12,651 [INFO] Processing 1898/2605: Bill ID 1907778
2025-12-01 13:20:13,485 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:20:13,488 [ERROR] Failed to generate report for bill 1907778: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 284021 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:20:14,496 [INFO] Skipping bill 1691917 - already processed (1899/2605)
2025-12-01 13:20:14,497 [INFO] Skipping bill 1695960 - already processed (1900/2605)
2025-12-01 13:20:14,497 [INFO] Skipping bill 1850601 - already processed (1901/2605)
2025-12-01 13:20:14,497 [INFO] Skipping bill 1838098 - already processed (1902/2605)
2025-12-01 13:20:14,498 [INFO] Skipping bill 1842521 - already processed (1903/2605)
2025-12-01 13:20:14,498 [INFO] Skipping bill 1809518 - already processed (1904/2605)
2025-12-01 13:20:14,498 [INFO] Skipping bill 1839623 - already processed (1905/2605)
2025-12-01 13:20:14,498 [INFO] Skipping bill 1836854 - already processed (1906/2605)
2025-12-01 13:20:14,499 [INFO] Skipping bill 1828203 - already processed (1907/2605)
2025-12-01 13:20:14,499 [INFO] Skipping bill 1823415 - already processed (1908/2605)
2025-12-01 13:20:14,499 [INFO] Processing 1909/2605: Bill ID 1809702
2025-12-01 13:20:15,533 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:20:15,536 [ERROR]
Failed to generate report for bill 1809702: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 287475 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:20:16,546 [INFO] Processing 1910/2605: Bill ID 1812739
2025-12-01 13:20:17,683 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:20:17,685 [ERROR] Failed to generate report for bill 1812739: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 287482 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:20:17,740 [INFO] Saved 2605 reports to data/bill_reports.json
2025-12-01 13:20:17,740 [INFO] Progress: 1910/2605 - Processed: 0, Skipped: 1816, Errors: 94
2025-12-01 13:20:18,746 [INFO] Skipping bill 1993190 - already processed (1911/2605)
2025-12-01 13:20:18,746 [INFO] Skipping bill 2009723 - already processed (1912/2605)
2025-12-01 13:20:18,746 [INFO] Skipping bill 1970932 - already processed (1913/2605)
2025-12-01 13:20:18,747 [INFO] Skipping bill 1990795 - already processed (1914/2605)
2025-12-01 13:20:18,747 [INFO] Skipping bill 1966877 - already processed (1915/2605)
2025-12-01 13:20:18,747 [INFO] Skipping bill 1972008 - already processed (1916/2605)
2025-12-01 13:20:18,747 [INFO] Skipping bill 1994548 - already processed (1917/2605)
2025-12-01 13:20:18,747 [INFO] Skipping bill 1991745 - already processed (1918/2605)
2025-12-01 13:20:18,747 [INFO] Skipping bill 2010818 - already processed (1919/2605)
2025-12-01 13:20:18,747 [INFO] Skipping bill 2003316 - already processed (1920/2605)
2025-12-01 13:20:18,747 [INFO] Skipping bill 2021830 - already processed (1921/2605)
2025-12-01 13:20:18,747 [INFO] Skipping bill 2009667 - already processed (1922/2605)
2025-12-01 13:20:18,747 [INFO] Skipping bill 2011559 - already processed (1923/2605)
2025-12-01 13:20:18,747 [INFO] Skipping bill 1981081 - already processed (1924/2605)
2025-12-01 13:20:18,748 [INFO] Skipping bill 1990559 - already processed (1925/2605)
2025-12-01 13:20:18,748 [INFO] Skipping bill 1968858 - already processed (1926/2605)
2025-12-01 13:20:18,748 [INFO] Skipping bill 1841344 - already processed (1927/2605)
2025-12-01 13:20:18,748 [INFO] Skipping bill 1837111 - already processed (1928/2605)
2025-12-01 13:20:18,748 [INFO] Skipping bill 1783445 - already processed (1929/2605)
2025-12-01 13:20:18,748 [INFO] Skipping bill 1854251 - already processed (1930/2605)
2025-12-01 13:20:18,748 [INFO] Skipping bill 1867071 - already processed (1931/2605)
2025-12-01 13:20:18,748 [INFO] Skipping bill 1782940 - already processed (1932/2605)
2025-12-01 13:20:18,748 [INFO] Skipping bill 1780646 - already processed (1933/2605)
2025-12-01 13:20:18,748 [INFO] Skipping bill 1781005 - already processed (1934/2605)
2025-12-01 13:20:18,748 [INFO] Processing 1935/2605: Bill ID 1709614
2025-12-01 13:20:21,092 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:20:21,094 [ERROR] Failed to generate report for bill 1709614: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 980737 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:20:22,102 [INFO] Processing 1936/2605: Bill ID 1709655
2025-12-01 13:20:24,851 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:20:24,853 [ERROR] Failed to generate report for bill 1709655: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 982574 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:20:25,863 [INFO] Skipping bill 2034598 - already processed (1937/2605)
2025-12-01 13:20:25,864 [INFO] Skipping bill 2034722 - already processed (1938/2605)
2025-12-01 13:20:25,864 [INFO] Skipping bill 2038518 - already processed (1939/2605)
2025-12-01 13:20:25,865 [INFO] Skipping bill 2039752 - already processed (1940/2605)
2025-12-01 13:20:25,867 [INFO] Skipping bill 2044087 - already processed (1941/2605)
2025-12-01 13:20:25,867 [INFO] Skipping bill 2042614 - already processed (1942/2605)
2025-12-01 13:20:25,867 [INFO] Skipping bill 2045155 - already processed (1943/2605)
2025-12-01 13:20:25,868 [INFO] Skipping bill 2045662 - already processed (1944/2605)
2025-12-01 13:20:25,868 [INFO] Processing 1945/2605: Bill ID 1974122
2025-12-01 13:20:28,441 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:20:28,442 [ERROR] Failed to generate report for bill 1974122: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens.
However, your messages resulted in 1009931 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:20:29,451 [INFO] Processing 1946/2605: Bill ID 1974279
2025-12-01 13:20:32,020 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:20:32,022 [ERROR] Failed to generate report for bill 1974279: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1009921 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
           ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1009921 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:20:33,033 [INFO] Skipping bill 2047792 - already processed (1947/2605)
2025-12-01 13:20:33,034 [INFO] Skipping bill 1842729 - already processed (1948/2605)
2025-12-01 13:20:33,034 [INFO] Skipping bill 1842887 - already processed (1949/2605)
2025-12-01 13:20:33,034 [INFO] Skipping bill 1939111 - already processed (1950/2605)
2025-12-01 13:20:33,034 [INFO] Skipping bill 1895001 - already processed (1951/2605)
2025-12-01 13:20:33,034 [INFO] Skipping bill 1945993 - already processed (1952/2605)
2025-12-01 13:20:33,034 [INFO] Skipping bill 1945813 - already processed (1953/2605)
2025-12-01 13:20:33,034 [INFO] Skipping bill 1774433 - already processed (1954/2605)
2025-12-01 13:20:33,035 [INFO] Skipping bill 1884990 - already processed (1955/2605)
2025-12-01 13:20:33,036 [INFO] Skipping bill 1882572 - already processed (1956/2605)
2025-12-01 13:20:33,038 [INFO] Skipping bill 1784131 - already processed (1957/2605)
2025-12-01 13:20:33,038 [INFO] Skipping bill 1873726 - already processed (1958/2605)
2025-12-01 13:20:33,039 [INFO] Skipping bill 1882205 - already processed (1959/2605)
2025-12-01 13:20:33,039 [INFO] Skipping bill 1860116 - already processed (1960/2605)
2025-12-01 13:20:33,039 [INFO] Skipping bill 1835790 - already processed (1961/2605)
2025-12-01 13:20:33,039 [INFO] Skipping bill 1835624 - already processed (1962/2605)
2025-12-01 13:20:33,039 [INFO] Skipping bill 1876647 - already processed (1963/2605)
2025-12-01 13:20:33,039 [INFO] Skipping bill 1887447 - already processed (1964/2605)
2025-12-01 13:20:33,039 [INFO] Skipping bill 1898165 - already processed (1965/2605)
2025-12-01 13:20:33,039 [INFO] Skipping bill 1780760 - already processed (1966/2605)
2025-12-01 13:20:33,039 [INFO] Skipping bill 1887744 - already processed (1967/2605)
2025-12-01 13:20:33,039 [INFO] Skipping bill 1782128 - already processed (1968/2605)
2025-12-01 13:20:33,039 [INFO] Skipping bill 1887739 - already processed (1969/2605)
2025-12-01 13:20:33,039 [INFO] Skipping bill 1885322 - already processed (1970/2605)
2025-12-01 13:20:33,039 [INFO] Skipping bill 1887646 - already processed (1971/2605)
2025-12-01 13:20:33,039 [INFO] Skipping bill 1897119 - already processed (1972/2605)
2025-12-01 13:20:33,039 [INFO] Skipping bill 1782539 - already processed (1973/2605)
2025-12-01 13:20:33,039 [INFO] Skipping bill 1880117 - already processed (1974/2605)
2025-12-01 13:20:33,040 [INFO] Skipping bill 1810734 - already processed (1975/2605)
2025-12-01 13:20:33,040 [INFO] Skipping bill 1887671 - already processed (1976/2605)
2025-12-01 13:20:33,040 [INFO] Skipping bill 1883053 - already processed (1977/2605)
2025-12-01 13:20:33,040 [INFO] Skipping bill 1861062 - already processed (1978/2605)
2025-12-01 13:20:33,040 [INFO] Skipping bill 1775461 - already processed (1979/2605)
2025-12-01 13:20:33,040 [INFO] Skipping bill 1792331 - already processed (1980/2605)
2025-12-01 13:20:33,040 [INFO] Skipping bill 1765384 - already processed (1981/2605)
2025-12-01 13:20:33,040 [INFO] Skipping bill 1863023 - already processed (1982/2605)
2025-12-01 13:20:33,040 [INFO] Skipping bill 1883034 - already processed (1983/2605)
2025-12-01 13:20:33,040 [INFO] Skipping bill 1886748 - already processed (1984/2605)
2025-12-01 13:20:33,040 [INFO] Skipping bill 1886756 - already processed (1985/2605)
2025-12-01 13:20:33,040 [INFO] Skipping bill 1885278 - already processed (1986/2605)
2025-12-01 13:20:33,040 [INFO] Skipping bill 1784087 - already processed (1987/2605)
2025-12-01 13:20:33,040 [INFO] Skipping bill 1886439 - already processed (1988/2605)
2025-12-01 13:20:33,040 [INFO] Skipping bill 1877586 - already processed (1989/2605)
2025-12-01 13:20:33,040 [INFO] Skipping bill 1888775 - already processed (1990/2605)
2025-12-01 13:20:33,040 [INFO] Skipping bill 1773844 - already processed (1991/2605)
2025-12-01 13:20:33,040 [INFO] Skipping bill 1857956 - already processed (1992/2605)
2025-12-01 13:20:33,040 [INFO] Skipping bill 1775721 - already processed (1993/2605)
2025-12-01 13:20:33,041 [INFO] Skipping bill 1861016 - already processed (1994/2605)
2025-12-01 13:20:33,041 [INFO] Skipping bill 1884504 - already processed (1995/2605)
2025-12-01 13:20:33,041 [INFO] Skipping bill 1892975 - already processed (1996/2605)
2025-12-01 13:20:33,041 [INFO] Skipping bill 1886714 - already processed (1997/2605)
2025-12-01 13:20:33,041 [INFO] Skipping bill 1877214 - already processed (1998/2605)
2025-12-01 13:20:33,041 [INFO] Skipping bill 1779520 - already processed (1999/2605)
2025-12-01 13:20:33,041 [INFO] Skipping bill 1882161 - already processed (2000/2605)
2025-12-01 13:20:33,041 [INFO] Skipping bill 1793734 - already processed (2001/2605)
2025-12-01 13:20:33,041 [INFO] Skipping bill 1885501 - already processed (2002/2605)
2025-12-01 13:20:33,041 [INFO] Skipping bill 1887169 - already processed (2003/2605)
2025-12-01 13:20:33,041 [INFO] Skipping bill 1877680 - already processed (2004/2605)
2025-12-01 13:20:33,041 [INFO] Skipping bill 1887282 - already processed (2005/2605)
2025-12-01 13:20:33,041 [INFO] Skipping bill 1774766 - already processed (2006/2605)
2025-12-01 13:20:33,041 [INFO] Skipping bill 1774961 - already processed (2007/2605)
2025-12-01 13:20:33,041 [INFO] Skipping bill 1866654 - already processed (2008/2605)
2025-12-01 13:20:33,041 [INFO] Skipping bill 1779127 - already processed (2009/2605)
2025-12-01 13:20:33,041 [INFO] Skipping bill 1882224 - already processed (2010/2605)
2025-12-01 13:20:33,041 [INFO] Skipping bill 1892198 - already processed (2011/2605)
2025-12-01 13:20:33,041 [INFO] Skipping bill 1759862 - already processed (2012/2605)
2025-12-01 13:20:33,042 [INFO] Skipping bill 1888377 - already processed (2013/2605)
2025-12-01 13:20:33,042 [INFO] Skipping bill 1894701 - already processed (2014/2605)
2025-12-01 13:20:33,042 [INFO] Skipping bill 1864751 - already processed (2015/2605)
2025-12-01 13:20:33,042 [INFO] Skipping bill 1772453 - already processed (2016/2605)
2025-12-01 13:20:33,042 [INFO] Skipping bill 1885309 - already processed (2017/2605)
2025-12-01 13:20:33,042 [INFO] Skipping bill 1886447 - already processed (2018/2605)
2025-12-01 13:20:33,042 [INFO] Skipping bill 1848736 - already processed (2019/2605)
2025-12-01 13:20:33,042 [INFO] Skipping bill 1884301 - already processed (2020/2605)
2025-12-01 13:20:33,042 [INFO] Skipping bill 1881976 - already processed (2021/2605)
2025-12-01 13:20:33,042 [INFO] Skipping bill 1885426 - already processed (2022/2605)
2025-12-01 13:20:33,042 [INFO] Skipping bill 1775334 - already processed (2023/2605)
2025-12-01 13:20:33,042 [INFO] Skipping bill 1884442 - already processed (2024/2605)
2025-12-01 13:20:33,042 [INFO] Skipping bill 1881980 - already processed (2025/2605)
2025-12-01 13:20:33,042 [INFO] Skipping bill 1893238 - already processed (2026/2605)
2025-12-01 13:20:33,042 [INFO] Skipping bill 1865594 - already processed (2027/2605)
2025-12-01 13:20:33,042 [INFO] Skipping bill 1872732 - already processed (2028/2605)
2025-12-01 13:20:33,042 [INFO] Skipping bill 1885341 - already processed (2029/2605)
2025-12-01 13:20:33,042 [INFO] Skipping bill 1764018 - already processed (2030/2605)
2025-12-01 13:20:33,042 [INFO] Skipping bill 1887315 - already processed (2031/2605)
2025-12-01 13:20:33,043 [INFO] Skipping bill 1751404 - already processed (2032/2605)
2025-12-01 13:20:33,043 [INFO] Skipping bill 1888249 - already processed (2033/2605)
2025-12-01 13:20:33,043 [INFO] Skipping bill 1885249 - already processed (2034/2605)
2025-12-01 13:20:33,043 [INFO] Skipping bill 1881398 - already processed (2035/2605)
2025-12-01 13:20:33,043 [INFO] Skipping bill 1866637 - already processed (2036/2605)
2025-12-01 13:20:33,043 [INFO] Skipping bill 1770194 - already processed (2037/2605)
2025-12-01 13:20:33,043 [INFO] Skipping bill 1775580 - already processed (2038/2605)
2025-12-01 13:20:33,043 [INFO] Skipping bill 1784705 - already processed (2039/2605)
2025-12-01 13:20:33,043 [INFO] Skipping bill 1831382 - already processed (2040/2605)
2025-12-01 13:20:33,043 [INFO] Skipping bill 1885274 - already processed (2041/2605)
2025-12-01 13:20:33,043 [INFO] Skipping bill 1892393 - already processed (2042/2605)
2025-12-01 13:20:33,043 [INFO] Skipping bill 1877691 - already processed (2043/2605)
2025-12-01 13:20:33,043 [INFO] Skipping bill 1776083 - already processed (2044/2605)
2025-12-01 13:20:33,043 [INFO] Skipping bill 1760978 - already processed (2045/2605)
2025-12-01 13:20:33,043 [INFO] Skipping bill 1764682 - already processed (2046/2605)
2025-12-01 13:20:33,043 [INFO] Skipping bill 1880344 - already processed (2047/2605)
2025-12-01 13:20:33,043 [INFO] Skipping bill 1886698 - already processed (2048/2605)
2025-12-01 13:20:33,043 [INFO] Skipping bill 1876488 - already processed (2049/2605)
2025-12-01 13:20:33,044 [INFO] Skipping bill 1765330 - already processed (2050/2605)
2025-12-01 13:20:33,044 [INFO] Skipping bill 1887359 - already processed (2051/2605)
2025-12-01 13:20:33,044 [INFO] Skipping bill 1771744 - already processed (2052/2605)
2025-12-01 13:20:33,044 [INFO] Skipping bill 1831359 - already processed (2053/2605)
2025-12-01 13:20:33,044 [INFO] Skipping bill 1774102 - already processed (2054/2605)
2025-12-01 13:20:33,044 [INFO] Skipping bill 1774479 - already processed (2055/2605)
2025-12-01 13:20:33,044 [INFO] Skipping bill 1794846 - already processed (2056/2605)
2025-12-01 13:20:33,044 [INFO] Skipping bill 1894867 - already processed (2057/2605)
2025-12-01 13:20:33,044 [INFO] Skipping bill 1774859 - already processed (2058/2605)
2025-12-01 13:20:33,044 [INFO] Skipping bill 1884522 - already processed (2059/2605)
2025-12-01 13:20:33,044 [INFO] Skipping bill 1866979 - already processed (2060/2605)
2025-12-01 13:20:33,044 [INFO] Skipping bill 1886705 - already processed (2061/2605)
2025-12-01 13:20:33,045 [INFO] Skipping bill 1898170 - already processed (2062/2605)
2025-12-01 13:20:33,045 [INFO] Skipping bill 1885330 - already processed (2063/2605)
2025-12-01 13:20:33,045 [INFO] Skipping bill 1792286 - already processed (2064/2605)
2025-12-01 13:20:33,045 [INFO] Skipping bill 1892877 - already processed (2065/2605)
2025-12-01 13:20:33,045 [INFO] Skipping bill 1884177 - already processed (2066/2605)
2025-12-01 13:20:33,045 [INFO] Skipping bill 1774713 - already processed (2067/2605)
2025-12-01 13:20:33,045 [INFO] Skipping bill 1774626 - already processed (2068/2605)
2025-12-01 13:20:33,045 [INFO] Skipping bill 1884513 - already processed (2069/2605)
2025-12-01 13:20:33,045 [INFO] Skipping bill 1887362 - already processed (2070/2605)
2025-12-01 13:20:33,045 [INFO] Skipping bill 1893236 - already processed (2071/2605)
2025-12-01 13:20:33,045 [INFO] Skipping bill 1883668 - already processed (2072/2605)
2025-12-01 13:20:33,045 [INFO] Skipping bill 1831371 - already processed (2073/2605)
2025-12-01 13:20:33,045 [INFO] Skipping bill 1885671 - already processed (2074/2605)
2025-12-01 13:20:33,045 [INFO] Skipping bill 1885535 - already processed (2075/2605)
2025-12-01 13:20:33,045 [INFO] Skipping bill 1888766 - already processed (2076/2605)
2025-12-01 13:20:33,045 [INFO] Skipping bill 1892506 - already processed (2077/2605)
2025-12-01 13:20:33,045 [INFO] Skipping bill 1892532 - already processed (2078/2605)
2025-12-01 13:20:33,045 [INFO] Skipping bill 1878820 - already processed (2079/2605)
2025-12-01 13:20:33,045 [INFO] Skipping bill 1884926 - already processed (2080/2605)
2025-12-01 13:20:33,045 [INFO] Skipping bill 1895881 - already processed (2081/2605)
2025-12-01 13:20:33,045 [INFO] Skipping bill 1778284 - already processed (2082/2605)
2025-12-01 13:20:33,045 [INFO] Skipping bill 1770920 - already processed (2083/2605)
2025-12-01 13:20:33,045 [INFO] Skipping bill 1650801 - already processed (2084/2605)
2025-12-01 13:20:33,046 [INFO] Skipping bill 1883378 - already processed (2085/2605)
2025-12-01 13:20:33,046 [INFO] Skipping bill 1683970 - already processed (2086/2605)
2025-12-01 13:20:33,046 [INFO] Skipping bill 1772792 - already processed (2087/2605)
2025-12-01 13:20:33,046 [INFO] Skipping bill 1759623 - already processed (2088/2605)
2025-12-01 13:20:33,046 [INFO] Skipping bill 1760525 - already processed (2089/2605)
2025-12-01 13:20:33,046 [INFO] Skipping bill 1862531 - already processed (2090/2605)
2025-12-01 13:20:33,046 [INFO] Skipping bill 1767461 - already processed (2091/2605)
2025-12-01 13:20:33,046 [INFO] Skipping bill 1776485 - already processed (2092/2605)
2025-12-01 13:20:33,046 [INFO] Skipping bill 1871231 - already processed (2093/2605)
2025-12-01 13:20:33,046 [INFO] Skipping bill 1887711 - already processed (2094/2605)
2025-12-01 13:20:33,046 [INFO] Skipping bill 1893243 - already processed (2095/2605)
2025-12-01 13:20:33,046 [INFO] Skipping bill 1701254 - already processed (2096/2605)
2025-12-01 13:20:33,046 [INFO] Skipping bill 1897456 - already processed (2097/2605)
2025-12-01 13:20:33,046 [INFO] Skipping bill 1775615 - already processed (2098/2605)
2025-12-01 13:20:33,046 [INFO] Skipping bill 1794843 - already processed (2099/2605)
2025-12-01 13:20:33,046 [INFO] Skipping bill 1810720 - already processed (2100/2605)
2025-12-01 13:20:33,046 [INFO] Skipping bill 1894308 - already processed (2101/2605)
2025-12-01 13:20:33,046 [INFO] Skipping bill 1894683 - already processed (2102/2605)
2025-12-01 13:20:33,046 [INFO] Skipping bill 1842456 - already processed (2103/2605)
2025-12-01 13:20:33,046 [INFO] Skipping bill 1885281 - already processed (2104/2605)
2025-12-01 13:20:33,046 [INFO] Skipping bill 1759897 - already processed (2105/2605)
2025-12-01 13:20:33,046 [INFO] Skipping bill 1860079 - already processed (2106/2605)
2025-12-01 13:20:33,046 [INFO] Skipping bill 1746098 - already processed (2107/2605)
2025-12-01 13:20:33,047 [INFO] Skipping bill 1897489 - already processed (2108/2605)
2025-12-01 13:20:33,047 [INFO] Skipping bill 1887287 - already processed (2109/2605)
2025-12-01 13:20:33,047 [INFO] Skipping bill 1885252 - already processed (2110/2605)
2025-12-01 13:20:33,047 [INFO] Skipping bill 1892936 - already processed (2111/2605)
2025-12-01 13:20:33,047 [INFO] Skipping bill 1732925 - already processed (2112/2605)
2025-12-01 13:20:33,047 [INFO] Skipping bill 1746069 - already processed (2113/2605)
2025-12-01 13:20:33,047 [INFO] Skipping bill 1774408 - already processed (2114/2605)
2025-12-01 13:20:33,047 [INFO] Skipping bill 1772182 - already processed (2115/2605)
2025-12-01 13:20:33,047 [INFO] Skipping bill 1884422 - already processed (2116/2605)
2025-12-01 13:20:33,047 [INFO] Skipping bill 1687118 - already processed (2117/2605)
2025-12-01 13:20:33,047 [INFO] Skipping bill 1784726 - already processed (2118/2605)
2025-12-01 13:20:33,047 [INFO] Skipping bill 1762912 - already processed (2119/2605)
2025-12-01 13:20:33,047 [INFO] Skipping bill 1898405 - already processed (2120/2605)
2025-12-01 13:20:33,047 [INFO] Processing 2121/2605: Bill ID 1884189
2025-12-01 13:20:34,478 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:20:34,480 [ERROR] Failed to generate report for bill 1884189: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 553725 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
        ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
           ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 553725 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:20:35,486 [INFO] Skipping bill 1899847 - already processed (2122/2605)
2025-12-01 13:20:35,487 [INFO] Skipping bill 1732984 - already processed (2123/2605)
2025-12-01 13:20:35,487 [INFO] Skipping bill 1746089 - already processed (2124/2605)
2025-12-01 13:20:35,487 [INFO] Skipping bill 1766726 - already processed (2125/2605)
2025-12-01 13:20:35,487 [INFO] Skipping bill 1769804 - already processed (2126/2605)
2025-12-01 13:20:35,487 [INFO] Skipping bill 1897097 - already processed (2127/2605)
2025-12-01 13:20:35,487 [INFO] Processing 2128/2605: Bill ID 1774177
2025-12-01 13:20:37,038 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:20:37,042 [ERROR] Failed to generate report for bill 1774177: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 563143 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
        ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
           ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 563143 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:20:38,053 [INFO] Skipping bill 1757049 - already processed (2129/2605)
2025-12-01 13:20:38,054 [INFO] Skipping bill 1784298 - already processed (2130/2605)
2025-12-01 13:20:38,054 [INFO] Skipping bill 1785108 - already processed (2131/2605)
2025-12-01 13:20:38,054 [INFO] Skipping bill 1772128 - already processed (2132/2605)
2025-12-01 13:20:38,054 [INFO] Skipping bill 1879910 - already processed (2133/2605)
2025-12-01 13:20:38,055 [INFO] Skipping bill 1777717 - already processed (2134/2605)
2025-12-01 13:20:38,055 [INFO] Skipping bill 1843401 - already processed (2135/2605)
2025-12-01 13:20:38,055 [INFO] Skipping bill 1774203 - already processed (2136/2605)
2025-12-01 13:20:38,055 [INFO] Skipping bill 1892268 - already processed (2137/2605)
2025-12-01 13:20:38,055 [INFO] Skipping bill 1774216 - already processed (2138/2605)
2025-12-01 13:20:38,055 [INFO] Skipping bill 1868870 - already processed (2139/2605)
2025-12-01 13:20:38,056 [INFO] Skipping bill 1770792 - already processed (2140/2605)
2025-12-01 13:20:38,056 [INFO] Skipping bill 1894823 - already processed (2141/2605)
2025-12-01 13:20:38,056 [INFO] Skipping bill 1885629 - already processed (2142/2605)
2025-12-01 13:20:38,056 [INFO] Skipping bill 1866980 - already processed (2143/2605)
2025-12-01 13:20:38,056 [INFO] Skipping bill 1826236 - already processed (2144/2605)
2025-12-01 13:20:38,056 [INFO] Skipping bill 1860115 - already processed (2145/2605)
2025-12-01 13:20:38,056 [INFO] Skipping bill 1767424 - already processed (2146/2605)
2025-12-01 13:20:38,057 [INFO] Skipping bill 1877069 - already processed (2147/2605)
2025-12-01 13:20:38,057 [INFO] Skipping bill 1865576 - already processed (2148/2605)
2025-12-01 13:20:38,057 [INFO] Skipping bill 1771076 - already processed (2149/2605)
2025-12-01 13:20:38,057 [INFO] Skipping bill 1755580 - already processed (2150/2605)
2025-12-01 13:20:38,057 [INFO] Skipping bill 1885029 - already processed (2151/2605)
2025-12-01 13:20:38,057 [INFO] Skipping bill 1770955 - already processed (2152/2605)
2025-12-01 13:20:38,057 [INFO] Skipping bill 1772617 - already processed (2153/2605)
2025-12-01 13:20:38,057 [INFO] Skipping bill 1760193 - already processed (2154/2605)
2025-12-01 13:20:38,057 [INFO] Skipping bill 1871212 - already processed (2155/2605)
2025-12-01 13:20:38,057 [INFO] Skipping bill 1887934 - already processed (2156/2605)
2025-12-01 13:20:38,057 [INFO] Skipping bill 1879177 - already processed (2157/2605)
2025-12-01 13:20:38,058 [INFO] Skipping bill 1897536 - already processed (2158/2605)
2025-12-01 13:20:38,058 [INFO] Skipping bill 1854133 - already processed (2159/2605)
2025-12-01 13:20:38,058 [INFO] Skipping bill 1761508 - already processed (2160/2605)
2025-12-01 13:20:38,058 [INFO] Skipping bill 1777284 - already processed (2161/2605)
2025-12-01 13:20:38,058 [INFO] Skipping bill 1774079 - already processed (2162/2605)
2025-12-01 13:20:38,058 [INFO] Skipping bill 1896271 - already processed (2163/2605)
2025-12-01 13:20:38,058 [INFO] Skipping bill 1897312 - already processed (2164/2605)
2025-12-01 13:20:38,058 [INFO] Skipping bill 1774750 - already processed (2165/2605)
2025-12-01 13:20:38,058 [INFO] Skipping bill 1873661 - already processed (2166/2605)
2025-12-01 13:20:38,058 [INFO] Skipping bill 1782516 - already processed (2167/2605)
2025-12-01 13:20:38,058 [INFO] Skipping bill 1782446 - already processed (2168/2605)
2025-12-01 13:20:38,059 [INFO] Skipping bill 1866649 - already processed (2169/2605)
2025-12-01 13:20:38,059 [INFO] Skipping bill 1866664 - already processed (2170/2605)
2025-12-01 13:20:38,059 [INFO] Skipping bill 1707867 - already processed (2171/2605)
2025-12-01 13:20:38,059 [INFO] Skipping bill 1872167 - already processed (2172/2605)
2025-12-01 13:20:38,059 [INFO] Skipping bill 1759875 - already processed (2173/2605)
2025-12-01 13:20:38,059 [INFO] Skipping bill 1789214 - already processed (2174/2605)
2025-12-01 13:20:38,059 [INFO] Skipping bill 1872153 - already processed (2175/2605)
2025-12-01 13:20:38,059 [INFO] Skipping bill 1760229 - already processed (2176/2605)
2025-12-01 13:20:38,059 [INFO] Skipping bill 1774942 - already processed (2177/2605)
2025-12-01 13:20:38,059 [INFO] Skipping bill 1694059 - already processed (2178/2605)
2025-12-01 13:20:38,059 [INFO] Skipping bill 1829219 - already processed (2179/2605)
2025-12-01 13:20:38,059 [INFO] Skipping bill 1679271 - already processed (2180/2605)
2025-12-01 13:20:38,059 [INFO] Skipping bill 1883365 - already processed (2181/2605)
2025-12-01 13:20:38,059 [INFO] Skipping bill 1780777 - already processed (2182/2605)
2025-12-01 13:20:38,059 [INFO] Skipping bill 1707919 - already processed (2183/2605)
2025-12-01 13:20:38,059 [INFO] Skipping bill 1860113 - already processed (2184/2605)
2025-12-01 13:20:38,060 [INFO] Skipping bill 1781933 - already processed (2185/2605)
2025-12-01 13:20:38,060 [INFO] Skipping bill 1751388 - already processed (2186/2605)
2025-12-01 13:20:38,060 [INFO] Skipping bill 1754500 - already processed (2187/2605)
2025-12-01 13:20:38,060 [INFO] Skipping bill 1772123 - already processed (2188/2605)
2025-12-01 13:20:38,060 [INFO] Skipping bill 1892924 - already processed (2189/2605)
2025-12-01 13:20:38,060 [INFO] Skipping bill 1778422 - already processed (2190/2605)
2025-12-01 13:20:38,060 [INFO] Skipping bill 1897294 - already processed (2191/2605)
2025-12-01 13:20:38,060 [INFO] Skipping bill 1769557 - already processed (2192/2605)
2025-12-01 13:20:38,060 [INFO] Skipping bill 1747003 - already processed (2193/2605)
2025-12-01 13:20:38,060 [INFO] Skipping bill 1775420 - already processed (2194/2605)
2025-12-01 13:20:38,060 [INFO] Skipping bill 1885460 - already processed (2195/2605)
2025-12-01 13:20:38,060 [INFO] Skipping bill 1778494 - already processed (2196/2605)
2025-12-01 13:20:38,060 [INFO] Skipping bill 1778507 - already processed (2197/2605)
2025-12-01 13:20:38,060 [INFO] Skipping bill 1746072 - already processed (2198/2605)
2025-12-01 13:20:38,060 [INFO] Skipping bill 1747808 - already processed (2199/2605)
2025-12-01 13:20:38,060 [INFO] Skipping bill 1764055 - already processed (2200/2605)
2025-12-01 13:20:38,060 [INFO] Skipping bill 1765960 - already processed (2201/2605)
2025-12-01 13:20:38,060 [INFO] Skipping bill 1766587 - already processed (2202/2605)
2025-12-01 13:20:38,060 [INFO] Skipping bill 1766736 - already processed (2203/2605)
2025-12-01 13:20:38,060 [INFO] Skipping bill 1771518 - already processed (2204/2605)
2025-12-01 13:20:38,060 [INFO] Skipping bill 1772577 - already processed (2205/2605)
2025-12-01 13:20:38,060 [INFO] Skipping bill 1772933 - already processed (2206/2605)
2025-12-01 13:20:38,060 [INFO] Skipping bill 1773303 - already processed (2207/2605)
2025-12-01 13:20:38,060 [INFO] Skipping bill 1775354 - already processed (2208/2605)
2025-12-01 13:20:38,060 [INFO] Skipping bill 1777649 - already processed (2209/2605)
2025-12-01 13:20:38,060 [INFO] Skipping bill 1783786 - already processed (2210/2605)
2025-12-01 13:20:38,060 [INFO] Skipping bill 1783927 - already processed (2211/2605)
2025-12-01 13:20:38,061 [INFO] Skipping bill 1791735 - already processed (2212/2605)
2025-12-01 13:20:38,061 [INFO] Skipping bill 1791984 - already processed (2213/2605)
2025-12-01 13:20:38,061 [INFO] Skipping bill 1860914 - already processed (2214/2605)
2025-12-01 13:20:38,061 [INFO] Skipping bill 1874964 - already processed (2215/2605)
2025-12-01 13:20:38,061 [INFO] Skipping bill 1876702 - already processed (2216/2605)
2025-12-01 13:20:38,061 [INFO] Skipping bill 1878298 - already processed (2217/2605)
2025-12-01 13:20:38,061 [INFO] Skipping bill 1878970 - already processed (2218/2605)
2025-12-01 13:20:38,061 [INFO] Skipping bill 1878883 - already processed (2219/2605)
2025-12-01 13:20:38,061 [INFO] Skipping bill 1880262 - already processed (2220/2605)
2025-12-01 13:20:38,061 [INFO] Skipping bill 1880301 - already processed (2221/2605)
2025-12-01 13:20:38,061 [INFO] Skipping bill 1880312 - already processed (2222/2605)
2025-12-01 13:20:38,061 [INFO] Skipping bill 1882770 - already processed (2223/2605)
2025-12-01 13:20:38,061 [INFO] Skipping bill 1889897 - already processed (2224/2605)
2025-12-01 13:20:38,061 [INFO] Skipping bill 1892711 - already processed (2225/2605)
2025-12-01 13:20:38,061 [INFO] Skipping bill 1897258 - already processed (2226/2605)
2025-12-01 13:20:38,061 [INFO] Skipping bill 1881528 - already processed (2227/2605)
2025-12-01 13:20:38,061 [INFO] Skipping bill 1782893 - already processed (2228/2605)
2025-12-01 13:20:38,061 [INFO] Skipping bill 1834554 - already processed (2229/2605)
2025-12-01 13:20:38,061 [INFO] Skipping bill 1774082 - already processed (2230/2605)
2025-12-01 13:20:38,061 [INFO] Skipping bill 1783631 - already processed (2231/2605)
2025-12-01 13:20:38,061 [INFO] Skipping bill 1879351 - already processed (2232/2605)
2025-12-01 13:20:38,061 [INFO] Skipping bill 1707921 - already processed (2233/2605)
2025-12-01 13:20:38,061 [INFO] Skipping bill 1872751 - already processed (2234/2605)
2025-12-01 13:20:38,061 [INFO] Skipping bill 1848738 - already processed (2235/2605)
2025-12-01 13:20:38,061 [INFO] Skipping bill 1882577 - already processed (2236/2605)
2025-12-01 13:20:38,061 [INFO] Skipping bill 1880072 - already processed (2237/2605)
2025-12-01 13:20:38,061 [INFO] Skipping bill 1880345 - already processed (2238/2605)
2025-12-01 13:20:38,062 [INFO] Skipping bill 1892804 - already processed (2239/2605)
2025-12-01 13:20:38,062 [INFO] Skipping bill 1860940 - already processed (2240/2605)
2025-12-01 13:20:38,062 [INFO] Skipping bill 1766003 - already processed (2241/2605)
2025-12-01 13:20:38,062 [INFO] Skipping bill 1775441 - already processed (2242/2605)
2025-12-01 13:20:38,062 [INFO] Skipping bill 1758619 - already processed (2243/2605)
2025-12-01 13:20:38,062 [INFO] Skipping bill 1894461 - already processed (2244/2605)
2025-12-01 13:20:38,062 [INFO] Skipping bill 1778171 - already processed (2245/2605)
2025-12-01 13:20:38,062 [INFO] Skipping bill 1778004 - already processed (2246/2605)
2025-12-01 13:20:38,062 [INFO] Skipping bill 1832839 - already processed (2247/2605)
2025-12-01 13:20:38,062 [INFO] Skipping bill 1774844 - already processed (2248/2605)
2025-12-01 13:20:38,062 [INFO] Skipping bill 1751449 - already processed (2249/2605)
2025-12-01 13:20:38,062 [INFO] Skipping bill 1751346 - already processed (2250/2605)
2025-12-01 13:20:38,062 [INFO] Skipping bill 1759080 - already processed (2251/2605)
2025-12-01 13:20:38,062 [INFO] Skipping bill 1882756 - already processed (2252/2605)
2025-12-01 13:20:38,062 [INFO] Skipping bill 1882766 - already processed (2253/2605)
2025-12-01 13:20:38,062 [INFO] Skipping bill 1887196 - already processed (2254/2605)
2025-12-01 13:20:38,062 [INFO] Skipping bill 1889949 - already processed (2255/2605)
2025-12-01 13:20:38,062 [INFO] Skipping bill 1887718 - already processed (2256/2605)
2025-12-01 13:20:38,062 [INFO] Skipping bill 1896232 - already processed (2257/2605)
2025-12-01 13:20:38,062 [INFO] Skipping bill 1783562 - already processed (2258/2605)
2025-12-01 13:20:38,062 [INFO] Skipping bill 1681772 - already processed (2259/2605)
2025-12-01 13:20:38,062 [INFO] Skipping bill 1871711 - already processed (2260/2605)
2025-12-01 13:20:38,062 [INFO] Skipping bill 1874986 - already processed (2261/2605)
2025-12-01 13:20:38,062 [INFO] Skipping bill 1772204 - already processed (2262/2605)
2025-12-01 13:20:38,062 [INFO] Skipping bill 1884912 - already processed (2263/2605)
2025-12-01 13:20:38,062 [INFO] Skipping bill 1888175 - already processed (2264/2605)
2025-12-01 13:20:38,062 [INFO] Skipping bill 1832721 - already processed (2265/2605)
2025-12-01 13:20:38,062 [INFO] Skipping bill 1887649 - already processed (2266/2605)
2025-12-01 13:20:38,063 [INFO] Skipping bill 1887704 - already processed (2267/2605)
2025-12-01 13:20:38,063 [INFO] Skipping bill 1881672 - already processed (2268/2605)
2025-12-01 13:20:38,063 [INFO] Skipping bill 1777454 - already processed (2269/2605)
2025-12-01 13:20:38,063 [INFO] Skipping bill 1882397 - already processed (2270/2605)
2025-12-01 13:20:38,063 [INFO] Skipping bill 1766671 - already processed (2271/2605)
2025-12-01 13:20:38,063 [INFO] Skipping bill 1775036 - already processed (2272/2605)
2025-12-01 13:20:38,063 [INFO] Skipping bill 1694305 - already processed (2273/2605)
2025-12-01 13:20:38,063 [INFO] Skipping bill 1863407 - already processed (2274/2605)
2025-12-01 13:20:38,063 [INFO] Skipping bill 1746051 - already processed (2275/2605)
2025-12-01 13:20:38,063 [INFO] Skipping bill 1882537 - already processed (2276/2605)
2025-12-01 13:20:38,063 [INFO] Skipping bill 1873551 - already processed (2277/2605)
2025-12-01 13:20:38,063 [INFO] Skipping bill 1762960 - already processed (2278/2605)
2025-12-01 13:20:38,063 [INFO] Skipping bill 1887303 - already processed (2279/2605)
2025-12-01 13:20:38,063 [INFO] Skipping bill 1887118 - already processed (2280/2605)
2025-12-01 13:20:38,063 [INFO] Skipping bill 1775679 - already processed (2281/2605)
2025-12-01 13:20:38,063 [INFO] Skipping bill 1882373 - already processed (2282/2605)
2025-12-01 13:20:38,063 [INFO] Skipping bill 1862520 - already processed (2283/2605)
2025-12-01 13:20:38,063 [INFO] Skipping bill 1886817 - already processed (2284/2605)
2025-12-01 13:20:38,063 [INFO] Skipping bill 1750558 - already processed (2285/2605)
2025-12-01 13:20:38,063 [INFO] Skipping bill 1750336 - already processed (2286/2605)
2025-12-01 13:20:38,063 [INFO] Skipping bill 1694173 - already processed (2287/2605)
2025-12-01 13:20:38,063 [INFO] Skipping bill 1864746 - already processed (2288/2605)
2025-12-01 13:20:38,063 [INFO] Skipping bill 1887915 - already processed (2289/2605)
2025-12-01 13:20:38,063 [INFO] Skipping bill 1774093 - already processed (2290/2605)
2025-12-01 13:20:38,063 [INFO] Skipping bill 1650659 - already processed (2291/2605)
2025-12-01 13:20:38,063 [INFO] Skipping bill 1694050 - already processed (2292/2605)
2025-12-01 13:20:38,063 [INFO] Skipping bill 1771092 - already processed (2293/2605)
2025-12-01 13:20:38,063 [INFO] Skipping bill 1876599 - already processed (2294/2605)
2025-12-01 13:20:38,063 [INFO] Skipping bill 1835788 - already processed (2295/2605)
2025-12-01 13:20:38,063 [INFO] Skipping bill 1782691 - already processed (2296/2605)
2025-12-01 13:20:38,063 [INFO] Skipping bill 1876668 - already processed (2297/2605)
2025-12-01 13:20:38,064 [INFO] Skipping bill 1729737 - already processed (2298/2605)
2025-12-01 13:20:38,064 [INFO] Skipping bill 1766627 - already processed (2299/2605)
2025-12-01 13:20:38,064 [INFO] Skipping bill 1885388 - already processed (2300/2605)
2025-12-01 13:20:38,064 [INFO] Skipping bill 1887130 - already processed (2301/2605)
2025-12-01 13:20:38,064 [INFO] Skipping bill 1775597 - already processed (2302/2605)
2025-12-01 13:20:38,064 [INFO] Skipping bill 1793999 - already processed (2303/2605)
2025-12-01 13:20:38,064 [INFO] Skipping bill 1789198 - already processed (2304/2605)
2025-12-01 13:20:38,064 [INFO] Skipping bill 1888330 - already processed (2305/2605)
2025-12-01 13:20:38,064 [INFO] Skipping bill 1882746 - already processed (2306/2605)
2025-12-01 13:20:38,064 [INFO] Skipping bill 1694182 - already processed (2307/2605)
2025-12-01 13:20:38,064 [INFO] Skipping bill 1860920 - already processed (2308/2605)
2025-12-01 13:20:38,064 [INFO] Skipping bill 1774448 - already processed (2309/2605)
2025-12-01 13:20:38,064 [INFO] Skipping bill 1774405 - already processed (2310/2605)
2025-12-01 13:20:38,064 [INFO] Skipping bill 1876990 - already processed (2311/2605)
2025-12-01 13:20:38,064 [INFO] Skipping bill 1876679 - already processed (2312/2605)
2025-12-01 13:20:38,064 [INFO] Skipping bill 1881973 - already processed (2313/2605)
2025-12-01 13:20:38,064 [INFO] Skipping bill 1717622 - already processed (2314/2605)
2025-12-01 13:20:38,064 [INFO] Skipping bill 1885510 - already processed (2315/2605)
2025-12-01 13:20:38,064 [INFO] Skipping bill 1871269 - already processed (2316/2605)
2025-12-01 13:20:38,064 [INFO] Skipping bill 1774266 - already processed (2317/2605)
2025-12-01 13:20:38,064 [INFO] Skipping bill 1785924 - already processed (2318/2605)
2025-12-01 13:20:38,064 [INFO] Skipping bill 1779428 - already processed (2319/2605)
2025-12-01 13:20:38,064 [INFO] Skipping bill 1775195 - already processed (2320/2605)
2025-12-01 13:20:38,064 [INFO] Skipping bill 1775134 - already processed (2321/2605)
2025-12-01 13:20:38,064 [INFO] Skipping bill 1743524 - already processed (2322/2605)
2025-12-01 13:20:38,064 [INFO] Skipping bill 1757473 - already processed (2323/2605)
2025-12-01 13:20:38,064 [INFO] Processing 2324/2605: Bill ID 1857970
2025-12-01 13:20:38,776 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:20:38,777 [ERROR] Failed to generate report for bill 1857970: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 267230 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
                                      ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 267230 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:20:39,786 [INFO] Skipping bill 1883678 - already processed (2325/2605)
2025-12-01 13:20:39,787 [INFO] Processing 2326/2605: Bill ID 1897245
2025-12-01 13:20:41,177 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:20:41,178 [ERROR] Failed to generate report for bill 1897245: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 614802 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:20:42,189 [INFO] Skipping bill 1894517 - already processed (2327/2605)
2025-12-01 13:20:42,190 [INFO] Processing 2328/2605: Bill ID 1898241
2025-12-01 13:20:43,079 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:20:43,081 [ERROR] Failed to generate report for bill 1898241: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 355244 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:20:44,091 [INFO] Processing 2329/2605: Bill ID 1879854
2025-12-01 13:20:45,128 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:20:45,130 [ERROR] Failed to generate report for bill 1879854: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 380288 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:20:46,139 [INFO] Skipping bill 1888278 - already processed (2330/2605)
2025-12-01 13:20:46,140 [INFO] Skipping bill 1879169 - already processed (2331/2605)
2025-12-01 13:20:46,140 [INFO] Skipping bill 1860989 - already processed (2332/2605)
2025-12-01 13:20:46,140 [INFO] Skipping bill 1758024 - already processed (2333/2605)
2025-12-01 13:20:46,140 [INFO] Skipping bill 1863932 - already processed (2334/2605)
2025-12-01 13:20:46,141 [INFO] Processing 2335/2605: Bill ID 1771174
2025-12-01 13:20:46,970 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:20:46,972 [ERROR] Failed to generate report for bill 1771174: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 305590 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
    ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
    ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 305590 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:20:47,978 [INFO] Skipping bill 1772600 - already processed (2336/2605)
2025-12-01 13:20:47,979 [INFO] Skipping bill 1760911 - already processed (2337/2605)
2025-12-01 13:20:47,979 [INFO] Skipping bill 1789291 - already processed (2338/2605)
2025-12-01 13:20:47,979 [INFO] Skipping bill 1764694 - already processed (2339/2605)
2025-12-01 13:20:47,979 [INFO] Skipping bill 1764770 - already processed (2340/2605)
2025-12-01 13:20:47,979 [INFO] Skipping bill 1884949 - already processed (2341/2605)
2025-12-01 13:20:47,979 [INFO] Processing 2342/2605: Bill ID 1897528
2025-12-01 13:20:48,607 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:20:48,609 [ERROR] Failed to generate report for bill 1897528: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 136190 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:20:49,617 [INFO] Processing 2343/2605: Bill ID 1898192
2025-12-01 13:20:50,040 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:20:50,041 [ERROR] Failed to generate report for bill 1898192: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 134736 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:20:51,051 [INFO] Skipping bill 1774988 - already processed (2344/2605)
2025-12-01 13:20:51,052 [INFO] Processing 2345/2605: Bill ID 1892419
2025-12-01 13:20:52,500 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:20:52,503 [ERROR] Failed to generate report for bill 1892419: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 553296 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:20:53,514 [INFO] Processing 2346/2605: Bill ID 1884946
2025-12-01 13:20:55,163 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:20:55,165 [ERROR] Failed to generate report for bill 1884946: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 691025 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:20:56,176 [INFO] Processing 2347/2605: Bill ID 1885067
2025-12-01 13:20:57,722 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:20:57,725 [ERROR] Failed to generate report for bill 1885067: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 693396 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:20:58,735 [INFO] Skipping bill 1879669 - already processed (2348/2605)
2025-12-01 13:20:58,736 [INFO] Processing 2349/2605: Bill ID 1897089
2025-12-01 13:21:00,180 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:21:00,182 [ERROR] Failed to generate report for bill 1897089: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 228560 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:21:01,188 [INFO] Skipping bill 2041135 - already processed (2350/2605)
2025-12-01 13:21:01,188 [INFO] Skipping bill 2037217 - already processed (2351/2605)
2025-12-01 13:21:01,188 [INFO] Skipping bill 2022578 - already processed (2352/2605)
2025-12-01 13:21:01,189 [INFO] Skipping bill 2045360 - already processed (2353/2605)
2025-12-01 13:21:01,189 [INFO] Skipping bill 2044380 - already processed (2354/2605)
2025-12-01 13:21:01,189 [INFO] Skipping bill 1987991 - already processed (2355/2605)
2025-12-01 13:21:01,189 [INFO] Skipping bill 2040591 - already processed (2356/2605)
2025-12-01 13:21:01,189 [INFO] Skipping bill 2044133 - already processed (2357/2605)
2025-12-01 13:21:01,189 [INFO] Skipping bill 2040128 - already processed (2358/2605)
2025-12-01 13:21:01,189 [INFO] Skipping bill 2022459 - already processed (2359/2605)
2025-12-01 13:21:01,189 [INFO] Skipping bill 2046890 - already processed (2360/2605)
2025-12-01 13:21:01,189 [INFO] Skipping bill 1948171 - already processed (2361/2605)
2025-12-01 13:21:01,189 [INFO] Skipping bill 2047758 - already processed (2362/2605)
2025-12-01 13:21:01,190 [INFO] Skipping bill 2029224 - already processed (2363/2605)
2025-12-01 13:21:01,190 [INFO] Skipping bill 2044676 - already processed (2364/2605)
2025-12-01 13:21:01,190 [INFO] Skipping bill 2041169 - already processed (2365/2605)
2025-12-01 13:21:01,190 [INFO] Skipping bill 2043072 - already processed (2366/2605)
2025-12-01 13:21:01,190 [INFO] Skipping bill 2015628 - already processed (2367/2605)
2025-12-01 13:21:01,190 [INFO] Skipping bill 2029917 - already processed (2368/2605)
2025-12-01 13:21:01,190 [INFO] Skipping bill 2029601 - already processed (2369/2605)
2025-12-01 13:21:01,190 [INFO] Skipping bill 1988067 - already processed (2370/2605)
2025-12-01 13:21:01,190 [INFO] Skipping bill 1964814 - already processed (2371/2605)
2025-12-01 13:21:01,190 [INFO] Skipping bill 2043727 - already processed (2372/2605)
2025-12-01 13:21:01,191 [INFO] Skipping bill 1988016 - already processed (2373/2605)
2025-12-01 13:21:01,191 [INFO] Skipping bill 2037684 - already processed (2374/2605)
2025-12-01 13:21:01,191 [INFO] Skipping bill 2029576 - already processed (2375/2605)
2025-12-01 13:21:01,191 [INFO] Skipping bill 2008640 - already processed (2376/2605)
2025-12-01 13:21:01,191 [INFO] Skipping bill 2042761 - already processed (2377/2605)
2025-12-01 13:21:01,191 [INFO] Skipping bill 2043628 - already processed (2378/2605)
2025-12-01 13:21:01,191 [INFO] Skipping bill 2039925 - already processed (2379/2605)
2025-12-01 13:21:01,191 [INFO] Skipping bill 1990438 - already processed (2380/2605)
2025-12-01 13:21:01,191 [INFO] Skipping bill 2014950 - already processed (2381/2605)
2025-12-01 13:21:01,191 [INFO] Skipping bill 2046871 - already processed (2382/2605)
2025-12-01 13:21:01,192 [INFO] Skipping bill 2008541 - already processed (2383/2605)
2025-12-01 13:21:01,192 [INFO] Skipping bill 2019807 - already processed (2384/2605)
2025-12-01 13:21:01,192 [INFO] Skipping bill 2032195 - already processed (2385/2605)
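Every failure in this run is the same 400: the serialized bill JSON handed to `chain.invoke({"bill_json": bill_json})` in `create_detailed_report` exceeds the model's 128,000-token window (305k-693k tokens above). A minimal mitigation sketch, assuming the bulk of the payload sits in one large text field; the `full_text` field name, the reserved-token margin, and the 4-characters-per-token ratio are all assumptions for illustration, not values taken from `generate_reports.py`:

```python
# Hypothetical pre-flight truncation for oversized bills.
# CHARS_PER_TOKEN is a crude heuristic; a real tokenizer (e.g. tiktoken)
# would give an exact count.
import json

MAX_CONTEXT_TOKENS = 128_000   # model limit reported in the 400 errors
RESERVED_TOKENS = 8_000        # assumed headroom for prompt template + reply
CHARS_PER_TOKEN = 4            # assumed average; not exact

def truncate_bill_json(bill: dict, text_field: str = "full_text") -> str:
    """Serialize a bill, shrinking its largest text field until the
    payload fits an approximate token budget."""
    budget_chars = (MAX_CONTEXT_TOKENS - RESERVED_TOKENS) * CHARS_PER_TOKEN
    payload = json.dumps(bill)
    if len(payload) <= budget_chars:
        return payload  # already fits; send unchanged
    # Trim only the big text field so metadata survives intact.
    overshoot = len(payload) - budget_chars
    text = bill.get(text_field, "")
    trimmed = dict(bill)
    trimmed[text_field] = text[: max(0, len(text) - overshoot)] + " ...[truncated]"
    return json.dumps(trimmed)
```

Calling this before `chain.invoke` (i.e. `result = chain.invoke({"bill_json": truncate_bill_json(bill)})`) would turn these hard failures into degraded-but-successful reports; chunked summarization of the full text would preserve more detail at the cost of extra API calls.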
2025-12-01 13:21:01,192 [INFO] Skipping bill 2032174 - already processed (2386/2605)
2025-12-01 13:21:01,192 [INFO] Skipping bill 2053144 - already processed (2387/2605)
2025-12-01 13:21:01,192 [INFO] Skipping bill 2045181 - already processed (2388/2605)
2025-12-01 13:21:01,192 [INFO] Skipping bill 2035367 - already processed (2389/2605)
2025-12-01 13:21:01,192 [INFO] Skipping bill 2022504 - already processed (2390/2605)
2025-12-01 13:21:01,192 [INFO] Skipping bill 2051717 - already processed (2391/2605)
2025-12-01 13:21:01,192 [INFO] Skipping bill 2040216 - already processed (2392/2605)
2025-12-01 13:21:01,192 [INFO] Skipping bill 2038243 - already processed (2393/2605)
2025-12-01 13:21:01,193 [INFO] Skipping bill 2038240 - already processed (2394/2605)
2025-12-01 13:21:01,193 [INFO] Skipping bill 1958579 - already processed (2395/2605)
2025-12-01 13:21:01,193 [INFO] Skipping bill 2041151 - already processed (2396/2605)
2025-12-01 13:21:01,193 [INFO] Skipping bill 2040068 - already processed (2397/2605)
2025-12-01 13:21:01,193 [INFO] Skipping bill 2051901 - already processed (2398/2605)
2025-12-01 13:21:01,193 [INFO] Skipping bill 2035878 - already processed (2399/2605)
2025-12-01 13:21:01,193 [INFO] Skipping bill 2043698 - already processed (2400/2605)
2025-12-01 13:21:01,193 [INFO] Skipping bill 2043764 - already processed (2401/2605)
2025-12-01 13:21:01,193 [INFO] Skipping bill 2047702 - already processed (2402/2605)
2025-12-01 13:21:01,193 [INFO] Skipping bill 2034541 - already processed (2403/2605)
2025-12-01 13:21:01,194 [INFO] Skipping bill 2036108 - already processed (2404/2605)
2025-12-01 13:21:01,194 [INFO] Skipping bill 2052002 - already processed (2405/2605)
2025-12-01 13:21:01,194 [INFO] Skipping bill 2036914 - already processed (2406/2605)
2025-12-01 13:21:01,194 [INFO] Skipping bill 2032053 - already processed (2407/2605)
2025-12-01 13:21:01,194 [INFO] Skipping bill 2032068 - already processed (2408/2605)
2025-12-01 13:21:01,194 [INFO] Skipping bill 2045357 - already processed (2409/2605)
2025-12-01 13:21:01,194 [INFO] Skipping bill 2043047 - already processed (2410/2605)
2025-12-01 13:21:01,194 [INFO] Skipping bill 2040306 - already processed (2411/2605)
2025-12-01 13:21:01,194 [INFO] Skipping bill 1916986 - already processed (2412/2605)
2025-12-01 13:21:01,194 [INFO] Skipping bill 2039821 - already processed (2413/2605)
2025-12-01 13:21:01,194 [INFO] Skipping bill 2047752 - already processed (2414/2605)
2025-12-01 13:21:01,194 [INFO] Skipping bill 2046891 - already processed (2415/2605)
2025-12-01 13:21:01,194 [INFO] Skipping bill 2040880 - already processed (2416/2605)
2025-12-01 13:21:01,194 [INFO] Skipping bill 2040851 - already processed (2417/2605)
2025-12-01 13:21:01,194 [INFO] Skipping bill 2043722 - already processed (2418/2605)
2025-12-01 13:21:01,195 [INFO] Skipping bill 1987950 - already processed (2419/2605)
2025-12-01 13:21:01,195 [INFO] Skipping bill 2040439 - already processed (2420/2605)
2025-12-01 13:21:01,195 [INFO] Skipping bill 1901865 - already processed (2421/2605)
2025-12-01 13:21:01,195 [INFO] Skipping bill 1905283 - already processed (2422/2605)
2025-12-01 13:21:01,195 [INFO] Skipping bill 2042107 - already processed (2423/2605)
2025-12-01 13:21:01,195 [INFO] Skipping bill 1986270 - already processed (2424/2605)
2025-12-01 13:21:01,195 [INFO] Skipping bill 2044713 - already processed (2425/2605)
2025-12-01 13:21:01,195 [INFO] Skipping bill 2041468 - already processed (2426/2605)
2025-12-01 13:21:01,195 [INFO] Skipping bill 1983900 - already processed (2427/2605)
2025-12-01 13:21:01,195 [INFO] Skipping bill 2020217 - already processed (2428/2605)
2025-12-01 13:21:01,195 [INFO] Skipping bill 2038216 - already processed (2429/2605)
2025-12-01 13:21:01,195 [INFO] Skipping bill 2043604 - already processed (2430/2605)
2025-12-01 13:21:01,195 [INFO] Skipping bill 2045365 - already processed (2431/2605)
2025-12-01 13:21:01,195 [INFO] Skipping bill 2043961 - already processed (2432/2605)
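The skip/resume bookkeeping visible in these records (bills already present in data/bill_reports.json are skipped with a running counter) can be sketched roughly as below. Only the function name create_reports_with_resume, the file path, and the log message formats come from this log; the dict-keyed-by-bill-ID storage, the bill_id key, and everything else are assumptions about how such a resume loop might look, not the project's actual implementation.

```python
import json
import logging

logging.basicConfig(format="%(asctime)s [%(levelname)s] %(message)s", level=logging.INFO)
log = logging.getLogger(__name__)

def create_reports_with_resume(bills, report_path="data/bill_reports.json"):
    """Hypothetical sketch: skip bills whose IDs already appear in the saved report file."""
    try:
        with open(report_path) as f:
            reports = json.load(f)  # assumed: a dict keyed by bill ID
    except FileNotFoundError:
        reports = {}
    total = len(bills)
    for i, bill in enumerate(bills, start=1):
        bill_id = str(bill["bill_id"])  # "bill_id" key is an assumption
        if bill_id in reports:
            log.info("Skipping bill %s - already processed (%d/%d)", bill_id, i, total)
            continue
        log.info("Processing %d/%d: Bill ID %s", i, total, bill_id)
        # reports[bill_id] = create_detailed_report(bill, llm=llm)  # per the traceback below
    return reports
```

On a second run over the same bill list, every ID already in the file takes the `continue` branch, which is why the log shows long runs of "Skipping bill" with only occasional "Processing" entries.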
2025-12-01 13:21:01,195 [INFO] Skipping bill 2044138 - already processed (2433/2605)
2025-12-01 13:21:01,195 [INFO] Skipping bill 2040354 - already processed (2434/2605)
2025-12-01 13:21:01,195 [INFO] Skipping bill 2053157 - already processed (2435/2605)
2025-12-01 13:21:01,195 [INFO] Skipping bill 1984221 - already processed (2436/2605)
2025-12-01 13:21:01,196 [INFO] Skipping bill 2033224 - already processed (2437/2605)
2025-12-01 13:21:01,196 [INFO] Skipping bill 2033186 - already processed (2438/2605)
2025-12-01 13:21:01,196 [INFO] Skipping bill 1970505 - already processed (2439/2605)
2025-12-01 13:21:01,196 [INFO] Skipping bill 2036132 - already processed (2440/2605)
2025-12-01 13:21:01,196 [INFO] Skipping bill 2033542 - already processed (2441/2605)
2025-12-01 13:21:01,196 [INFO] Skipping bill 2027361 - already processed (2442/2605)
2025-12-01 13:21:01,196 [INFO] Skipping bill 2040866 - already processed (2443/2605)
2025-12-01 13:21:01,196 [INFO] Skipping bill 2043357 - already processed (2444/2605)
2025-12-01 13:21:01,196 [INFO] Skipping bill 2041757 - already processed (2445/2605)
2025-12-01 13:21:01,196 [INFO] Skipping bill 2042653 - already processed (2446/2605)
2025-12-01 13:21:01,196 [INFO] Skipping bill 2043161 - already processed (2447/2605)
2025-12-01 13:21:01,196 [INFO] Skipping bill 2052989 - already processed (2448/2605)
2025-12-01 13:21:01,196 [INFO] Skipping bill 1965963 - already processed (2449/2605)
2025-12-01 13:21:01,196 [INFO] Skipping bill 2045735 - already processed (2450/2605)
2025-12-01 13:21:01,196 [INFO] Skipping bill 1999388 - already processed (2451/2605)
2025-12-01 13:21:01,196 [INFO] Skipping bill 2051352 - already processed (2452/2605)
2025-12-01 13:21:01,196 [INFO] Processing 2453/2605: Bill ID 2039530
2025-12-01 13:21:02,843 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:21:02,845 [ERROR] Failed to generate report for bill 2039530: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 640978 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 640978 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:21:03,853 [INFO] Skipping bill 2051886 - already processed (2454/2605)
2025-12-01 13:21:03,855 [INFO] Skipping bill 2043562 - already processed (2455/2605)
2025-12-01 13:21:03,855 [INFO] Skipping bill 1970493 - already processed (2456/2605)
2025-12-01 13:21:03,855 [INFO] Skipping bill 2037978 - already processed (2457/2605)
2025-12-01 13:21:03,855 [INFO] Skipping bill 2040318 - already processed (2458/2605)
2025-12-01 13:21:03,856 [INFO] Skipping bill 2041104 - already processed (2459/2605)
2025-12-01 13:21:03,856 [INFO] Skipping bill 2043947 - already processed (2460/2605)
2025-12-01 13:21:03,856 [INFO] Skipping bill 2038111 - already processed (2461/2605)
2025-12-01 13:21:03,856 [INFO] Skipping bill 1982722 - already processed (2462/2605)
2025-12-01 13:21:03,856 [INFO] Skipping bill 2043896 - already processed (2463/2605)
2025-12-01 13:21:03,856 [INFO] Skipping bill 2012870 - already processed (2464/2605)
2025-12-01 13:21:03,856 [INFO] Skipping bill 2007066 - already processed (2465/2605)
2025-12-01 13:21:03,856 [INFO] Skipping bill 1968860 - already processed (2466/2605)
2025-12-01 13:21:03,856 [INFO] Skipping bill 2029307 - already processed (2467/2605)
2025-12-01 13:21:03,856 [INFO] Skipping bill 2041255 - already processed (2468/2605)
2025-12-01 13:21:03,857 [INFO] Skipping bill 2033191 - already processed (2469/2605)
2025-12-01 13:21:03,857 [INFO] Skipping bill 2043715 - already processed (2470/2605)
2025-12-01 13:21:03,857 [INFO] Skipping bill 2036439 - already processed (2471/2605)
2025-12-01 13:21:03,857 [INFO] Skipping bill 1968282 - already processed (2472/2605)
2025-12-01 13:21:03,857 [INFO] Skipping bill 2039688 - already processed (2473/2605)
2025-12-01 13:21:03,857 [INFO] Skipping bill 2038212 - already processed (2474/2605)
2025-12-01 13:21:03,857 [INFO] Skipping bill 1987966 - already processed (2475/2605)
2025-12-01 13:21:03,857 [INFO] Skipping bill 2031847 - already processed (2476/2605)
2025-12-01 13:21:03,857 [INFO] Skipping bill 1970497 - already processed (2477/2605)
2025-12-01 13:21:03,857 [INFO] Skipping bill 1963353 - already processed (2478/2605)
2025-12-01 13:21:03,857 [INFO] Skipping bill 2046183 - already processed (2479/2605)
2025-12-01 13:21:03,857 [INFO] Skipping bill 2005587 - already processed (2480/2605)
2025-12-01 13:21:03,858 [INFO] Skipping bill 2039178 - already processed (2481/2605)
2025-12-01 13:21:03,858 [INFO] Skipping bill 2041269 - already processed (2482/2605)
2025-12-01 13:21:03,858 [INFO] Skipping bill 2043688 - already processed (2483/2605)
2025-12-01 13:21:03,858 [INFO] Skipping bill 1927158 - already processed (2484/2605)
2025-12-01 13:21:03,858 [INFO] Skipping bill 1987972 - already processed (2485/2605)
2025-12-01 13:21:03,858 [INFO] Skipping bill 2035895 - already processed (2486/2605)
2025-12-01 13:21:03,858 [INFO] Skipping bill 2037256 - already processed (2487/2605)
2025-12-01 13:21:03,858 [INFO] Skipping bill 2043043 - already processed (2488/2605)
2025-12-01 13:21:03,858 [INFO] Skipping bill 2031888 - already processed (2489/2605)
2025-12-01 13:21:03,858 [INFO] Skipping bill 2043344 - already processed (2490/2605)
2025-12-01 13:21:03,858 [INFO] Skipping bill 2043890 - already processed (2491/2605)
2025-12-01 13:21:03,858 [INFO] Skipping bill 1936780 - already processed (2492/2605)
2025-12-01 13:21:03,858 [INFO] Skipping bill 2023141 - already processed (2493/2605)
2025-12-01 13:21:03,858 [INFO] Skipping bill 2022467 - already processed (2494/2605)
2025-12-01 13:21:03,858 [INFO] Skipping bill 2022582 - already processed (2495/2605)
2025-12-01 13:21:03,858 [INFO] Skipping bill 1970488 - already processed (2496/2605)
2025-12-01 13:21:03,858 [INFO] Skipping bill 1988006 - already processed (2497/2605)
2025-12-01 13:21:03,858 [INFO] Skipping bill 1933954 - already processed (2498/2605)
2025-12-01 13:21:03,859 [INFO] Skipping bill 1955921 - already processed (2499/2605)
2025-12-01 13:21:03,859 [INFO] Skipping bill 1963338 - already processed (2500/2605)
2025-12-01 13:21:03,859 [INFO] Skipping bill 2015697 - already processed (2501/2605)
2025-12-01 13:21:03,859 [INFO] Skipping bill 2020008 - already processed (2502/2605)
2025-12-01 13:21:03,859 [INFO] Skipping bill 2021940 - already processed (2503/2605)
2025-12-01 13:21:03,859 [INFO] Skipping bill 2022593 - already processed (2504/2605)
2025-12-01 13:21:03,859 [INFO] Skipping bill 2026569 - already processed (2505/2605)
2025-12-01 13:21:03,859 [INFO] Skipping bill 2027464 - already processed (2506/2605)
2025-12-01 13:21:03,859 [INFO] Skipping bill 2018800 - already processed (2507/2605)
2025-12-01 13:21:03,859 [INFO] Skipping bill 2028784 - already processed (2508/2605)
2025-12-01 13:21:03,859 [INFO] Skipping bill 2029580 - already processed (2509/2605)
2025-12-01 13:21:03,859 [INFO] Skipping bill 2031938 - already processed (2510/2605)
2025-12-01 13:21:03,859 [INFO] Skipping bill 2032128 - already processed (2511/2605)
2025-12-01 13:21:03,859 [INFO] Skipping bill 1947775 - already processed (2512/2605)
2025-12-01 13:21:03,859 [INFO] Skipping bill 2035420 - already processed (2513/2605)
2025-12-01 13:21:03,859 [INFO] Skipping bill 2037229 - already processed (2514/2605)
2025-12-01 13:21:03,859 [INFO] Skipping bill 2039570 - already processed (2515/2605)
2025-12-01 13:21:03,859 [INFO] Skipping bill 2042103 - already processed (2516/2605)
2025-12-01 13:21:03,859 [INFO] Skipping bill 2043758 - already processed (2517/2605)
2025-12-01 13:21:03,859 [INFO] Skipping bill 2046719 - already processed (2518/2605)
2025-12-01 13:21:03,859 [INFO] Skipping bill 2052024 - already processed (2519/2605)
2025-12-01 13:21:03,859 [INFO] Skipping bill 2052050 - already processed (2520/2605)
2025-12-01 13:21:03,859 [INFO] Skipping bill 1979616 - already processed (2521/2605)
2025-12-01 13:21:03,859 [INFO] Skipping bill 2053486 - already processed (2522/2605)
2025-12-01 13:21:03,859 [INFO] Skipping bill 2019782 - already processed (2523/2605)
2025-12-01 13:21:03,859 [INFO] Skipping bill 2017847 - already processed (2524/2605)
2025-12-01 13:21:03,859 [INFO] Skipping bill 2018869 - already processed (2525/2605)
2025-12-01 13:21:03,859 [INFO] Skipping bill 2040352 - already processed (2526/2605)
2025-12-01 13:21:03,859 [INFO] Skipping bill 2029980 - already processed (2527/2605)
2025-12-01 13:21:03,859 [INFO] Skipping bill 2018578 - already processed (2528/2605)
2025-12-01 13:21:03,860 [INFO] Skipping bill 2043696 - already processed (2529/2605)
2025-12-01 13:21:03,860 [INFO] Skipping bill 2008600 - already processed (2530/2605)
2025-12-01 13:21:03,860 [INFO] Skipping bill 2037247 - already processed (2531/2605)
2025-12-01 13:21:03,860 [INFO] Skipping bill 2037249 - already processed (2532/2605)
2025-12-01 13:21:03,860 [INFO] Skipping bill 2035609 - already processed (2533/2605)
2025-12-01 13:21:03,860 [INFO] Skipping bill 2038921 - already processed (2534/2605)
2025-12-01 13:21:03,860 [INFO] Skipping bill 2053374 - already processed (2535/2605)
2025-12-01 13:21:03,860 [INFO] Skipping bill 2021715 - already processed (2536/2605)
2025-12-01 13:21:03,860 [INFO] Skipping bill 2021641 - already processed (2537/2605)
2025-12-01 13:21:03,860 [INFO] Skipping bill 1901818 - already processed (2538/2605)
2025-12-01 13:21:03,860 [INFO] Skipping bill 2023062 - already processed (2539/2605)
2025-12-01 13:21:03,860 [INFO] Skipping bill 2044841 - already processed (2540/2605)
2025-12-01 13:21:03,860 [INFO] Skipping bill 2043173 - already processed (2541/2605)
2025-12-01 13:21:03,860 [INFO] Skipping bill 1948187 - already processed (2542/2605)
2025-12-01 13:21:03,860 [INFO] Skipping bill 2038257 - already processed (2543/2605)
2025-12-01 13:21:03,860 [INFO] Skipping bill 2053381 - already processed (2544/2605)
2025-12-01 13:21:03,860 [INFO] Skipping bill 2053499 - already processed (2545/2605)
2025-12-01 13:21:03,860 [INFO] Skipping bill 2053841 - already processed (2546/2605)
2025-12-01 13:21:03,860 [INFO] Skipping bill 2054336 - already processed (2547/2605)
2025-12-01 13:21:03,860 [INFO] Skipping bill 2054344 - already processed (2548/2605)
2025-12-01 13:21:03,860 [INFO] Skipping bill 2037277 - already processed (2549/2605)
2025-12-01 13:21:03,860 [INFO] Skipping bill 1941772 - already processed (2550/2605)
2025-12-01 13:21:03,860 [INFO] Skipping bill 2043199 - already processed (2551/2605)
2025-12-01 13:21:03,860 [INFO] Skipping bill 2041162 - already processed (2552/2605)
2025-12-01 13:21:03,861 [INFO] Skipping bill 2038970 - already processed (2553/2605)
2025-12-01 13:21:03,861 [INFO] Skipping bill 2039918 - already processed (2554/2605)
2025-12-01 13:21:03,861 [INFO] Skipping bill 2032140 - already processed (2555/2605)
2025-12-01 13:21:03,861 [INFO] Skipping bill 2029941 - already processed (2556/2605)
2025-12-01 13:21:03,861 [INFO] Skipping bill 2038420 - already processed (2557/2605)
2025-12-01 13:21:03,861 [INFO] Skipping bill 1943770 - already processed (2558/2605)
2025-12-01 13:21:03,861 [INFO] Skipping bill 1979653 - already processed (2559/2605)
2025-12-01 13:21:03,861 [INFO] Skipping bill 1970677 - already processed (2560/2605)
2025-12-01 13:21:03,861 [INFO] Skipping bill 1988332 - already processed (2561/2605)
2025-12-01 13:21:03,861 [INFO] Skipping bill 1939613 - already processed (2562/2605)
2025-12-01 13:21:03,861 [INFO] Skipping bill 2043104 - already processed (2563/2605)
2025-12-01 13:21:03,861 [INFO] Skipping bill 2000425 - already processed (2564/2605)
2025-12-01 13:21:03,861 [INFO] Skipping bill 2028805 - already processed (2565/2605)
2025-12-01 13:21:03,861 [INFO] Skipping bill 2023111 - already processed (2566/2605)
2025-12-01 13:21:03,861 [INFO] Processing 2567/2605: Bill ID 2032901
2025-12-01 13:21:04,993 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:21:04,996 [ERROR] Failed to generate report for bill 2032901: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 455298 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 455298 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:21:06,009 [INFO] Skipping bill 2051603 - already processed (2568/2605)
2025-12-01 13:21:06,010 [INFO] Skipping bill 2036437 - already processed (2569/2605)
2025-12-01 13:21:06,010 [INFO] Skipping bill 2036475 - already processed (2570/2605)
2025-12-01 13:21:06,010 [INFO] Skipping bill 2032059 - already processed (2571/2605)
2025-12-01 13:21:06,010 [INFO] Skipping bill 2007053 - already processed (2572/2605)
2025-12-01 13:21:06,010 [INFO] Skipping bill 2000456 - already processed (2573/2605)
2025-12-01 13:21:06,010 [INFO] Skipping bill 1958611 - already processed (2574/2605)
2025-12-01 13:21:06,010 [INFO] Skipping bill 2016811 - already processed (2575/2605)
2025-12-01 13:21:06,010 [INFO] Skipping bill 1926891 - already processed (2576/2605)
2025-12-01 13:21:06,010 [INFO] Skipping bill 1943799 - already processed (2577/2605)
2025-12-01 13:21:06,010 [INFO] Skipping bill 2039061 - already processed (2578/2605)
2025-12-01 13:21:06,010 [INFO] Skipping bill 1961580 - already processed (2579/2605)
2025-12-01 13:21:06,011 [INFO] Skipping bill 1927000 - already processed (2580/2605)
2025-12-01 13:21:06,011 [INFO] Skipping bill 2023233 - already processed (2581/2605)
2025-12-01 13:21:06,011 [INFO] Skipping bill 1947802 - already processed (2582/2605)
2025-12-01 13:21:06,011 [INFO] Skipping bill 2022615 - already processed (2583/2605)
2025-12-01 13:21:06,011 [INFO] Skipping bill 2022439 - already processed (2584/2605)
2025-12-01 13:21:06,011 [INFO] Skipping bill 2033390 - already processed (2585/2605)
2025-12-01 13:21:06,011 [INFO] Skipping bill 2026636 - already processed (2586/2605)
2025-12-01 13:21:06,011 [INFO] Skipping bill 2047438 - already processed (2587/2605)
2025-12-01 13:21:06,011 [INFO] Skipping bill 2036925 - already processed (2588/2605)
2025-12-01 13:21:06,011 [INFO] Skipping bill 1963365 - already processed (2589/2605)
2025-12-01 13:21:06,011 [INFO] Skipping bill 2043448 - already processed (2590/2605)
2025-12-01 13:21:06,011 [INFO] Skipping bill 1994349 - already processed (2591/2605)
2025-12-01 13:21:06,011 [INFO] Skipping bill 2023224 - already processed (2592/2605)
2025-12-01 13:21:06,011 [INFO] Skipping bill 2028140 - already processed (2593/2605)
2025-12-01 13:21:06,011 [INFO] Skipping bill 2032003 - already processed (2594/2605)
2025-12-01 13:21:06,011 [INFO] Skipping bill 2039157 - already processed (2595/2605)
2025-12-01 13:21:06,012 [INFO] Skipping bill 2044179 - already processed (2596/2605)
2025-12-01 13:21:06,012 [INFO] Skipping bill 2035673 - already processed (2597/2605)
2025-12-01 13:21:06,012 [INFO] Skipping bill 2044473 - already processed (2598/2605)
2025-12-01 13:21:06,012 [INFO] Processing 2599/2605: Bill ID 1990400
2025-12-01 13:21:06,732 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:21:06,734 [ERROR] Failed to generate report for bill 1990400: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 256134 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 256134 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:21:07,744 [INFO] Skipping bill 2027724 - already processed (2600/2605)
2025-12-01 13:21:07,745 [INFO] Processing 2601/2605: Bill ID 2028171
2025-12-01 13:21:08,270 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:21:08,271 [ERROR] Failed to generate report for bill 2028171: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 134543 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 134543 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-01 13:21:09,281 [INFO] Processing 2602/2605: Bill ID 1966444 2025-12-01 13:21:09,805 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-01 13:21:09,808 [ERROR] Failed to generate report for bill 1966444: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 171945 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:21:10,815 [INFO] Processing 2603/2605: Bill ID 2038906
2025-12-01 13:21:11,342 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:21:11,344 [ERROR] Failed to generate report for bill 2038906: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 192175 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:21:12,354 [INFO] Processing 2604/2605: Bill ID 1994544
2025-12-01 13:21:12,949 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-01 13:21:12,952 [ERROR] Failed to generate report for bill 1994544: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 188475 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-01 13:21:13,962 [INFO] Skipping bill 2041289 - already processed (2605/2605)
2025-12-01 13:21:14,021 [INFO] Saved 2605 reports to data/bill_reports.json
2025-12-01 13:21:14,021 [INFO] Report generation complete!
2025-12-01 13:21:14,021 [INFO] Total bills: 2605
2025-12-01 13:21:14,021 [INFO] Successfully processed: 0
2025-12-01 13:21:14,021 [INFO] Skipped (already done): 2487
2025-12-01 13:21:14,021 [INFO] Errors: 118
2025-12-03 11:03:49,126 [INFO] Loaded 2605 existing reports from data/bill_reports.json
2025-12-03 11:03:49,127 [INFO] Starting report generation for 2608 bills
2025-12-03 11:03:49,127 [INFO] Skipping bill 1769530 - already processed (1/2608)
2025-12-03 11:03:49,127 [INFO] Skipping bill 1765118 - already processed (2/2608)
2025-12-03 11:03:49,127 [INFO] Skipping bill 1745017 - already processed (3/2608)
2025-12-03 11:03:49,127 [INFO] Skipping bill 1745230 - already processed (4/2608)
2025-12-03 11:03:49,127 [INFO] Skipping bill 1847915 - already processed (5/2608)
2025-12-03 11:03:49,127 [INFO] Skipping bill 1847210 - already processed (6/2608)
2025-12-03 11:03:49,127 [INFO] Skipping bill 1847980 - already processed (7/2608)
2025-12-03 11:03:49,127 [INFO] Skipping bill 1840627 - already processed (8/2608)
2025-12-03 11:03:49,127 [INFO] Skipping bill 1840340 - already processed (9/2608)
2025-12-03 11:03:49,127 [INFO] Skipping bill 2019785 - already processed (10/2608)
2025-12-03 11:03:49,127 [INFO] Skipping bill 1983607 - already processed (11/2608)
2025-12-03 11:03:49,127 [INFO] Skipping bill 2019702 - already processed (12/2608)
2025-12-03 11:03:49,127 [INFO] Skipping bill 1987220 - already processed (13/2608)
2025-12-03 11:03:49,127 [INFO] Skipping bill 2022389 - already processed (14/2608)
2025-12-03 11:03:49,127 [INFO] Skipping bill 1959465 - already processed (15/2608)
2025-12-03 11:03:49,127 [INFO] Skipping bill 2023982 - already processed (16/2608)
2025-12-03 11:03:49,127 [INFO] Skipping bill 2019732 - already processed (17/2608)
2025-12-03 11:03:49,127 [INFO] Skipping bill 1969654 - already processed (18/2608)
2025-12-03 11:03:49,127 [INFO] Skipping bill 1956622 - already processed (19/2608)
2025-12-03 11:03:49,127 [INFO] Skipping bill 1957166 -
already processed (20/2608)
2025-12-03 11:03:49,127 [INFO] Skipping bill 1869518 - already processed (21/2608)
2025-12-03 11:03:49,127 [INFO] Skipping bill 1813560 - already processed (22/2608)
2025-12-03 11:03:49,127 [INFO] Skipping bill 1836190 - already processed (23/2608)
2025-12-03 11:03:49,127 [INFO] Skipping bill 1851112 - already processed (24/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1745943 - already processed (25/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1737840 - already processed (26/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1814309 - already processed (27/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1851143 - already processed (28/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1984991 - already processed (29/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1912439 - already processed (30/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1912476 - already processed (31/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1940708 - already processed (32/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1935103 - already processed (33/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1685926 - already processed (34/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1657717 - already processed (35/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1683096 - already processed (36/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1828964 - already processed (37/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1830782 - already processed (38/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1829010 - already processed (39/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1810349 - already processed (40/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1810356 - already processed (41/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1804209 - already processed (42/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1830673 - already processed (43/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1923768 - already processed (44/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1935042 - already processed (45/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1948089 - already processed (46/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1917064 - already processed (47/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1964274 - already processed (48/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1949161 - already processed (49/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1938396 - already processed (50/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1955446 - already processed (51/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1946736 - already processed (52/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 2037727 - already processed (53/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1730253 - already processed (54/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1721706 - already processed (55/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1975090 - already processed (56/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1946146 - already processed (57/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 2018186 - already processed (58/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 2011735 - already processed (59/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1897622 - already processed (60/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1973543 - already processed (61/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 2009462 - already processed (62/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 2011658 - already processed (63/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1944017 - already processed (64/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 1892641 - already processed (65/2608)
2025-12-03 11:03:49,128 [INFO] Skipping bill 2010078 - already processed (66/2608)
2025-12-03 11:03:49,129 [INFO] Skipping bill 1915632 - already processed (67/2608)
2025-12-03 11:03:49,129 [INFO] Skipping bill 1996393 - already
processed (68/2608)
2025-12-03 11:03:49,129 [INFO] Processing 69/2608: Bill ID 1972479
2025-12-03 11:03:51,190 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:03:51,193 [ERROR] Failed to generate report for bill 1972479: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 512372 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:03:52,208 [INFO] Skipping bill 1848589 - already processed (70/2608)
2025-12-03 11:03:52,209 [INFO] Skipping bill 1796695 - already processed (71/2608)
2025-12-03 11:03:52,209 [INFO] Skipping bill 1834299 - already processed (72/2608)
2025-12-03 11:03:52,209 [INFO] Skipping bill 1840453 - already processed (73/2608)
2025-12-03 11:03:52,209 [INFO] Skipping bill 1847401 - already processed (74/2608)
2025-12-03 11:03:52,209 [INFO] Skipping bill 1849339 - already processed (75/2608)
2025-12-03 11:03:52,210 [INFO] Skipping bill 1845122 - already processed (76/2608)
2025-12-03 11:03:52,210 [INFO] Skipping bill 1796692 - already processed (77/2608)
2025-12-03 11:03:52,211 [INFO] Skipping bill 1846289 - already processed (78/2608)
2025-12-03 11:03:52,211 [INFO] Skipping bill 1813231 - already processed (79/2608)
2025-12-03 11:03:52,211 [INFO] Skipping bill 1848433 - already processed (80/2608)
2025-12-03 11:03:52,211 [INFO] Skipping bill 1796691 - already processed
(81/2608)
2025-12-03 11:03:52,211 [INFO] Skipping bill 1848536 - already processed (82/2608)
2025-12-03 11:03:52,211 [INFO] Skipping bill 1819737 - already processed (83/2608)
2025-12-03 11:03:52,211 [INFO] Skipping bill 1829037 - already processed (84/2608)
2025-12-03 11:03:52,211 [INFO] Skipping bill 1712200 - already processed (85/2608)
2025-12-03 11:03:52,211 [INFO] Skipping bill 1848424 - already processed (86/2608)
2025-12-03 11:03:52,212 [INFO] Skipping bill 1814918 - already processed (87/2608)
2025-12-03 11:03:52,212 [INFO] Skipping bill 1686429 - already processed (88/2608)
2025-12-03 11:03:52,212 [INFO] Skipping bill 1848359 - already processed (89/2608)
2025-12-03 11:03:52,212 [INFO] Skipping bill 1697069 - already processed (90/2608)
2025-12-03 11:03:52,212 [INFO] Skipping bill 1848453 - already processed (91/2608)
2025-12-03 11:03:52,212 [INFO] Skipping bill 1849513 - already processed (92/2608)
2025-12-03 11:03:52,212 [INFO] Skipping bill 1848521 - already processed (93/2608)
2025-12-03 11:03:52,212 [INFO] Skipping bill 1848425 - already processed (94/2608)
2025-12-03 11:03:52,212 [INFO] Skipping bill 1702816 - already processed (95/2608)
2025-12-03 11:03:52,212 [INFO] Skipping bill 1849367 - already processed (96/2608)
2025-12-03 11:03:52,212 [INFO] Skipping bill 1849520 - already processed (97/2608)
2025-12-03 11:03:52,212 [INFO] Skipping bill 1848530 - already processed (98/2608)
2025-12-03 11:03:52,212 [INFO] Skipping bill 1712027 - already processed (99/2608)
2025-12-03 11:03:52,212 [INFO] Skipping bill 1849659 - already processed (100/2608)
2025-12-03 11:03:52,212 [INFO] Skipping bill 1848478 - already processed (101/2608)
2025-12-03 11:03:52,212 [INFO] Skipping bill 1848387 - already processed (102/2608)
2025-12-03 11:03:52,212 [INFO] Skipping bill 1845137 - already processed (103/2608)
2025-12-03 11:03:52,212 [INFO] Skipping bill 1812205 - already processed (104/2608)
2025-12-03 11:03:52,213 [INFO] Skipping bill 1798416 - already processed (105/2608)
2025-12-03 11:03:52,213 [INFO] Skipping bill 1847351 - already processed (106/2608)
2025-12-03 11:03:52,213 [INFO] Skipping bill 1693943 - already processed (107/2608)
2025-12-03 11:03:52,213 [INFO] Skipping bill 1686454 - already processed (108/2608)
2025-12-03 11:03:52,213 [INFO] Skipping bill 1847404 - already processed (109/2608)
2025-12-03 11:03:52,213 [INFO] Skipping bill 1683775 - already processed (110/2608)
2025-12-03 11:03:52,213 [INFO] Skipping bill 1835452 - already processed (111/2608)
2025-12-03 11:03:52,213 [INFO] Skipping bill 1709727 - already processed (112/2608)
2025-12-03 11:03:52,213 [INFO] Skipping bill 1849724 - already processed (113/2608)
2025-12-03 11:03:52,213 [INFO] Skipping bill 1761500 - already processed (114/2608)
2025-12-03 11:03:52,213 [INFO] Skipping bill 1697048 - already processed (115/2608)
2025-12-03 11:03:52,213 [INFO] Skipping bill 1860070 - already processed (116/2608)
2025-12-03 11:03:52,213 [INFO] Skipping bill 1771300 - already processed (117/2608)
2025-12-03 11:03:52,213 [INFO] Skipping bill 1709708 - already processed (118/2608)
2025-12-03 11:03:52,213 [INFO] Skipping bill 1848529 - already processed (119/2608)
2025-12-03 11:03:52,213 [INFO] Skipping bill 1845179 - already processed (120/2608)
2025-12-03 11:03:52,213 [INFO] Skipping bill 1849404 - already processed (121/2608)
2025-12-03 11:03:52,213 [INFO] Skipping bill 1714444 - already processed (122/2608)
2025-12-03 11:03:52,213 [INFO] Skipping bill 1824468 - already processed (123/2608)
2025-12-03 11:03:52,213 [INFO] Skipping bill 1882346 - already processed (124/2608)
2025-12-03 11:03:52,213 [INFO] Skipping bill 1885654 - already processed (125/2608)
2025-12-03 11:03:52,213 [INFO] Skipping bill 1849359 - already processed (126/2608)
2025-12-03 11:03:52,213 [INFO] Skipping bill 1840414 - already processed (127/2608)
2025-12-03 11:03:52,213 [INFO] Skipping bill 1846229 - already processed (128/2608)
2025-12-03 11:03:52,213 [INFO] Skipping bill 1707510 -
already processed (129/2608) 2025-12-03 11:03:52,213 [INFO] Skipping bill 1845188 - already processed (130/2608) 2025-12-03 11:03:52,213 [INFO] Skipping bill 1848524 - already processed (131/2608) 2025-12-03 11:03:52,213 [INFO] Skipping bill 1847496 - already processed (132/2608) 2025-12-03 11:03:52,213 [INFO] Skipping bill 1883008 - already processed (133/2608) 2025-12-03 11:03:52,213 [INFO] Skipping bill 1649620 - already processed (134/2608) 2025-12-03 11:03:52,213 [INFO] Skipping bill 1667841 - already processed (135/2608) 2025-12-03 11:03:52,213 [INFO] Skipping bill 1848476 - already processed (136/2608) 2025-12-03 11:03:52,213 [INFO] Skipping bill 1649670 - already processed (137/2608) 2025-12-03 11:03:52,213 [INFO] Skipping bill 1667891 - already processed (138/2608) 2025-12-03 11:03:52,213 [INFO] Skipping bill 1649612 - already processed (139/2608) 2025-12-03 11:03:52,213 [INFO] Skipping bill 1649615 - already processed (140/2608) 2025-12-03 11:03:52,213 [INFO] Skipping bill 1667833 - already processed (141/2608) 2025-12-03 11:03:52,213 [INFO] Skipping bill 1667836 - already processed (142/2608) 2025-12-03 11:03:52,213 [INFO] Skipping bill 1649618 - already processed (143/2608) 2025-12-03 11:03:52,213 [INFO] Skipping bill 1667839 - already processed (144/2608) 2025-12-03 11:03:52,213 [INFO] Skipping bill 1649630 - already processed (145/2608) 2025-12-03 11:03:52,213 [INFO] Skipping bill 1649619 - already processed (146/2608) 2025-12-03 11:03:52,213 [INFO] Skipping bill 1667851 - already processed (147/2608) 2025-12-03 11:03:52,213 [INFO] Skipping bill 1667840 - already processed (148/2608) 2025-12-03 11:03:52,214 [INFO] Processing 149/2608: Bill ID 1865211 2025-12-03 11:03:53,851 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-03 11:03:53,855 [ERROR] Failed to generate report for bill 1865211: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. 
However, your messages resulted in 241283 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 241283 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-03 11:03:54,866 [INFO] Skipping bill 1667837 - already processed (150/2608) 2025-12-03 11:03:54,866 [INFO] Skipping bill 1667892 - already processed (151/2608) 2025-12-03 11:03:54,866 [INFO] Skipping bill 1649616 - already processed (152/2608) 2025-12-03 11:03:54,867 [INFO] Skipping bill 1649671 - already processed (153/2608) 2025-12-03 11:03:54,867 [INFO] Processing 154/2608: Bill ID 1726105 2025-12-03 11:03:56,004 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-03 11:03:56,006 [ERROR] Failed to generate report for bill 1726105: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 343953 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 343953 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-03 11:03:57,016 [INFO] Skipping bill 1978757 - already processed (155/2608) 2025-12-03 11:03:57,016 [INFO] Skipping bill 1980543 - already processed (156/2608) 2025-12-03 11:03:57,017 [INFO] Skipping bill 1893423 - already processed (157/2608) 2025-12-03 11:03:57,017 [INFO] Skipping bill 1964699 - already processed (158/2608) 2025-12-03 11:03:57,017 [INFO] Skipping bill 1978599 - already processed (159/2608) 2025-12-03 11:03:57,017 [INFO] Skipping bill 1980563 - already processed (160/2608) 2025-12-03 11:03:57,017 [INFO] Skipping bill 1976585 - already processed (161/2608) 2025-12-03 11:03:57,017 [INFO] Skipping bill 1904800 - already processed (162/2608) 2025-12-03 11:03:57,017 [INFO] Skipping bill 1974530 - already processed (163/2608) 2025-12-03 11:03:57,017 [INFO] Skipping bill 1964676 - already processed (164/2608) 2025-12-03 11:03:57,018 [INFO] Skipping bill 1955758 - already processed (165/2608) 2025-12-03 11:03:57,018 [INFO] Skipping bill 1941749 - already processed (166/2608) 2025-12-03 11:03:57,018 [INFO] Skipping bill 1976440 - already 
processed (167/2608) 2025-12-03 11:03:57,018 [INFO] Skipping bill 1978812 - already processed (168/2608) 2025-12-03 11:03:57,018 [INFO] Skipping bill 1978731 - already processed (169/2608) 2025-12-03 11:03:57,020 [INFO] Skipping bill 1949687 - already processed (170/2608) 2025-12-03 11:03:57,020 [INFO] Skipping bill 1980302 - already processed (171/2608) 2025-12-03 11:03:57,020 [INFO] Skipping bill 2032041 - already processed (172/2608) 2025-12-03 11:03:57,020 [INFO] Skipping bill 1978672 - already processed (173/2608) 2025-12-03 11:03:57,020 [INFO] Skipping bill 1955756 - already processed (174/2608) 2025-12-03 11:03:57,020 [INFO] Skipping bill 1970455 - already processed (175/2608) 2025-12-03 11:03:57,020 [INFO] Skipping bill 1978694 - already processed (176/2608) 2025-12-03 11:03:57,020 [INFO] Skipping bill 1976550 - already processed (177/2608) 2025-12-03 11:03:57,020 [INFO] Skipping bill 1908207 - already processed (178/2608) 2025-12-03 11:03:57,020 [INFO] Skipping bill 1971712 - already processed (179/2608) 2025-12-03 11:03:57,020 [INFO] Skipping bill 1919273 - already processed (180/2608) 2025-12-03 11:03:57,020 [INFO] Skipping bill 1893452 - already processed (181/2608) 2025-12-03 11:03:57,020 [INFO] Skipping bill 1971760 - already processed (182/2608) 2025-12-03 11:03:57,020 [INFO] Skipping bill 1978553 - already processed (183/2608) 2025-12-03 11:03:57,020 [INFO] Skipping bill 1980501 - already processed (184/2608) 2025-12-03 11:03:57,020 [INFO] Skipping bill 1980139 - already processed (185/2608) 2025-12-03 11:03:57,020 [INFO] Skipping bill 1908210 - already processed (186/2608) 2025-12-03 11:03:57,020 [INFO] Skipping bill 1980228 - already processed (187/2608) 2025-12-03 11:03:57,021 [INFO] Skipping bill 1947445 - already processed (188/2608) 2025-12-03 11:03:57,021 [INFO] Skipping bill 1971753 - already processed (189/2608) 2025-12-03 11:03:57,021 [INFO] Skipping bill 1943407 - already processed (190/2608) 2025-12-03 11:03:57,021 [INFO] Skipping bill 
1896630 - already processed (191/2608) 2025-12-03 11:03:57,021 [INFO] Skipping bill 1953097 - already processed (192/2608) 2025-12-03 11:03:57,021 [INFO] Skipping bill 1961095 - already processed (193/2608) 2025-12-03 11:03:57,021 [INFO] Skipping bill 1953091 - already processed (194/2608) 2025-12-03 11:03:57,021 [INFO] Skipping bill 1953081 - already processed (195/2608) 2025-12-03 11:03:57,021 [INFO] Skipping bill 1978871 - already processed (196/2608) 2025-12-03 11:03:57,021 [INFO] Skipping bill 1990396 - already processed (197/2608) 2025-12-03 11:03:57,021 [INFO] Processing 198/2608: Bill ID 1980067 2025-12-03 11:03:57,948 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-03 11:03:57,950 [ERROR] Failed to generate report for bill 1980067: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 270166 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ 
[self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... **kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File 
"/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 270166 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-03 11:03:58,959 [INFO] Skipping bill 1970450 - already processed (199/2608) 2025-12-03 11:03:58,960 [INFO] Skipping bill 1904793 - already processed (200/2608) 2025-12-03 11:03:58,960 [INFO] Skipping bill 1964689 - already processed (201/2608) 2025-12-03 11:03:58,960 [INFO] Skipping bill 1933300 - already processed (202/2608) 2025-12-03 11:03:58,960 [INFO] Skipping bill 2036404 - already processed (203/2608) 2025-12-03 11:03:58,960 [INFO] Skipping bill 1949685 - already processed (204/2608) 2025-12-03 11:03:58,960 [INFO] Skipping bill 1976474 - already processed (205/2608) 2025-12-03 11:03:58,960 [INFO] Skipping bill 1898373 - already processed (206/2608) 2025-12-03 11:03:58,960 [INFO] Skipping bill 2042443 - already processed (207/2608) 2025-12-03 11:03:58,960 [INFO] Skipping bill 2005483 - already processed (208/2608) 2025-12-03 11:03:58,961 [INFO] Skipping bill 1968261 - already processed (209/2608) 2025-12-03 11:03:58,961 [INFO] Skipping bill 1980234 - already processed (210/2608) 2025-12-03 11:03:58,961 [INFO] Skipping bill 1978559 - already processed (211/2608) 2025-12-03 11:03:58,961 [INFO] Skipping bill 1974545 - already processed (212/2608) 2025-12-03 11:03:58,961 [INFO] Skipping bill 1908089 - already processed (213/2608) 2025-12-03 11:03:58,961 [INFO] Skipping bill 1939198 - already processed (214/2608) 2025-12-03 11:03:58,961 [INFO] Skipping bill 1939199 - already processed (215/2608) 2025-12-03 11:03:58,961 [INFO] Skipping bill 1908087 - already processed (216/2608) 2025-12-03 11:03:58,961 [INFO] Skipping bill 1908088 - already processed (217/2608) 2025-12-03 11:03:58,961 [INFO] Skipping bill 1939200 - already processed (218/2608) 2025-12-03 11:03:58,961 [INFO] Skipping bill 1939201 - already processed (219/2608) 2025-12-03 11:03:58,961 [INFO] Skipping bill 1908090 - already processed (220/2608) 2025-12-03 
11:03:58,961 [INFO] Skipping bill 1939197 - already processed (221/2608) 2025-12-03 11:03:58,961 [INFO] Skipping bill 1908086 - already processed (222/2608) 2025-12-03 11:03:58,961 [INFO] Skipping bill 1651326 - already processed (223/2608) 2025-12-03 11:03:58,962 [INFO] Skipping bill 1747628 - already processed (224/2608) 2025-12-03 11:03:58,962 [INFO] Skipping bill 1871619 - already processed (225/2608) 2025-12-03 11:03:58,962 [INFO] Skipping bill 1874953 - already processed (226/2608) 2025-12-03 11:03:58,962 [INFO] Skipping bill 1831016 - already processed (227/2608) 2025-12-03 11:03:58,962 [INFO] Skipping bill 1846007 - already processed (228/2608) 2025-12-03 11:03:58,962 [INFO] Skipping bill 2026977 - already processed (229/2608) 2025-12-03 11:03:58,962 [INFO] Skipping bill 2042502 - already processed (230/2608) 2025-12-03 11:03:58,962 [INFO] Skipping bill 2042537 - already processed (231/2608) 2025-12-03 11:03:58,962 [INFO] Skipping bill 2042540 - already processed (232/2608) 2025-12-03 11:03:58,962 [INFO] Skipping bill 1907590 - already processed (233/2608) 2025-12-03 11:03:58,962 [INFO] Skipping bill 1907863 - already processed (234/2608) 2025-12-03 11:03:58,962 [INFO] Skipping bill 2022323 - already processed (235/2608) 2025-12-03 11:03:58,962 [INFO] Skipping bill 1947638 - already processed (236/2608) 2025-12-03 11:03:58,962 [INFO] Skipping bill 1965815 - already processed (237/2608) 2025-12-03 11:03:58,963 [INFO] Skipping bill 2042471 - already processed (238/2608) 2025-12-03 11:03:58,963 [INFO] Skipping bill 2017117 - already processed (239/2608) 2025-12-03 11:03:58,963 [INFO] Skipping bill 1973900 - already processed (240/2608) 2025-12-03 11:03:58,963 [INFO] Skipping bill 2020829 - already processed (241/2608) 2025-12-03 11:03:58,963 [INFO] Skipping bill 1718823 - already processed (242/2608) 2025-12-03 11:03:58,963 [INFO] Skipping bill 1709526 - already processed (243/2608) 2025-12-03 11:03:58,963 [INFO] Skipping bill 1709356 - already processed 
(244/2608) 2025-12-03 11:03:58,963 [INFO] Skipping bill 1839016 - already processed (245/2608) 2025-12-03 11:03:58,963 [INFO] Skipping bill 1859941 - already processed (246/2608) 2025-12-03 11:03:58,963 [INFO] Skipping bill 1839023 - already processed (247/2608) 2025-12-03 11:03:58,963 [INFO] Skipping bill 1860727 - already processed (248/2608) 2025-12-03 11:03:58,963 [INFO] Processing 249/2608: Bill ID 1876979 2025-12-03 11:03:59,877 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-03 11:03:59,879 [ERROR] Failed to generate report for bill 1876979: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 150875 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... 
**kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... **kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return 
self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 150875 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-03 11:04:00,886 [INFO] Skipping bill 1905069 - already processed (250/2608) 2025-12-03 11:04:00,886 [INFO] Skipping bill 1992824 - already processed (251/2608) 2025-12-03 11:04:00,887 [INFO] Skipping bill 1957876 - already processed (252/2608) 2025-12-03 11:04:00,887 [INFO] Skipping bill 1965500 - already processed (253/2608) 2025-12-03 11:04:00,887 [INFO] Skipping bill 1990151 - already processed (254/2608) 2025-12-03 11:04:00,887 [INFO] Skipping bill 1949174 - already processed (255/2608) 2025-12-03 11:04:00,887 [INFO] Skipping bill 1905038 - already processed (256/2608) 2025-12-03 11:04:00,887 [INFO] Skipping bill 1905159 - already processed (257/2608) 2025-12-03 11:04:00,887 [INFO] Skipping bill 1907650 - already processed (258/2608) 2025-12-03 11:04:00,887 [INFO] Skipping bill 1909616 - already processed (259/2608) 2025-12-03 11:04:00,887 [INFO] Skipping bill 1909665 - already processed (260/2608) 2025-12-03 11:04:00,887 [INFO] Skipping bill 1928585 - already 
processed (261/2608) 2025-12-03 11:04:00,887 [INFO] Skipping bill 1928759 - already processed (262/2608) 2025-12-03 11:04:00,887 [INFO] Skipping bill 1928904 - already processed (263/2608) 2025-12-03 11:04:00,887 [INFO] Skipping bill 1931737 - already processed (264/2608) 2025-12-03 11:04:00,887 [INFO] Skipping bill 1928076 - already processed (265/2608) 2025-12-03 11:04:00,887 [INFO] Skipping bill 1935956 - already processed (266/2608) 2025-12-03 11:04:00,888 [INFO] Skipping bill 1905222 - already processed (267/2608) 2025-12-03 11:04:00,888 [INFO] Skipping bill 1932777 - already processed (268/2608) 2025-12-03 11:04:00,888 [INFO] Skipping bill 1905141 - already processed (269/2608) 2025-12-03 11:04:00,888 [INFO] Processing 270/2608: Bill ID 2034928 2025-12-03 11:04:02,312 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-03 11:04:02,313 [ERROR] Failed to generate report for bill 2034928: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 412715 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
           ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 412715 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:04:02,372 [INFO] Saved 2605 reports to data/bill_reports.json
2025-12-03 11:04:02,372 [INFO] Progress: 270/2608 - Processed: 0, Skipped: 264, Errors: 6
2025-12-03 11:04:03,378 [INFO] Skipping bill 1820947 - already processed (271/2608)
2025-12-03 11:04:03,379 [INFO] Skipping bill 2038143 - already processed (272/2608)
2025-12-03 11:04:03,379 [INFO] Skipping bill 1946119 - already processed (273/2608)
2025-12-03 11:04:03,379 [INFO] Skipping bill 2038726 - already processed (274/2608)
2025-12-03 11:04:03,380 [INFO] Skipping bill 2015494 - already processed (275/2608)
2025-12-03 11:04:03,380 [INFO] Skipping bill 1754732 - already processed (276/2608)
2025-12-03 11:04:03,380 [INFO] Skipping bill 1716623 - already processed (277/2608)
2025-12-03 11:04:03,380 [INFO] Skipping bill 1723029 - already processed (278/2608)
2025-12-03 11:04:03,380 [INFO] Skipping bill 1749221 - already processed (279/2608)
2025-12-03 11:04:03,380 [INFO] Skipping bill 1756757 - already processed (280/2608)
2025-12-03 11:04:03,381 [INFO] Skipping bill 1722774 - already processed (281/2608)
2025-12-03 11:04:03,381 [INFO] Processing 282/2608: Bill ID 1746175
2025-12-03 11:04:04,591 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:04:04,593 [ERROR] Failed to generate report for bill 1746175: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 482085 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
           ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 482085 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:04:05,601 [INFO] Skipping bill 1749049 - already processed (283/2608)
2025-12-03 11:04:05,601 [INFO] Skipping bill 1799517 - already processed (284/2608)
2025-12-03 11:04:05,601 [INFO] Skipping bill 1799058 - already processed (285/2608)
2025-12-03 11:04:05,602 [INFO] Skipping bill 1792427 - already processed (286/2608)
2025-12-03 11:04:05,602 [INFO] Skipping bill 1791537 - already processed (287/2608)
2025-12-03 11:04:05,602 [INFO] Skipping bill 1793699 - already processed (288/2608)
2025-12-03 11:04:05,602 [INFO] Skipping bill 1784035 - already processed (289/2608)
2025-12-03 11:04:05,604 [INFO] Skipping bill 1789608 - already processed (290/2608)
2025-12-03 11:04:05,604 [INFO] Skipping bill 1797287 - already processed (291/2608)
2025-12-03 11:04:05,604 [INFO] Skipping bill 1799146 - already processed (292/2608)
2025-12-03 11:04:05,604 [INFO] Skipping bill 1799256 - already processed (293/2608)
2025-12-03 11:04:05,604 [INFO] Skipping bill 1799530 - already processed (294/2608)
2025-12-03 11:04:05,605 [INFO] Skipping bill 1799073 - already processed (295/2608)
2025-12-03 11:04:05,605 [INFO] Skipping bill 1798525 - already processed (296/2608)
2025-12-03 11:04:05,605 [INFO] Skipping bill 1812862 - already processed (297/2608)
2025-12-03 11:04:05,605 [INFO] Skipping bill 1799556 - already processed (298/2608)
2025-12-03 11:04:05,605 [INFO] Skipping bill 1793796 - already processed (299/2608)
2025-12-03 11:04:05,605 [INFO] Skipping bill 1840899 - already processed (300/2608)
2025-12-03 11:04:05,605 [INFO] Skipping bill 1849855 - already processed (301/2608)
2025-12-03 11:04:05,605 [INFO] Skipping bill 1796581 - already processed (302/2608)
2025-12-03 11:04:05,606 [INFO] Skipping bill 1785974 - already processed (303/2608)
2025-12-03 11:04:05,606 [INFO] Skipping bill 1799599 - already processed (304/2608)
2025-12-03 11:04:05,606 [INFO] Skipping bill 1799188 - already processed (305/2608)
2025-12-03 11:04:05,606 [INFO] Skipping bill 1834738 - already processed (306/2608)
2025-12-03 11:04:05,606 [INFO] Skipping bill 1799528 - already processed (307/2608)
2025-12-03 11:04:05,606 [INFO] Processing 308/2608: Bill ID 1829539
2025-12-03 11:04:07,265 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:04:07,267 [ERROR] Failed to generate report for bill 1829539: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 487138 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
           ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 487138 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:04:08,275 [INFO] Skipping bill 1953506 - already processed (309/2608)
2025-12-03 11:04:08,276 [INFO] Skipping bill 1969171 - already processed (310/2608)
2025-12-03 11:04:08,276 [INFO] Skipping bill 1963529 - already processed (311/2608)
2025-12-03 11:04:08,276 [INFO] Skipping bill 1973172 - already processed (312/2608)
2025-12-03 11:04:08,276 [INFO] Skipping bill 1977164 - already processed (313/2608)
2025-12-03 11:04:08,276 [INFO] Skipping bill 1984764 - already processed (314/2608)
2025-12-03 11:04:08,277 [INFO] Skipping bill 1988421 - already processed (315/2608)
2025-12-03 11:04:08,277 [INFO] Skipping bill 1963407 - already processed (316/2608)
2025-12-03 11:04:08,277 [INFO] Skipping bill 1977647 - already processed (317/2608)
2025-12-03 11:04:08,277 [INFO] Skipping bill 1985537 - already processed (318/2608)
2025-12-03 11:04:08,277 [INFO] Skipping bill 1988809 - already processed (319/2608)
2025-12-03 11:04:08,277 [INFO] Skipping bill 1989241 - already processed (320/2608)
2025-12-03 11:04:08,277 [INFO] Skipping bill 1980688 - already processed (321/2608)
2025-12-03 11:04:08,278 [INFO] Skipping bill 1985490 - already processed (322/2608)
2025-12-03 11:04:08,278 [INFO] Skipping bill 1987236 - already processed (323/2608)
2025-12-03 11:04:08,278 [INFO] Skipping bill 2009168 - already processed (324/2608)
2025-12-03 11:04:08,278 [INFO] Skipping bill 1985684 - already processed (325/2608)
2025-12-03 11:04:08,278 [INFO] Skipping bill 1982957 - already processed (326/2608)
2025-12-03 11:04:08,278 [INFO] Skipping bill 2009660 - already processed (327/2608)
2025-12-03 11:04:08,278 [INFO] Skipping bill 1987290 - already processed (328/2608)
2025-12-03 11:04:08,279 [INFO] Skipping bill 2021527 - already processed (329/2608)
2025-12-03 11:04:08,279 [INFO] Skipping bill 1984006 - already processed (330/2608)
2025-12-03 11:04:08,279 [INFO] Skipping bill 1944378 - already processed (331/2608)
2025-12-03 11:04:08,279 [INFO] Processing 332/2608: Bill ID 2016312
2025-12-03 11:04:09,825 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:04:09,827 [ERROR] Failed to generate report for bill 2016312: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 508553 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
           ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 508553 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:04:10,838 [INFO] Skipping bill 1975511 - already processed (333/2608)
2025-12-03 11:04:10,839 [INFO] Skipping bill 1807866 - already processed (334/2608)
2025-12-03 11:04:10,841 [INFO] Skipping bill 1825040 - already processed (335/2608)
2025-12-03 11:04:10,841 [INFO] Skipping bill 1824663 - already processed (336/2608)
2025-12-03 11:04:10,842 [INFO] Skipping bill 1827759 - already processed (337/2608)
2025-12-03 11:04:10,842 [INFO] Skipping bill 1807849 - already processed (338/2608)
2025-12-03 11:04:10,842 [INFO] Skipping bill 1852469 - already processed (339/2608)
2025-12-03 11:04:10,843 [INFO] Skipping bill 1724818 - already processed (340/2608)
2025-12-03 11:04:10,843 [INFO] Skipping bill 1827801 - already processed (341/2608)
2025-12-03 11:04:10,844 [INFO] Skipping bill 1842042 - already processed (342/2608)
2025-12-03 11:04:10,844 [INFO] Skipping bill 1800509 - already processed (343/2608)
2025-12-03 11:04:10,844 [INFO] Skipping bill 1829048 - already processed (344/2608)
2025-12-03 11:04:10,844 [INFO] Skipping bill 1691393 - already processed (345/2608)
2025-12-03 11:04:10,844 [INFO] Skipping bill 1684843 - already processed (346/2608)
2025-12-03 11:04:10,844 [INFO] Skipping bill 1945161 - already processed (347/2608)
2025-12-03 11:04:10,844 [INFO] Skipping bill 1947679 - already processed (348/2608)
2025-12-03 11:04:10,844 [INFO] Skipping bill 1943273 - already processed (349/2608)
2025-12-03 11:04:10,844 [INFO] Skipping bill 1919150 - already processed (350/2608)
2025-12-03 11:04:10,844 [INFO] Skipping bill 2012228 - already processed (351/2608)
2025-12-03 11:04:10,844 [INFO] Skipping bill 1990355 - already processed (352/2608)
2025-12-03 11:04:10,844 [INFO] Skipping bill 1960995 - already processed (353/2608)
2025-12-03 11:04:10,844 [INFO] Skipping bill 1968119 - already processed (354/2608)
2025-12-03 11:04:10,844 [INFO] Skipping bill 2006978 - already processed (355/2608)
2025-12-03 11:04:10,844 [INFO] Skipping bill 1974144 - already processed (356/2608)
2025-12-03 11:04:10,844 [INFO] Skipping bill 1974243 - already processed (357/2608)
2025-12-03 11:04:10,844 [INFO] Skipping bill 1974425 - already processed (358/2608)
2025-12-03 11:04:10,844 [INFO] Skipping bill 2016144 - already processed (359/2608)
2025-12-03 11:04:10,844 [INFO] Skipping bill 1974177 - already processed (360/2608)
2025-12-03 11:04:10,844 [INFO] Skipping bill 1974222 - already processed (361/2608)
2025-12-03 11:04:10,844 [INFO] Skipping bill 1974239 - already processed (362/2608)
2025-12-03 11:04:10,844 [INFO] Skipping bill 1974292 - already processed (363/2608)
2025-12-03 11:04:10,844 [INFO] Skipping bill 1974356 - already processed (364/2608)
2025-12-03 11:04:10,844 [INFO] Skipping bill 1974381 - already processed (365/2608)
2025-12-03 11:04:10,844 [INFO] Skipping bill 1974418 - already processed (366/2608)
2025-12-03 11:04:10,844 [INFO] Skipping bill 1990318 - already processed (367/2608)
2025-12-03 11:04:10,844 [INFO] Skipping bill 1987837 - already processed (368/2608)
2025-12-03 11:04:10,844 [INFO] Skipping bill 1974421 - already processed (369/2608)
2025-12-03 11:04:10,844 [INFO] Skipping bill 1982057 - already processed (370/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1968164 - already processed (371/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1979990 - already processed (372/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1961023 - already processed (373/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1970366 - already processed (374/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1976266 - already processed (375/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1735435 - already processed (376/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1735103 - already processed (377/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1735239 - already processed (378/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1676639 - already processed (379/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1822936 - already processed (380/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1824099 - already processed (381/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1823066 - already processed (382/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1821100 - already processed (383/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1821376 - already processed (384/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1861884 - already processed (385/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1862091 - already processed (386/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1824408 - already processed (387/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1823094 - already processed (388/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1859976 - already processed (389/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1860020 - already processed (390/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1822457 - already processed (391/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1823240 - already processed (392/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1822425 - already processed (393/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1823305 - already processed (394/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1816605 - already processed (395/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1822519 - already processed (396/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1822760 - already processed (397/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1821542 - already processed (398/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1862395 - already processed (399/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1862180 - already processed (400/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1820992 - already processed (401/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1822908 - already processed (402/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1816124 - already processed (403/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1826161 - already processed (404/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1822451 - already processed (405/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1823328 - already processed (406/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1860844 - already processed (407/2608)
2025-12-03 11:04:10,845 [INFO] Skipping bill 1819671 - already processed (408/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1815658 - already processed (409/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1929168 - already processed (410/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1939103 - already processed (411/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1939150 - already processed (412/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1924410 - already processed (413/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1929804 - already processed (414/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1929561 - already processed (415/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1925992 - already processed (416/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1928926 - already processed (417/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1931961 - already processed (418/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1929636 - already processed (419/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1909994 - already processed (420/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1928408 - already processed (421/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1928598 - already processed (422/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1994243 - already processed (423/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1994303 - already processed (424/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1929659 - already processed (425/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1932766 - already processed (426/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1928570 - already processed (427/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1934608 - already processed (428/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1928364 - already processed (429/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1929760 - already processed (430/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1933272 - already processed (431/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1929496 - already processed (432/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1990347 - already processed (433/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1995251 - already processed (434/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1995449 - already processed (435/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1995259 - already processed (436/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1995271 - already processed (437/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1995747 - already processed (438/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1991557 - already processed (439/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1991563 - already processed (440/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1995783 - already processed (441/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1929457 - already processed (442/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1915997 - already processed (443/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1933178 - already processed (444/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1992758 - already processed (445/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1993026 - already processed (446/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1995569 - already processed (447/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1992805 - already processed (448/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1995900 - already processed (449/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1993019 - already processed (450/2608)
2025-12-03 11:04:10,846 [INFO] Skipping bill 1847870 - already processed (451/2608)
2025-12-03 11:04:10,847 [INFO] Skipping bill 1812600 - already processed (452/2608)
2025-12-03 11:04:10,847 [INFO] Skipping bill 1848008 - already processed (453/2608)
2025-12-03 11:04:10,847 [INFO] Skipping bill 1825516 - already processed (454/2608)
2025-12-03 11:04:10,847 [INFO] Processing 455/2608: Bill ID 1845026
2025-12-03 11:04:11,323 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:04:11,325 [ERROR] Failed to generate report for bill 1845026: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 153566 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
           ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 153566 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:04:12,331 [INFO] Skipping bill 1962312 - already processed (456/2608)
2025-12-03 11:04:12,331 [INFO] Skipping bill 1954011 - already processed (457/2608)
2025-12-03 11:04:12,332 [INFO] Skipping bill 1991380 - already processed (458/2608)
2025-12-03 11:04:12,332 [INFO] Processing 459/2608: Bill ID 2011846
2025-12-03 11:04:12,794 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:04:12,795 [ERROR] Failed to generate report for bill 2011846: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 147671 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:04:13,804 [INFO] Skipping bill 1838778 - already processed (460/2608)
2025-12-03 11:04:13,804 [INFO] Skipping bill 1713666 - already processed (461/2608)
2025-12-03 11:04:13,805 [INFO] Skipping bill 1837146 - already processed (462/2608)
2025-12-03 11:04:13,805 [INFO] Skipping bill 1842401 - already processed (463/2608)
2025-12-03 11:04:13,805 [INFO] Skipping bill 1838992 - already processed (464/2608)
2025-12-03 11:04:13,805 [INFO] Skipping bill 1840748 - already processed (465/2608)
2025-12-03 11:04:13,805 [INFO] Skipping bill 1841780 - already processed (466/2608)
2025-12-03 11:04:13,805 [INFO] Skipping bill 1831504 - already processed (467/2608)
2025-12-03 11:04:13,805 [INFO] Skipping bill 1832905 - already processed (468/2608)
2025-12-03 11:04:13,805 [INFO] Skipping bill 1843072 - already processed (469/2608)
2025-12-03 11:04:13,805 [INFO] Skipping bill 1839869 - already processed (470/2608)
2025-12-03 11:04:13,805 [INFO] Skipping bill 1814012 - already processed (471/2608)
2025-12-03 11:04:13,805 [INFO] Skipping bill 1842520 - already
processed (472/2608)
2025-12-03 11:04:13,805 [INFO] Skipping bill 1835262 - already processed (473/2608)
2025-12-03 11:04:13,805 [INFO] Skipping bill 1843020 - already processed (474/2608)
2025-12-03 11:04:13,806 [INFO] Skipping bill 1878243 - already processed (475/2608)
2025-12-03 11:04:13,807 [INFO] Skipping bill 1893072 - already processed (476/2608)
2025-12-03 11:04:13,808 [INFO] Skipping bill 1713755 - already processed (477/2608)
2025-12-03 11:04:13,808 [INFO] Skipping bill 1842316 - already processed (478/2608)
2025-12-03 11:04:13,808 [INFO] Skipping bill 1838852 - already processed (479/2608)
2025-12-03 11:04:13,808 [INFO] Skipping bill 1838748 - already processed (480/2608)
2025-12-03 11:04:13,808 [INFO] Skipping bill 1635340 - already processed (481/2608)
2025-12-03 11:04:13,808 [INFO] Skipping bill 1713127 - already processed (482/2608)
2025-12-03 11:04:13,808 [INFO] Skipping bill 1818470 - already processed (483/2608)
2025-12-03 11:04:13,808 [INFO] Skipping bill 1837189 - already processed (484/2608)
2025-12-03 11:04:13,808 [INFO] Skipping bill 1635556 - already processed (485/2608)
2025-12-03 11:04:13,808 [INFO] Skipping bill 1692465 - already processed (486/2608)
2025-12-03 11:04:13,808 [INFO] Skipping bill 1843326 - already processed (487/2608)
2025-12-03 11:04:13,808 [INFO] Skipping bill 1822203 - already processed (488/2608)
2025-12-03 11:04:13,808 [INFO] Skipping bill 1838434 - already processed (489/2608)
2025-12-03 11:04:13,808 [INFO] Skipping bill 1714042 - already processed (490/2608)
2025-12-03 11:04:13,809 [INFO] Skipping bill 1840824 - already processed (491/2608)
2025-12-03 11:04:13,809 [INFO] Skipping bill 1810043 - already processed (492/2608)
2025-12-03 11:04:13,809 [INFO] Skipping bill 1762665 - already processed (493/2608)
2025-12-03 11:04:13,809 [INFO] Skipping bill 1831619 - already processed (494/2608)
2025-12-03 11:04:13,809 [INFO] Skipping bill 1712988 - already processed (495/2608)
2025-12-03 11:04:13,809 [INFO] Skipping bill
1704077 - already processed (496/2608)
2025-12-03 11:04:13,809 [INFO] Skipping bill 1712903 - already processed (497/2608)
2025-12-03 11:04:13,809 [INFO] Skipping bill 1818714 - already processed (498/2608)
2025-12-03 11:04:13,809 [INFO] Skipping bill 1842743 - already processed (499/2608)
2025-12-03 11:04:13,809 [INFO] Processing 500/2608: Bill ID 1838518
2025-12-03 11:04:16,072 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:04:16,076 [ERROR] Failed to generate report for bill 1838518: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 853564 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:04:16,123 [INFO] Saved 2605 reports to data/bill_reports.json
2025-12-03 11:04:16,124 [INFO] Progress: 500/2608 - Processed: 0, Skipped: 488, Errors: 12
2025-12-03 11:04:17,129 [INFO] Processing 501/2608: Bill ID 1794181
2025-12-03 11:04:17,710 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:04:17,712 [ERROR] Failed to generate report for bill 1794181: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 151032 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:04:18,723 [INFO] Processing 502/2608: Bill ID 1708593
2025-12-03 11:04:19,273 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:04:19,279 [ERROR] Failed to generate report for bill 1708593: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 139146 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:04:20,288 [INFO] Processing 503/2608: Bill ID 1704148
2025-12-03 11:04:22,316 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:04:22,318 [ERROR] Failed to generate report for bill 1704148: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 823023 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:04:23,333 [INFO] Processing 504/2608: Bill ID 1704278
2025-12-03 11:04:25,184 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:04:25,187 [ERROR] Failed to generate report for bill 1704278: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 823015 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:04:26,196 [INFO] Skipping bill 1714051 - already processed (505/2608)
2025-12-03 11:04:26,197 [INFO] Skipping bill 1951980 - already processed (506/2608)
2025-12-03 11:04:26,197 [INFO] Skipping bill 1942546 - already processed (507/2608)
2025-12-03 11:04:26,197 [INFO] Skipping bill 1954662 - already processed (508/2608)
2025-12-03 11:04:26,197 [INFO] Skipping bill 1962278 - already processed (509/2608)
2025-12-03 11:04:26,197 [INFO] Skipping bill 1959604 - already processed (510/2608)
2025-12-03 11:04:26,197 [INFO] Skipping bill 1961963 - already processed (511/2608)
2025-12-03 11:04:26,197 [INFO] Skipping bill 1906420 - already processed (512/2608)
2025-12-03 11:04:26,198 [INFO] Skipping bill 1959700 - already processed (513/2608)
2025-12-03 11:04:26,198 [INFO] Skipping bill 1960223 - already processed (514/2608)
2025-12-03 11:04:26,198 [INFO] Skipping bill 1955104 - already processed (515/2608)
2025-12-03 11:04:26,198 [INFO] Skipping bill 1962582 - already processed (516/2608)
2025-12-03 11:04:26,199 [INFO] Skipping bill 1945671 - already
processed (517/2608)
2025-12-03 11:04:26,199 [INFO] Skipping bill 1927329 - already processed (518/2608)
2025-12-03 11:04:26,199 [INFO] Skipping bill 1950703 - already processed (519/2608)
2025-12-03 11:04:26,199 [INFO] Skipping bill 1962488 - already processed (520/2608)
2025-12-03 11:04:26,199 [INFO] Skipping bill 1945525 - already processed (521/2608)
2025-12-03 11:04:26,199 [INFO] Skipping bill 1958920 - already processed (522/2608)
2025-12-03 11:04:26,199 [INFO] Skipping bill 1962097 - already processed (523/2608)
2025-12-03 11:04:26,199 [INFO] Skipping bill 1963192 - already processed (524/2608)
2025-12-03 11:04:26,200 [INFO] Skipping bill 1947169 - already processed (525/2608)
2025-12-03 11:04:26,200 [INFO] Skipping bill 1961929 - already processed (526/2608)
2025-12-03 11:04:26,200 [INFO] Skipping bill 1962057 - already processed (527/2608)
2025-12-03 11:04:26,200 [INFO] Skipping bill 1973797 - already processed (528/2608)
2025-12-03 11:04:26,200 [INFO] Skipping bill 1963087 - already processed (529/2608)
2025-12-03 11:04:26,200 [INFO] Skipping bill 1940139 - already processed (530/2608)
2025-12-03 11:04:26,200 [INFO] Skipping bill 1941211 - already processed (531/2608)
2025-12-03 11:04:26,200 [INFO] Skipping bill 1906434 - already processed (532/2608)
2025-12-03 11:04:26,200 [INFO] Skipping bill 1963178 - already processed (533/2608)
2025-12-03 11:04:26,200 [INFO] Skipping bill 1954188 - already processed (534/2608)
2025-12-03 11:04:26,200 [INFO] Skipping bill 1954475 - already processed (535/2608)
2025-12-03 11:04:26,200 [INFO] Skipping bill 1957381 - already processed (536/2608)
2025-12-03 11:04:26,200 [INFO] Skipping bill 1962329 - already processed (537/2608)
2025-12-03 11:04:26,201 [INFO] Skipping bill 1962675 - already processed (538/2608)
2025-12-03 11:04:26,201 [INFO] Skipping bill 1935756 - already processed (539/2608)
2025-12-03 11:04:26,201 [INFO] Skipping bill 1945467 - already processed (540/2608)
2025-12-03 11:04:26,201 [INFO] Skipping bill 1907066 - already processed (541/2608)
2025-12-03 11:04:26,201 [INFO] Skipping bill 1985138 - already processed (542/2608)
2025-12-03 11:04:26,201 [INFO] Skipping bill 1961501 - already processed (543/2608)
2025-12-03 11:04:26,201 [INFO] Skipping bill 1962291 - already processed (544/2608)
2025-12-03 11:04:26,201 [INFO] Skipping bill 2034790 - already processed (545/2608)
2025-12-03 11:04:26,201 [INFO] Skipping bill 2047690 - already processed (546/2608)
2025-12-03 11:04:26,202 [INFO] Skipping bill 2052256 - already processed (547/2608)
2025-12-03 11:04:26,202 [INFO] Skipping bill 1962885 - already processed (548/2608)
2025-12-03 11:04:26,202 [INFO] Skipping bill 1960413 - already processed (549/2608)
2025-12-03 11:04:26,202 [INFO] Skipping bill 1959956 - already processed (550/2608)
2025-12-03 11:04:26,202 [INFO] Processing 551/2608: Bill ID 1962986
2025-12-03 11:04:29,794 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:04:29,796 [ERROR] Failed to generate report for bill 1962986: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1167379 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:04:30,804 [INFO] Processing 552/2608: Bill ID 1960510
2025-12-03 11:04:31,329 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:04:31,331 [ERROR] Failed to generate report for bill 1960510: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 156228 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:04:32,342 [INFO] Skipping bill 1962952 - already processed (553/2608)
2025-12-03 11:04:32,342 [INFO] Processing 554/2608: Bill ID 1645841
2025-12-03 11:04:33,070 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:04:33,073 [ERROR] Failed to generate report for bill 1645841: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 162324 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:04:34,081 [INFO] Skipping bill 1799709 - already processed (555/2608)
2025-12-03 11:04:34,082 [INFO] Skipping bill 1797422 - already processed (556/2608)
2025-12-03 11:04:34,082 [INFO] Skipping bill 1801018 - already processed (557/2608)
2025-12-03 11:04:34,082 [INFO] Skipping bill 1799688 - already processed (558/2608)
2025-12-03 11:04:34,082 [INFO] Skipping bill 1909475 - already processed (559/2608)
2025-12-03 11:04:34,082 [INFO] Skipping bill 1921138 - already processed (560/2608)
2025-12-03 11:04:34,082 [INFO] Skipping bill 1917007 - already processed (561/2608)
2025-12-03 11:04:34,083 [INFO] Skipping bill 1921879 - already processed (562/2608)
2025-12-03 11:04:34,083 [INFO] Skipping bill 1915249 - already processed (563/2608)
2025-12-03 11:04:34,083 [INFO] Skipping bill 1912345 - already processed (564/2608)
2025-12-03 11:04:34,083 [INFO] Processing 565/2608: Bill ID 1897676
2025-12-03 11:04:34,812 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:04:34,813 [ERROR] Failed to
generate report for bill 1897676: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 165130 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:04:35,823 [INFO] Skipping bill 1847772 - already processed (566/2608)
2025-12-03 11:04:35,823 [INFO] Skipping bill 1825218 - already processed (567/2608)
2025-12-03 11:04:35,823 [INFO] Skipping bill 1839463 - already processed (568/2608)
2025-12-03 11:04:35,823 [INFO] Skipping bill 1665194 - already processed (569/2608)
2025-12-03 11:04:35,823 [INFO] Skipping bill 1708118 - already processed (570/2608)
2025-12-03 11:04:35,824 [INFO] Skipping bill 1802090 - already processed (571/2608)
2025-12-03 11:04:35,824 [INFO] Skipping bill 1823725 - already processed (572/2608)
2025-12-03 11:04:35,824 [INFO] Skipping bill 1845657 - already processed (573/2608)
2025-12-03 11:04:35,824 [INFO] Skipping bill 1846612 - already processed (574/2608)
2025-12-03 11:04:35,824 [INFO] Skipping bill 1870077 - already processed (575/2608)
2025-12-03 11:04:35,824 [INFO] Skipping bill 1870897 - already processed (576/2608)
2025-12-03 11:04:35,824 [INFO] Skipping bill 1761153 - already processed (577/2608)
2025-12-03 11:04:35,824 [INFO] Skipping bill 1760883 - already
processed (578/2608)
2025-12-03 11:04:35,824 [INFO] Skipping bill 1752922 - already processed (579/2608)
2025-12-03 11:04:35,824 [INFO] Skipping bill 1873484 - already processed (580/2608)
2025-12-03 11:04:35,824 [INFO] Skipping bill 1990915 - already processed (581/2608)
2025-12-03 11:04:35,824 [INFO] Skipping bill 1969038 - already processed (582/2608)
2025-12-03 11:04:35,824 [INFO] Skipping bill 1993838 - already processed (583/2608)
2025-12-03 11:04:35,824 [INFO] Skipping bill 1958795 - already processed (584/2608)
2025-12-03 11:04:35,825 [INFO] Skipping bill 1977734 - already processed (585/2608)
2025-12-03 11:04:35,825 [INFO] Skipping bill 1937592 - already processed (586/2608)
2025-12-03 11:04:35,825 [INFO] Skipping bill 1963811 - already processed (587/2608)
2025-12-03 11:04:35,825 [INFO] Skipping bill 2029033 - already processed (588/2608)
2025-12-03 11:04:35,825 [INFO] Skipping bill 2026836 - already processed (589/2608)
2025-12-03 11:04:35,825 [INFO] Skipping bill 2027180 - already processed (590/2608)
2025-12-03 11:04:35,825 [INFO] Skipping bill 2021349 - already processed (591/2608)
2025-12-03 11:04:35,825 [INFO] Skipping bill 2030059 - already processed (592/2608)
2025-12-03 11:04:35,825 [INFO] Skipping bill 1823829 - already processed (593/2608)
2025-12-03 11:04:35,825 [INFO] Skipping bill 1824037 - already processed (594/2608)
2025-12-03 11:04:35,825 [INFO] Skipping bill 1850989 - already processed (595/2608)
2025-12-03 11:04:35,825 [INFO] Skipping bill 1826921 - already processed (596/2608)
2025-12-03 11:04:35,825 [INFO] Skipping bill 1690087 - already processed (597/2608)
2025-12-03 11:04:35,825 [INFO] Processing 598/2608: Bill ID 1693524
2025-12-03 11:04:36,655 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:04:36,657 [ERROR] Failed to generate report for bill 1693524: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 225348 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:04:37,666 [INFO] Skipping bill 1665637 - already processed (599/2608)
2025-12-03 11:04:37,666 [INFO] Skipping bill 1682635 - already processed (600/2608)
2025-12-03 11:04:37,666 [INFO] Processing 601/2608: Bill ID 1692213
2025-12-03 11:04:38,397 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:04:38,399 [ERROR] Failed to generate report for bill 1692213: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 225670 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:04:39,406 [INFO] Processing 602/2608: Bill ID 1846626
2025-12-03 11:04:40,135 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:04:40,140 [ERROR] Failed to generate report for bill 1846626: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 225565 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:04:41,150 [INFO] Processing 603/2608: Bill ID 1846675
2025-12-03 11:04:41,979 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:04:41,981 [ERROR] Failed to generate report for bill 1846675: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 225290 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:04:42,991 [INFO] Skipping bill 1653927 - already processed (604/2608)
2025-12-03 11:04:42,992 [INFO] Skipping bill 1959326 - already processed (605/2608)
2025-12-03 11:04:42,992 [INFO] Skipping bill 1948632 - already processed (606/2608)
2025-12-03 11:04:42,992 [INFO] Skipping bill 1955060 - already processed (607/2608)
2025-12-03 11:04:42,992 [INFO] Skipping bill 1946546 - already processed (608/2608)
2025-12-03 11:04:42,992 [INFO] Processing 609/2608: Bill ID 1916487
2025-12-03 11:04:43,820 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:04:43,821 [ERROR] Failed to generate report for bill 1916487: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 242611 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:04:44,832 [INFO] Skipping bill 1949165 - already processed (610/2608)
2025-12-03 11:04:44,833 [INFO] Processing 611/2608: Bill ID 1938020
2025-12-03 11:04:45,665 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:04:45,667 [ERROR] Failed to generate report for bill 1938020: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 238559 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:04:46,677 [INFO] Processing 612/2608: Bill ID 1937464
2025-12-03 11:04:47,653 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:04:47,656 [ERROR] Failed to generate report for bill 1937464: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 238890 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:04:48,664 [INFO] Processing 613/2608: Bill ID 1713253
2025-12-03 11:04:49,249 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:04:49,252 [ERROR] Failed to generate report for bill 1713253: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 176351 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:04:50,262 [INFO] Skipping bill 1804283 - already processed (614/2608)
2025-12-03 11:04:50,263 [INFO] Skipping bill 1795473 - already processed (615/2608)
2025-12-03 11:04:50,263 [INFO] Skipping bill 1855405 - already processed (616/2608)
2025-12-03 11:04:50,263 [INFO] Skipping bill 1848823 - already processed (617/2608)
2025-12-03 11:04:50,263 [INFO] Skipping bill 1842483 - already processed (618/2608)
2025-12-03 11:04:50,263 [INFO] Skipping bill 1854786 - already processed (619/2608)
2025-12-03 11:04:50,263 [INFO] Skipping bill 1795485 - already processed (620/2608)
2025-12-03 11:04:50,263 [INFO] Skipping bill 1854739 - already processed (621/2608)
2025-12-03 11:04:50,263 [INFO] Skipping bill 1799043 - already processed (622/2608)
2025-12-03 11:04:50,263 [INFO] Skipping bill 1974284 - already processed (623/2608)
2025-12-03 11:04:50,263 [INFO] Skipping bill 1974163 - already processed (624/2608)
2025-12-03 11:04:50,264 [INFO] Skipping bill 1994222 - already processed (625/2608)
2025-12-03 11:04:50,264 [INFO] Skipping bill 1970124 - already processed (626/2608)
2025-12-03 11:04:50,265 [INFO] Skipping bill 1908054 - already processed (627/2608)
2025-12-03 11:04:50,265 [INFO] Skipping bill 1904666 - already processed (628/2608)
2025-12-03 11:04:50,265 [INFO] Skipping bill 1975714 - already processed (629/2608)
2025-12-03 11:04:50,265 [INFO] Skipping bill 1974214 - already processed (630/2608)
2025-12-03 11:04:50,265 [INFO] Skipping bill 1765786 - already processed (631/2608)
2025-12-03 11:04:50,265 [INFO] Skipping bill 1751941 - already processed (632/2608)
2025-12-03 11:04:50,265 [INFO] Skipping bill 1747213 - already processed (633/2608)
2025-12-03 11:04:50,265 [INFO] Skipping bill 1872579 - already processed (634/2608)
2025-12-03 11:04:50,265 [INFO] Skipping bill 1831630 - already processed (635/2608)
2025-12-03 11:04:50,265 [INFO] Skipping bill 1869553 - already processed (636/2608)
2025-12-03 11:04:50,265 [INFO] Skipping bill 1856482 - already processed (637/2608)
2025-12-03 11:04:50,265 [INFO] Skipping bill 1877177 - already processed (638/2608)
2025-12-03 11:04:50,265 [INFO] Skipping bill 1856535 - already processed (639/2608)
2025-12-03 11:04:50,266 [INFO] Processing 640/2608: Bill ID 1856106
2025-12-03 11:04:50,766 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:04:50,767 [ERROR] Failed to generate report for bill 1856106: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 139494 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 139494 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:04:50,813 [INFO] Saved 2605 reports to data/bill_reports.json
2025-12-03 11:04:50,813 [INFO] Progress: 640/2608 - Processed: 0, Skipped: 611, Errors: 29
2025-12-03 11:04:51,817 [INFO] Skipping bill 2036140 - already processed (641/2608)
2025-12-03 11:04:51,818 [INFO] Skipping bill 2013841 - already processed (642/2608)
2025-12-03 11:04:51,818 [INFO] Skipping bill 2036152 - already processed (643/2608)
2025-12-03 11:04:51,818 [INFO] Skipping bill 2035054 - already processed (644/2608)
2025-12-03 11:04:51,818 [INFO] Skipping bill 2020836 - already processed (645/2608)
2025-12-03 11:04:51,818 [INFO] Skipping bill 2034414 - already processed (646/2608)
2025-12-03 11:04:51,819 [INFO] Skipping bill 2036147 - already processed (647/2608)
2025-12-03 11:04:51,819 [INFO] Skipping bill 2017245 - already processed (648/2608)
2025-12-03 11:04:51,819 [INFO] Processing 649/2608: Bill ID 2020366
2025-12-03 11:04:52,319 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:04:52,321 [ERROR] Failed to generate report for bill 2020366: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 138834 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:04:53,330 [INFO] Skipping bill 1754734 - already processed (650/2608)
2025-12-03 11:04:53,331 [INFO] Skipping bill 1766525 - already processed (651/2608)
2025-12-03 11:04:53,332 [INFO] Skipping bill 1993701 - already processed (652/2608)
2025-12-03 11:04:53,332 [INFO] Skipping bill 2024454 - already processed (653/2608)
2025-12-03 11:04:53,332 [INFO] Skipping bill 1989654 - already processed (654/2608)
2025-12-03 11:04:53,332 [INFO] Skipping bill 1923257 - already processed (655/2608)
2025-12-03 11:04:53,332 [INFO] Skipping bill 2012930 - already processed (656/2608)
2025-12-03 11:04:53,332 [INFO] Skipping bill 2022043 - already processed (657/2608)
2025-12-03 11:04:53,332 [INFO] Skipping bill 1977885 - already processed (658/2608)
2025-12-03 11:04:53,332 [INFO] Skipping bill 1903898 - already processed (659/2608)
2025-12-03 11:04:53,332 [INFO] Skipping bill 2022085 - already processed (660/2608)
2025-12-03 11:04:53,332 [INFO] Skipping bill 2024471 - already processed (661/2608)
2025-12-03 11:04:53,332 [INFO] Skipping bill 1962449 - already
processed (662/2608)
2025-12-03 11:04:53,332 [INFO] Skipping bill 1948585 - already processed (663/2608)
2025-12-03 11:04:53,332 [INFO] Skipping bill 2027763 - already processed (664/2608)
2025-12-03 11:04:53,332 [INFO] Skipping bill 2038183 - already processed (665/2608)
2025-12-03 11:04:53,333 [INFO] Skipping bill 2012908 - already processed (666/2608)
2025-12-03 11:04:53,333 [INFO] Skipping bill 1703457 - already processed (667/2608)
2025-12-03 11:04:53,333 [INFO] Skipping bill 1703326 - already processed (668/2608)
2025-12-03 11:04:53,333 [INFO] Skipping bill 1703583 - already processed (669/2608)
2025-12-03 11:04:53,333 [INFO] Skipping bill 1703488 - already processed (670/2608)
2025-12-03 11:04:53,333 [INFO] Skipping bill 1694229 - already processed (671/2608)
2025-12-03 11:04:53,333 [INFO] Skipping bill 1697293 - already processed (672/2608)
2025-12-03 11:04:53,333 [INFO] Skipping bill 1694179 - already processed (673/2608)
2025-12-03 11:04:53,333 [INFO] Skipping bill 1707790 - already processed (674/2608)
2025-12-03 11:04:53,333 [INFO] Skipping bill 1691409 - already processed (675/2608)
2025-12-03 11:04:53,333 [INFO] Skipping bill 1679149 - already processed (676/2608)
2025-12-03 11:04:53,333 [INFO] Skipping bill 1697468 - already processed (677/2608)
2025-12-03 11:04:53,333 [INFO] Skipping bill 1703148 - already processed (678/2608)
2025-12-03 11:04:53,334 [INFO] Skipping bill 1835739 - already processed (679/2608)
2025-12-03 11:04:53,334 [INFO] Skipping bill 1840482 - already processed (680/2608)
2025-12-03 11:04:53,334 [INFO] Skipping bill 1842215 - already processed (681/2608)
2025-12-03 11:04:53,334 [INFO] Skipping bill 1838035 - already processed (682/2608)
2025-12-03 11:04:53,334 [INFO] Skipping bill 1842106 - already processed (683/2608)
2025-12-03 11:04:53,334 [INFO] Skipping bill 1839236 - already processed (684/2608)
2025-12-03 11:04:53,334 [INFO] Skipping bill 1839142 - already processed (685/2608)
2025-12-03 11:04:53,334 [INFO] Skipping bill 1838028 - already processed (686/2608)
2025-12-03 11:04:53,334 [INFO] Skipping bill 1837867 - already processed (687/2608)
2025-12-03 11:04:53,334 [INFO] Skipping bill 1835606 - already processed (688/2608)
2025-12-03 11:04:53,334 [INFO] Skipping bill 1825025 - already processed (689/2608)
2025-12-03 11:04:53,334 [INFO] Skipping bill 1826297 - already processed (690/2608)
2025-12-03 11:04:53,334 [INFO] Skipping bill 1847549 - already processed (691/2608)
2025-12-03 11:04:53,335 [INFO] Skipping bill 1839307 - already processed (692/2608)
2025-12-03 11:04:53,335 [INFO] Skipping bill 1842129 - already processed (693/2608)
2025-12-03 11:04:53,335 [INFO] Skipping bill 1837909 - already processed (694/2608)
2025-12-03 11:04:53,335 [INFO] Skipping bill 1797714 - already processed (695/2608)
2025-12-03 11:04:53,335 [INFO] Skipping bill 1839204 - already processed (696/2608)
2025-12-03 11:04:53,335 [INFO] Skipping bill 1835710 - already processed (697/2608)
2025-12-03 11:04:53,335 [INFO] Skipping bill 1837838 - already processed (698/2608)
2025-12-03 11:04:53,335 [INFO] Skipping bill 1837893 - already processed (699/2608)
2025-12-03 11:04:53,335 [INFO] Skipping bill 1835695 - already processed (700/2608)
2025-12-03 11:04:53,335 [INFO] Skipping bill 1837995 - already processed (701/2608)
2025-12-03 11:04:53,335 [INFO] Skipping bill 1842172 - already processed (702/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1817737 - already processed (703/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1953268 - already processed (704/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1961326 - already processed (705/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1961123 - already processed (706/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1953218 - already processed (707/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1945231 - already processed (708/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1949851 - already processed (709/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1945281 - already processed (710/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1945285 - already processed (711/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1949794 - already processed (712/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1949746 - already processed (713/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1949835 - already processed (714/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1961190 - already processed (715/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1953113 - already processed (716/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1936713 - already processed (717/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1939378 - already processed (718/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1909925 - already processed (719/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1961341 - already processed (720/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1922403 - already processed (721/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1899660 - already processed (722/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1961327 - already processed (723/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1953223 - already processed (724/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1953246 - already processed (725/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1955835 - already processed (726/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1933617 - already processed (727/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1945335 - already processed (728/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1961410 - already processed (729/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1926508 - already processed (730/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1943426 - already processed (731/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1949808 - already processed (732/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1949848 - already processed (733/2608)
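The recurring `context_length_exceeded` failures in this run all follow the same pattern: the serialized bill passed into `chain.invoke({"bill_json": bill_json})` exceeds the model's 128,000-token window. A pre-flight guard could clip the payload before the request is sent. This is only a sketch, not the project's actual code: the ~4-characters-per-token ratio is a rough heuristic (an exact count would need a tokenizer such as tiktoken), and `truncate_bill_json` and the budget constants are hypothetical names.

```python
# Hypothetical pre-flight guard for create_detailed_report (sketch only).
# The ~4-characters-per-token ratio is a crude heuristic for English text;
# an exact count would require a real tokenizer such as tiktoken.

MAX_CONTEXT_TOKENS = 128_000    # model limit reported in the 400 errors above
PROMPT_BUDGET_TOKENS = 110_000  # leave headroom for the prompt template and reply
CHARS_PER_TOKEN = 4             # rough approximation

def truncate_bill_json(bill_json: str, budget_tokens: int = PROMPT_BUDGET_TOKENS) -> str:
    """Clip the serialized bill so its estimated token count fits the budget."""
    max_chars = budget_tokens * CHARS_PER_TOKEN
    if len(bill_json) <= max_chars:
        return bill_json
    # Keep the head of the document; a marker records that text was dropped.
    return bill_json[:max_chars] + "\n[TRUNCATED]"
```

A more faithful fix would summarize oversized bills in chunks rather than discarding their tails, but even a guard like this would turn the 400 errors above into degraded-but-successful reports.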
2025-12-03 11:04:53,336 [INFO] Skipping bill 1947517 - already processed (734/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1945267 - already processed (735/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1961205 - already processed (736/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1953214 - already processed (737/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1943446 - already processed (738/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1973042 - already processed (739/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1961299 - already processed (740/2608)
2025-12-03 11:04:53,336 [INFO] Skipping bill 1933601 - already processed (741/2608)
2025-12-03 11:04:53,337 [INFO] Skipping bill 1933621 - already processed (742/2608)
2025-12-03 11:04:53,337 [INFO] Processing 743/2608: Bill ID 1919287
2025-12-03 11:04:53,856 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:04:53,857 [ERROR] Failed to generate report for bill 1919287: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 128427 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:04:54,867 [INFO] Skipping bill 1933460 - already processed (744/2608)
2025-12-03 11:04:54,868 [INFO] Skipping bill 1933670 - already processed (745/2608)
2025-12-03 11:04:54,868 [INFO] Skipping bill 1922377 - already processed (746/2608)
2025-12-03 11:04:54,868 [INFO] Skipping bill 1735361 - already processed (747/2608)
2025-12-03 11:04:54,869 [INFO] Skipping bill 1742559 - already processed (748/2608)
2025-12-03 11:04:54,869 [INFO] Skipping bill 1775856 - already processed (749/2608)
2025-12-03 11:04:54,870 [INFO] Skipping bill 1738097 - already processed (750/2608)
2025-12-03 11:04:54,870 [INFO] Skipping bill 1794760 - already processed (751/2608)
2025-12-03 11:04:54,870 [INFO] Skipping bill 1736131 - already processed (752/2608)
2025-12-03 11:04:54,870 [INFO] Skipping bill 1885778 - already processed (753/2608)
2025-12-03 11:04:54,871 [INFO] Skipping bill 1808592 - already processed (754/2608)
2025-12-03 11:04:54,871 [INFO] Skipping bill 1878825 - already processed (755/2608)
2025-12-03 11:04:54,871 [INFO] Skipping bill 1884638 - already
processed (756/2608)
2025-12-03 11:04:54,871 [INFO] Skipping bill 1738996 - already processed (757/2608)
2025-12-03 11:04:54,871 [INFO] Skipping bill 1878228 - already processed (758/2608)
2025-12-03 11:04:54,871 [INFO] Skipping bill 1872865 - already processed (759/2608)
2025-12-03 11:04:54,872 [INFO] Skipping bill 1881167 - already processed (760/2608)
2025-12-03 11:04:54,872 [INFO] Skipping bill 1881743 - already processed (761/2608)
2025-12-03 11:04:54,872 [INFO] Skipping bill 1852772 - already processed (762/2608)
2025-12-03 11:04:54,872 [INFO] Skipping bill 1884104 - already processed (763/2608)
2025-12-03 11:04:54,872 [INFO] Skipping bill 1738794 - already processed (764/2608)
2025-12-03 11:04:54,872 [INFO] Skipping bill 1893080 - already processed (765/2608)
2025-12-03 11:04:54,872 [INFO] Skipping bill 1881922 - already processed (766/2608)
2025-12-03 11:04:54,872 [INFO] Skipping bill 1883178 - already processed (767/2608)
2025-12-03 11:04:54,872 [INFO] Skipping bill 1881587 - already processed (768/2608)
2025-12-03 11:04:54,872 [INFO] Skipping bill 1884487 - already processed (769/2608)
2025-12-03 11:04:54,873 [INFO] Skipping bill 1859182 - already processed (770/2608)
2025-12-03 11:04:54,873 [INFO] Skipping bill 1866861 - already processed (771/2608)
2025-12-03 11:04:54,873 [INFO] Processing 772/2608: Bill ID 1891836
2025-12-03 11:04:55,493 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:04:55,496 [ERROR] Failed to generate report for bill 1891836: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 144997 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:04:56,505 [INFO] Skipping bill 1883738 - already processed (773/2608)
2025-12-03 11:04:56,506 [INFO] Skipping bill 1682652 - already processed (774/2608)
2025-12-03 11:04:56,506 [INFO] Skipping bill 1742464 - already processed (775/2608)
2025-12-03 11:04:56,506 [INFO] Skipping bill 1728366 - already processed (776/2608)
2025-12-03 11:04:56,506 [INFO] Skipping bill 1726524 - already processed (777/2608)
2025-12-03 11:04:56,506 [INFO] Skipping bill 1737208 - already processed (778/2608)
2025-12-03 11:04:56,506 [INFO] Skipping bill 1749398 - already processed (779/2608)
2025-12-03 11:04:56,506 [INFO] Skipping bill 1738008 - already processed (780/2608)
2025-12-03 11:04:56,507 [INFO] Skipping bill 1735894 - already processed (781/2608)
2025-12-03 11:04:56,507 [INFO] Skipping bill 1841416 - already processed (782/2608)
2025-12-03 11:04:56,507 [INFO] Skipping bill 1736739 - already processed (783/2608)
2025-12-03 11:04:56,507 [INFO] Skipping bill 1737586 - already processed (784/2608)
2025-12-03 11:04:56,507 [INFO] Skipping bill 1884557 - already
processed (785/2608)
2025-12-03 11:04:56,507 [INFO] Processing 786/2608: Bill ID 1875094
2025-12-03 11:04:57,542 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:04:57,543 [ERROR] Failed to generate report for bill 1875094: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 281291 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 281291 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:04:58,554 [INFO] Processing 787/2608: Bill ID 1755026
2025-12-03 11:04:59,334 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:04:59,336 [ERROR] Failed to generate report for bill 1755026: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 211752 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:05:00,347 [INFO] Processing 788/2608: Bill ID 1871591
2025-12-03 11:05:01,229 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:05:01,232 [ERROR] Failed to generate report for bill 1871591: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 247438 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:05:02,242 [INFO] Processing 789/2608: Bill ID 1760451
2025-12-03 11:05:03,176 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:05:03,178 [ERROR] Failed to generate report for bill 1760451: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 254452 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:05:04,187 [INFO] Processing 790/2608: Bill ID 1880948
2025-12-03 11:05:05,167 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:05:05,170 [ERROR] Failed to generate report for bill 1880948: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 280764 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:05:05,217 [INFO] Saved 2605 reports to data/bill_reports.json
2025-12-03 11:05:05,218 [INFO] Progress: 790/2608 - Processed: 0, Skipped: 753, Errors: 37
2025-12-03 11:05:06,220 [INFO] Processing 791/2608: Bill ID 1775764
2025-12-03 11:05:07,260 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:05:07,268 [ERROR] Failed to generate report for bill 1775764: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 323686 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:05:08,272 [INFO] Processing 792/2608: Bill ID 1884634
2025-12-03 11:05:09,422 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:05:09,424 [ERROR] Failed to generate report for bill 1884634: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 362014 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:05:10,435 [INFO] Skipping bill 2000828 - already processed (793/2608)
2025-12-03 11:05:10,435 [INFO] Skipping bill 2001551 - already processed (794/2608)
2025-12-03 11:05:10,435 [INFO] Skipping bill 1997130 - already processed (795/2608)
2025-12-03 11:05:10,436 [INFO] Skipping bill 2046647 - already processed (796/2608)
2025-12-03 11:05:10,436 [INFO] Skipping bill 2004206 - already processed (797/2608)
2025-12-03 11:05:10,436 [INFO] Skipping bill 1998184 - already processed (798/2608)
2025-12-03 11:05:10,436 [INFO] Skipping bill 2002506 - already processed (799/2608)
2025-12-03 11:05:10,437 [INFO] Skipping bill 2002695 - already processed (800/2608)
2025-12-03 11:05:10,437 [INFO] Skipping bill 2047070 - already processed (801/2608)
2025-12-03 11:05:10,437 [INFO] Skipping bill 2002923 - already processed (802/2608)
2025-12-03 11:05:10,437 [INFO] Skipping bill 1998946 - already processed (803/2608)
2025-12-03 11:05:10,437 [INFO] Skipping bill 1997259 - already processed (804/2608)
2025-12-03 11:05:10,438 [INFO] Skipping bill 2001269 - already processed (805/2608)
2025-12-03 11:05:10,438 [INFO] Skipping bill 2000625 - already processed (806/2608)
2025-12-03 11:05:10,438 [INFO] Skipping bill 2002705 - already processed (807/2608)
2025-12-03 11:05:10,438 [INFO] Skipping bill 2046676 - already processed (808/2608)
2025-12-03 11:05:10,438 [INFO] Skipping bill 2046660 - already processed (809/2608)
2025-12-03 11:05:10,438 [INFO] Skipping bill 2003933 - already processed (810/2608)
2025-12-03 11:05:10,438 [INFO] Skipping bill 1997268 - already processed (811/2608)
2025-12-03 11:05:10,438 [INFO] Skipping bill 2019724 - already processed (812/2608)
2025-12-03 11:05:10,438 [INFO] Skipping bill 1997990 - already processed (813/2608)
2025-12-03 11:05:10,439 [INFO] Skipping bill 1998675 - already processed (814/2608)
2025-12-03 11:05:10,439 [INFO] Skipping bill 2002243 - already processed (815/2608)
2025-12-03 11:05:10,439 [INFO] Skipping bill 1997584 - already processed (816/2608)
2025-12-03 11:05:10,439 [INFO] Skipping bill 2002929 - already processed (817/2608)
2025-12-03 11:05:10,439 [INFO] Skipping bill 2001175 - already processed (818/2608)
2025-12-03 11:05:10,439 [INFO] Skipping bill 1998815 - already processed (819/2608)
2025-12-03 11:05:10,439 [INFO] Skipping bill 1998575 - already processed (820/2608)
2025-12-03 11:05:10,439 [INFO] Skipping bill 1999210 - already processed (821/2608)
2025-12-03 11:05:10,439 [INFO] Skipping bill 2001320 - already processed (822/2608)
2025-12-03 11:05:10,439 [INFO] Skipping bill 2053304 - already processed (823/2608)
2025-12-03 11:05:10,439 [INFO] Skipping bill 2001993 - already processed (824/2608)
2025-12-03 11:05:10,439 [INFO] Skipping bill 1999288 - already processed (825/2608)
2025-12-03 11:05:10,439 [INFO] Skipping bill 1998331 - already processed (826/2608)
2025-12-03 11:05:10,439 [INFO] Skipping bill 2003746 - already processed (827/2608)
2025-12-03 11:05:10,439 [INFO] Skipping bill 1927181 - already processed (828/2608)
2025-12-03 11:05:10,439 [INFO] Skipping bill
2030259 - already processed (829/2608)
2025-12-03 11:05:10,439 [INFO] Skipping bill 1997622 - already processed (830/2608)
2025-12-03 11:05:10,439 [INFO] Processing 831/2608: Bill ID 2028594
2025-12-03 11:05:11,572 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:05:11,574 [ERROR] Failed to generate report for bill 2028594: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 252856 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:05:12,582 [INFO] Processing 832/2608: Bill ID 2038620
2025-12-03 11:05:13,618 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:05:13,619 [ERROR] Failed to generate report for bill 2038620: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 311445 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 311445 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-03 11:05:14,630 [INFO] Processing 833/2608: Bill ID 2024637 2025-12-03 11:05:15,467 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-03 11:05:15,470 [ERROR] Failed to generate report for bill 2024637: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 218599 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 218599 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-03 11:05:16,478 [INFO] Skipping bill 1780182 - already processed (834/2608) 2025-12-03 11:05:16,479 [INFO] Skipping bill 1895692 - already processed (835/2608) 2025-12-03 11:05:16,479 [INFO] Skipping bill 1780190 - already processed (836/2608) 2025-12-03 11:05:16,479 [INFO] Skipping bill 1780196 - already processed (837/2608) 2025-12-03 11:05:16,480 [INFO] Skipping bill 1780166 - already processed (838/2608) 2025-12-03 11:05:16,480 [INFO] Skipping bill 1888099 - already processed (839/2608) 2025-12-03 11:05:16,480 [INFO] Skipping bill 1852983 - already processed (840/2608) 2025-12-03 11:05:16,480 [INFO] Skipping bill 1852813 - already processed (841/2608) 2025-12-03 11:05:16,480 [INFO] Skipping bill 2037995 - already processed (842/2608) 2025-12-03 11:05:16,480 [INFO] Skipping bill 2043787 - already processed (843/2608) 2025-12-03 11:05:16,481 [INFO] Skipping bill 2035241 - already processed (844/2608) 2025-12-03 11:05:16,481 [INFO] Skipping bill 2035278 - already processed (845/2608) 2025-12-03 11:05:16,481 [INFO] Skipping bill 2038014 - already 
processed (846/2608) 2025-12-03 11:05:16,481 [INFO] Skipping bill 2009885 - already processed (847/2608) 2025-12-03 11:05:16,481 [INFO] Skipping bill 2035768 - already processed (848/2608) 2025-12-03 11:05:16,481 [INFO] Skipping bill 2025453 - already processed (849/2608) 2025-12-03 11:05:16,481 [INFO] Skipping bill 2038856 - already processed (850/2608) 2025-12-03 11:05:16,482 [INFO] Skipping bill 2009892 - already processed (851/2608) 2025-12-03 11:05:16,482 [INFO] Skipping bill 1861260 - already processed (852/2608) 2025-12-03 11:05:16,482 [INFO] Skipping bill 1856334 - already processed (853/2608) 2025-12-03 11:05:16,482 [INFO] Skipping bill 1856821 - already processed (854/2608) 2025-12-03 11:05:16,482 [INFO] Skipping bill 1864646 - already processed (855/2608) 2025-12-03 11:05:16,482 [INFO] Skipping bill 1860647 - already processed (856/2608) 2025-12-03 11:05:16,482 [INFO] Skipping bill 1707979 - already processed (857/2608) 2025-12-03 11:05:16,482 [INFO] Skipping bill 1643078 - already processed (858/2608) 2025-12-03 11:05:16,483 [INFO] Skipping bill 1651590 - already processed (859/2608) 2025-12-03 11:05:16,483 [INFO] Skipping bill 1852405 - already processed (860/2608) 2025-12-03 11:05:16,483 [INFO] Skipping bill 1852812 - already processed (861/2608) 2025-12-03 11:05:16,483 [INFO] Skipping bill 1858711 - already processed (862/2608) 2025-12-03 11:05:16,483 [INFO] Skipping bill 1853103 - already processed (863/2608) 2025-12-03 11:05:16,483 [INFO] Skipping bill 1851979 - already processed (864/2608) 2025-12-03 11:05:16,483 [INFO] Skipping bill 1859186 - already processed (865/2608) 2025-12-03 11:05:16,484 [INFO] Skipping bill 1740589 - already processed (866/2608) 2025-12-03 11:05:16,484 [INFO] Skipping bill 1741802 - already processed (867/2608) 2025-12-03 11:05:16,484 [INFO] Skipping bill 1860410 - already processed (868/2608) 2025-12-03 11:05:16,484 [INFO] Skipping bill 1957720 - already processed (869/2608) 2025-12-03 11:05:16,484 [INFO] Skipping bill 
1974786 - already processed (870/2608) 2025-12-03 11:05:16,484 [INFO] Skipping bill 1989670 - already processed (871/2608) 2025-12-03 11:05:16,484 [INFO] Skipping bill 1979597 - already processed (872/2608) 2025-12-03 11:05:16,485 [INFO] Skipping bill 1984757 - already processed (873/2608) 2025-12-03 11:05:16,485 [INFO] Skipping bill 2009204 - already processed (874/2608) 2025-12-03 11:05:16,485 [INFO] Skipping bill 2015254 - already processed (875/2608) 2025-12-03 11:05:16,485 [INFO] Skipping bill 1974962 - already processed (876/2608) 2025-12-03 11:05:16,485 [INFO] Skipping bill 2009276 - already processed (877/2608) 2025-12-03 11:05:16,485 [INFO] Skipping bill 1989103 - already processed (878/2608) 2025-12-03 11:05:16,485 [INFO] Skipping bill 1984950 - already processed (879/2608) 2025-12-03 11:05:16,485 [INFO] Skipping bill 1975975 - already processed (880/2608) 2025-12-03 11:05:16,486 [INFO] Skipping bill 2004610 - already processed (881/2608) 2025-12-03 11:05:16,486 [INFO] Skipping bill 2004938 - already processed (882/2608) 2025-12-03 11:05:16,486 [INFO] Skipping bill 1992603 - already processed (883/2608) 2025-12-03 11:05:16,486 [INFO] Skipping bill 1992640 - already processed (884/2608) 2025-12-03 11:05:16,486 [INFO] Skipping bill 1996293 - already processed (885/2608) 2025-12-03 11:05:16,486 [INFO] Skipping bill 2011831 - already processed (886/2608) 2025-12-03 11:05:16,486 [INFO] Skipping bill 2012661 - already processed (887/2608) 2025-12-03 11:05:16,487 [INFO] Skipping bill 1950967 - already processed (888/2608) 2025-12-03 11:05:16,487 [INFO] Skipping bill 1994787 - already processed (889/2608) 2025-12-03 11:05:16,487 [INFO] Skipping bill 2011159 - already processed (890/2608) 2025-12-03 11:05:16,487 [INFO] Skipping bill 2006411 - already processed (891/2608) 2025-12-03 11:05:16,487 [INFO] Skipping bill 2011256 - already processed (892/2608) 2025-12-03 11:05:16,487 [INFO] Skipping bill 2004789 - already processed (893/2608) 2025-12-03 11:05:16,487 
[INFO] Skipping bill 1981280 - already processed (894/2608) 2025-12-03 11:05:16,487 [INFO] Skipping bill 2009071 - already processed (895/2608) 2025-12-03 11:05:16,487 [INFO] Skipping bill 1967748 - already processed (896/2608) 2025-12-03 11:05:16,487 [INFO] Skipping bill 1707150 - already processed (897/2608) 2025-12-03 11:05:16,487 [INFO] Skipping bill 1669781 - already processed (898/2608) 2025-12-03 11:05:16,487 [INFO] Skipping bill 1643012 - already processed (899/2608) 2025-12-03 11:05:16,487 [INFO] Skipping bill 1848903 - already processed (900/2608) 2025-12-03 11:05:16,487 [INFO] Skipping bill 1848260 - already processed (901/2608) 2025-12-03 11:05:16,487 [INFO] Skipping bill 1820844 - already processed (902/2608) 2025-12-03 11:05:16,487 [INFO] Skipping bill 1851922 - already processed (903/2608) 2025-12-03 11:05:16,487 [INFO] Skipping bill 1850740 - already processed (904/2608) 2025-12-03 11:05:16,487 [INFO] Skipping bill 1838535 - already processed (905/2608) 2025-12-03 11:05:16,487 [INFO] Skipping bill 1851828 - already processed (906/2608) 2025-12-03 11:05:16,487 [INFO] Skipping bill 1863177 - already processed (907/2608) 2025-12-03 11:05:16,487 [INFO] Skipping bill 1852015 - already processed (908/2608) 2025-12-03 11:05:16,488 [INFO] Skipping bill 1818886 - already processed (909/2608) 2025-12-03 11:05:16,488 [INFO] Skipping bill 1852513 - already processed (910/2608) 2025-12-03 11:05:16,488 [INFO] Processing 911/2608: Bill ID 1851836 2025-12-03 11:05:17,416 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-03 11:05:17,418 [ERROR] Failed to generate report for bill 1851836: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 185865 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 185865 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-03 11:05:18,423 [INFO] Skipping bill 1933975 - already processed (912/2608) 2025-12-03 11:05:18,423 [INFO] Skipping bill 1935092 - already processed (913/2608) 2025-12-03 11:05:18,423 [INFO] Skipping bill 1937681 - already processed (914/2608) 2025-12-03 11:05:18,424 [INFO] Skipping bill 1927333 - already processed (915/2608) 2025-12-03 11:05:18,424 [INFO] Skipping bill 1936069 - already processed (916/2608) 2025-12-03 11:05:18,424 [INFO] Skipping bill 1940299 - already processed (917/2608) 2025-12-03 11:05:18,424 [INFO] Skipping bill 1911677 - already processed (918/2608) 2025-12-03 11:05:18,424 [INFO] Skipping bill 1929973 - already processed (919/2608) 2025-12-03 11:05:18,425 [INFO] Skipping bill 1910359 - already processed (920/2608) 2025-12-03 11:05:18,425 [INFO] Skipping bill 1934687 - already processed (921/2608) 2025-12-03 11:05:18,425 [INFO] Skipping bill 1930038 - already processed (922/2608) 2025-12-03 11:05:18,425 [INFO] Skipping bill 1925325 - already processed (923/2608) 2025-12-03 11:05:18,425 [INFO] Skipping bill 1933890 - already 
processed (924/2608) 2025-12-03 11:05:18,425 [INFO] Skipping bill 1934898 - already processed (925/2608) 2025-12-03 11:05:18,425 [INFO] Skipping bill 2034194 - already processed (926/2608) 2025-12-03 11:05:18,425 [INFO] Skipping bill 1972440 - already processed (927/2608) 2025-12-03 11:05:18,425 [INFO] Skipping bill 1934020 - already processed (928/2608) 2025-12-03 11:05:18,425 [INFO] Skipping bill 1912210 - already processed (929/2608) 2025-12-03 11:05:18,425 [INFO] Skipping bill 1634819 - already processed (930/2608) 2025-12-03 11:05:18,425 [INFO] Skipping bill 1634779 - already processed (931/2608) 2025-12-03 11:05:18,425 [INFO] Skipping bill 1836873 - already processed (932/2608) 2025-12-03 11:05:18,425 [INFO] Skipping bill 1834678 - already processed (933/2608) 2025-12-03 11:05:18,425 [INFO] Skipping bill 1790707 - already processed (934/2608) 2025-12-03 11:05:18,426 [INFO] Skipping bill 1852775 - already processed (935/2608) 2025-12-03 11:05:18,426 [INFO] Skipping bill 1897040 - already processed (936/2608) 2025-12-03 11:05:18,426 [INFO] Skipping bill 1898466 - already processed (937/2608) 2025-12-03 11:05:18,426 [INFO] Skipping bill 1893847 - already processed (938/2608) 2025-12-03 11:05:18,426 [INFO] Skipping bill 1983834 - already processed (939/2608) 2025-12-03 11:05:18,426 [INFO] Skipping bill 1988287 - already processed (940/2608) 2025-12-03 11:05:18,426 [INFO] Skipping bill 1894415 - already processed (941/2608) 2025-12-03 11:05:18,426 [INFO] Skipping bill 1917533 - already processed (942/2608) 2025-12-03 11:05:18,426 [INFO] Skipping bill 1900966 - already processed (943/2608) 2025-12-03 11:05:18,426 [INFO] Skipping bill 1972401 - already processed (944/2608) 2025-12-03 11:05:18,426 [INFO] Skipping bill 1988699 - already processed (945/2608) 2025-12-03 11:05:18,426 [INFO] Skipping bill 1988844 - already processed (946/2608) 2025-12-03 11:05:18,426 [INFO] Skipping bill 1894126 - already processed (947/2608) 2025-12-03 11:05:18,426 [INFO] Skipping bill 
1974757 - already processed (948/2608) 2025-12-03 11:05:18,426 [INFO] Skipping bill 1717719 - already processed (949/2608) 2025-12-03 11:05:18,426 [INFO] Skipping bill 1912107 - already processed (950/2608) 2025-12-03 11:05:18,427 [INFO] Skipping bill 1941091 - already processed (951/2608) 2025-12-03 11:05:18,427 [INFO] Skipping bill 1916250 - already processed (952/2608) 2025-12-03 11:05:18,427 [INFO] Skipping bill 1974033 - already processed (953/2608) 2025-12-03 11:05:18,427 [INFO] Skipping bill 1895954 - already processed (954/2608) 2025-12-03 11:05:18,427 [INFO] Skipping bill 1974042 - already processed (955/2608) 2025-12-03 11:05:18,427 [INFO] Skipping bill 1981849 - already processed (956/2608) 2025-12-03 11:05:18,427 [INFO] Skipping bill 1979780 - already processed (957/2608) 2025-12-03 11:05:18,427 [INFO] Skipping bill 1896111 - already processed (958/2608) 2025-12-03 11:05:18,427 [INFO] Skipping bill 1971592 - already processed (959/2608) 2025-12-03 11:05:18,427 [INFO] Skipping bill 1971640 - already processed (960/2608) 2025-12-03 11:05:18,427 [INFO] Skipping bill 1896588 - already processed (961/2608) 2025-12-03 11:05:18,427 [INFO] Skipping bill 1981663 - already processed (962/2608) 2025-12-03 11:05:18,427 [INFO] Skipping bill 1867796 - already processed (963/2608) 2025-12-03 11:05:18,427 [INFO] Skipping bill 1867828 - already processed (964/2608) 2025-12-03 11:05:18,427 [INFO] Skipping bill 1813907 - already processed (965/2608) 2025-12-03 11:05:18,428 [INFO] Skipping bill 1814493 - already processed (966/2608) 2025-12-03 11:05:18,428 [INFO] Skipping bill 1867439 - already processed (967/2608) 2025-12-03 11:05:18,428 [INFO] Skipping bill 1814241 - already processed (968/2608) 2025-12-03 11:05:18,428 [INFO] Skipping bill 1935238 - already processed (969/2608) 2025-12-03 11:05:18,428 [INFO] Skipping bill 1908945 - already processed (970/2608) 2025-12-03 11:05:18,428 [INFO] Skipping bill 1980982 - already processed (971/2608) 2025-12-03 11:05:18,428 
[INFO] Skipping bill 1934094 - already processed (972/2608) 2025-12-03 11:05:18,428 [INFO] Skipping bill 1931194 - already processed (973/2608) 2025-12-03 11:05:18,428 [INFO] Skipping bill 1915534 - already processed (974/2608) 2025-12-03 11:05:18,428 [INFO] Skipping bill 1927914 - already processed (975/2608) 2025-12-03 11:05:18,428 [INFO] Skipping bill 1710815 - already processed (976/2608) 2025-12-03 11:05:18,428 [INFO] Skipping bill 1748189 - already processed (977/2608) 2025-12-03 11:05:18,428 [INFO] Skipping bill 1746365 - already processed (978/2608) 2025-12-03 11:05:18,428 [INFO] Skipping bill 1965229 - already processed (979/2608) 2025-12-03 11:05:18,428 [INFO] Skipping bill 1999738 - already processed (980/2608) 2025-12-03 11:05:18,428 [INFO] Skipping bill 1989648 - already processed (981/2608) 2025-12-03 11:05:18,428 [INFO] Skipping bill 1946188 - already processed (982/2608) 2025-12-03 11:05:18,428 [INFO] Skipping bill 1892638 - already processed (983/2608) 2025-12-03 11:05:18,428 [INFO] Skipping bill 1944647 - already processed (984/2608) 2025-12-03 11:05:18,428 [INFO] Skipping bill 1983017 - already processed (985/2608) 2025-12-03 11:05:18,428 [INFO] Skipping bill 1954626 - already processed (986/2608) 2025-12-03 11:05:18,428 [INFO] Skipping bill 1977147 - already processed (987/2608) 2025-12-03 11:05:18,428 [INFO] Skipping bill 2013424 - already processed (988/2608) 2025-12-03 11:05:18,428 [INFO] Skipping bill 2013451 - already processed (989/2608) 2025-12-03 11:05:18,428 [INFO] Skipping bill 1953001 - already processed (990/2608) 2025-12-03 11:05:18,428 [INFO] Skipping bill 1982880 - already processed (991/2608) 2025-12-03 11:05:18,428 [INFO] Skipping bill 1989793 - already processed (992/2608) 2025-12-03 11:05:18,428 [INFO] Skipping bill 1954479 - already processed (993/2608) 2025-12-03 11:05:18,428 [INFO] Skipping bill 2031601 - already processed (994/2608) 2025-12-03 11:05:18,428 [INFO] Skipping bill 2009433 - already processed (995/2608) 
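Every failure in this run is the same 400 error: the serialized bill JSON passed to `chain.invoke({"bill_json": bill_json})` exceeds the model's 128,000-token context window (252,856, 311,445, 218,599, and 185,865 tokens). A minimal pre-flight guard would be to bound the payload before calling the model. The sketch below uses a rough 4-characters-per-token heuristic; the constants and the `truncate_bill_json` helper are illustrative, not code from `generate_reports.py`:

```python
# Hypothetical guard against context_length_exceeded: estimate the prompt
# size before calling the model and hard-truncate the bill JSON if it
# cannot fit. Assumes the bill JSON dominates the prompt length.
import json

MAX_INPUT_TOKENS = 120_000   # headroom under the 128,000-token context window
CHARS_PER_TOKEN = 4          # rough average for English text; not exact

def truncate_bill_json(bill: dict) -> str:
    """Serialize a bill, truncating it to (approximately) fit the context."""
    bill_json = json.dumps(bill)
    max_chars = MAX_INPUT_TOKENS * CHARS_PER_TOKEN
    if len(bill_json) <= max_chars:
        return bill_json
    # Crude cutoff; a smarter version would drop the bulkiest fields
    # (e.g. full bill text) before resorting to blind truncation.
    return bill_json[:max_chars]
```

A production version would count tokens exactly with a tokenizer matched to the model rather than relying on the character heuristic, and would summarize or chunk oversized bills instead of discarding their tails.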
2025-12-03 11:05:18,429 [INFO] Skipping bill 1901514 - already processed (996/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1651925 - already processed (997/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1793373 - already processed (998/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1793039 - already processed (999/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1792971 - already processed (1000/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1793409 - already processed (1001/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1793958 - already processed (1002/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1793284 - already processed (1003/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1938552 - already processed (1004/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1922870 - already processed (1005/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1803710 - already processed (1006/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1889722 - already processed (1007/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1892083 - already processed (1008/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1889346 - already processed (1009/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1889719 - already processed (1010/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1889335 - already processed (1011/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1897572 - already processed (1012/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1887538 - already processed (1013/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1887101 - already processed (1014/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1888624 - already processed (1015/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1877673 - already processed (1016/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1897803 - already processed (1017/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1889758 - already processed (1018/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1897565 - already processed (1019/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1853521 - already processed (1020/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1864839 - already processed (1021/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1879513 - already processed (1022/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1878078 - already processed (1023/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 2013662 - already processed (1024/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1897603 - already processed (1025/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1881186 - already processed (1026/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1983797 - already processed (1027/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 2023789 - already processed (1028/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1878049 - already processed (1029/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 2052496 - already processed (1030/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1807241 - already processed (1031/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1881870 - already processed (1032/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1881843 - already processed (1033/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 2030230 - already processed (1034/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 2022901 - already processed (1035/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1896879 - already processed (1036/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1889701 - already processed (1037/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 1970250 - already processed (1038/2608)
2025-12-03 11:05:18,429 [INFO] Skipping bill 2037153 - already processed (1039/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 2013635 - already processed (1040/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 1883140 - already processed (1041/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 1853367 - already processed (1042/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 1801284 - already processed (1043/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 1889518 - already processed (1044/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 1888073 - already processed (1045/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 2052173 - already processed (1046/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 2047520 - already processed (1047/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 1889754 - already processed (1048/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 1835303 - already processed (1049/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 1949479 - already processed (1050/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 2022816 - already processed (1051/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 1872559 - already processed (1052/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 1875857 - already processed (1053/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 1876467 - already processed (1054/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 1876586 - already processed (1055/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 2038328 - already processed (1056/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 1878887 - already processed (1057/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 1853095 - already processed (1058/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 1805407 - already processed (1059/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 2022907 - already processed (1060/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 1949574 - already processed (1061/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 1844841 - already processed (1062/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 1864295 - already processed (1063/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 1881176 - already processed (1064/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 1837365 - already processed (1065/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 1837180 - already processed (1066/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 1887099 - already processed (1067/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 2028679 - already processed (1068/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 2030354 - already processed (1069/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 1882474 - already processed (1070/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 1964010 - already processed (1071/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 2008967 - already processed (1072/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 1881178 - already processed (1073/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 2037324 - already processed (1074/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 1806224 - already processed (1075/2608)
2025-12-03 11:05:18,430 [INFO] Skipping bill 1837135 - already processed (1076/2608)
2025-12-03 11:05:18,431 [INFO] Skipping bill 1805930 - already processed (1077/2608)
2025-12-03 11:05:18,431 [INFO] Skipping bill 1803406 - already processed (1078/2608)
2025-12-03 11:05:18,431 [INFO] Skipping bill 1883773 - already processed (1079/2608)
2025-12-03 11:05:18,431 [INFO] Skipping bill 1994137 - already processed (1080/2608)
2025-12-03 11:05:18,431 [INFO] Skipping bill 1881306 - already processed (1081/2608)
2025-12-03 11:05:18,431 [INFO] Skipping bill 1889726 - already processed (1082/2608)
2025-12-03 11:05:18,431 [INFO] Skipping bill 1889593 - already processed (1083/2608)
2025-12-03 11:05:18,431 [INFO] Processing 1084/2608: Bill ID 1883494
2025-12-03 11:05:19,167 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:05:19,167 [ERROR] Failed to generate report for bill 1883494: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 245791 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 245791 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:05:20,173 [INFO] Processing 1085/2608: Bill ID 1883535
2025-12-03 11:05:20,991 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:05:20,993 [ERROR] Failed to generate report for bill 1883535: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 244625 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:05:22,003 [INFO] Processing 1086/2608: Bill ID 2038569
2025-12-03 11:05:22,923 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:05:22,926 [ERROR] Failed to generate report for bill 2038569: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 248177 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:05:23,935 [INFO] Processing 1087/2608: Bill ID 2038571
2025-12-03 11:05:24,679 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:05:24,682 [ERROR] Failed to generate report for bill 2038571: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 248161 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:05:25,689 [INFO] Skipping bill 1666814 - already processed (1088/2608)
2025-12-03 11:05:25,689 [INFO] Skipping bill 1722011 - already processed (1089/2608)
2025-12-03 11:05:25,689 [INFO] Skipping bill 1724398 - already processed (1090/2608)
2025-12-03 11:05:25,690 [INFO] Skipping bill 1676083 - already processed (1091/2608)
2025-12-03 11:05:25,690 [INFO] Skipping bill 1824011 - already processed (1092/2608)
2025-12-03 11:05:25,690 [INFO] Skipping bill 1824228 - already processed (1093/2608)
2025-12-03 11:05:25,690 [INFO] Skipping bill 1824028 - already processed (1094/2608)
2025-12-03 11:05:25,690 [INFO] Skipping bill 1834441 - already processed (1095/2608)
2025-12-03 11:05:25,690 [INFO] Skipping bill 1908238 - already processed (1096/2608)
2025-12-03 11:05:25,690 [INFO] Skipping bill 1967640 - already processed (1097/2608)
2025-12-03 11:05:25,690 [INFO] Skipping bill 1935448 - already processed (1098/2608)
2025-12-03 11:05:25,690 [INFO] Skipping bill 1987611 - already processed (1099/2608)
2025-12-03 11:05:25,690 [INFO] Skipping bill 1964156 - already processed (1100/2608)
2025-12-03 11:05:25,690 [INFO] Skipping bill 1947221 - already processed (1101/2608)
2025-12-03 11:05:25,690 [INFO] Skipping bill 1943110 - already processed (1102/2608)
2025-12-03 11:05:25,691 [INFO] Skipping bill 1964415 - already processed (1103/2608)
2025-12-03 11:05:25,691 [INFO] Skipping bill 1996731 - already processed (1104/2608)
2025-12-03 11:05:25,691 [INFO] Skipping bill 1944685 - already processed (1105/2608)
2025-12-03 11:05:25,691 [INFO] Skipping bill 1936020 - already processed (1106/2608)
2025-12-03 11:05:25,691 [INFO] Skipping bill 1947285 - already processed (1107/2608)
2025-12-03 11:05:25,691 [INFO] Skipping bill 1949498 - already processed (1108/2608)
2025-12-03 11:05:25,691 [INFO] Skipping bill 1933085 - already processed (1109/2608)
2025-12-03 11:05:25,691 [INFO] Skipping bill 1881403 - already processed (1110/2608)
2025-12-03 11:05:25,691 [INFO] Skipping bill 1878440 - already processed (1111/2608)
2025-12-03 11:05:25,691 [INFO] Skipping bill 1874641 - already processed (1112/2608)
2025-12-03 11:05:25,692 [INFO] Skipping bill 1780447 - already processed (1113/2608)
2025-12-03 11:05:25,692 [INFO] Skipping bill 1829313 - already processed (1114/2608)
2025-12-03 11:05:25,692 [INFO] Skipping bill 1876168 - already processed (1115/2608)
2025-12-03 11:05:25,692 [INFO] Skipping bill 1878357 - already processed (1116/2608)
2025-12-03 11:05:25,692 [INFO] Skipping bill 1801087 - already processed (1117/2608)
2025-12-03 11:05:25,692 [INFO] Skipping bill 1878533 - already processed (1118/2608)
2025-12-03 11:05:25,692 [INFO] Skipping bill 1781971 - already processed (1119/2608)
2025-12-03 11:05:25,692 [INFO] Skipping bill 1836944 - already processed (1120/2608)
2025-12-03 11:05:25,692 [INFO] Skipping bill 1773855 - already processed (1121/2608)
2025-12-03 11:05:25,692 [INFO] Skipping bill 1774758 - already processed (1122/2608)
2025-12-03 11:05:25,693 [INFO] Skipping bill 1779189 - already processed (1123/2608)
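The context_length_exceeded failures in this run happen because some bills serialize to roughly twice the model's 128000-token window. One way to guard the call is to clip the serialized bill to a token budget before invoking the chain; the sketch below uses a rough 4-characters-per-token heuristic rather than a real tokenizer, and `truncate_bill_json` / `MAX_INPUT_TOKENS` are illustrative names, not part of generate_reports.py:

```python
import json

MAX_INPUT_TOKENS = 110_000   # leave headroom under the 128k context limit
CHARS_PER_TOKEN = 4          # rough heuristic for English/JSON text

def truncate_bill_json(bill: dict) -> str:
    """Serialize a bill and clip it to an approximate token budget.

    A crude tail-drop: it loses the end of very long bill texts, but it
    turns a hard 400 context_length_exceeded error into a degraded report.
    """
    text = json.dumps(bill)
    budget = MAX_INPUT_TOKENS * CHARS_PER_TOKEN
    return text if len(text) <= budget else text[:budget]
```

A real tokenizer (e.g. tiktoken) would give an exact count; the heuristic only needs to stay safely under the limit, hence the generous headroom.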
2025-12-03 11:05:25,693 [INFO] Skipping bill 1780403 - already processed (1124/2608)
2025-12-03 11:05:25,693 [INFO] Skipping bill 1882902 - already processed (1125/2608)
2025-12-03 11:05:25,693 [INFO] Skipping bill 1761023 - already processed (1126/2608)
2025-12-03 11:05:25,693 [INFO] Skipping bill 1763282 - already processed (1127/2608)
2025-12-03 11:05:25,693 [INFO] Skipping bill 1756406 - already processed (1128/2608)
2025-12-03 11:05:25,693 [INFO] Skipping bill 1721336 - already processed (1129/2608)
2025-12-03 11:05:25,693 [INFO] Skipping bill 1865663 - already processed (1130/2608)
2025-12-03 11:05:25,693 [INFO] Skipping bill 1884682 - already processed (1131/2608)
2025-12-03 11:05:25,693 [INFO] Skipping bill 1879124 - already processed (1132/2608)
2025-12-03 11:05:25,693 [INFO] Skipping bill 1813023 - already processed (1133/2608)
2025-12-03 11:05:25,693 [INFO] Skipping bill 1780572 - already processed (1134/2608)
2025-12-03 11:05:25,693 [INFO] Skipping bill 1796023 - already processed (1135/2608)
2025-12-03 11:05:25,694 [INFO] Skipping bill 1796213 - already processed (1136/2608)
2025-12-03 11:05:25,694 [INFO] Skipping bill 1841005 - already processed (1137/2608)
2025-12-03 11:05:25,694 [INFO] Skipping bill 1861287 - already processed (1138/2608)
2025-12-03 11:05:25,694 [INFO] Skipping bill 1878752 - already processed (1139/2608)
2025-12-03 11:05:25,694 [INFO] Skipping bill 1813101 - already processed (1140/2608)
2025-12-03 11:05:25,694 [INFO] Skipping bill 1768635 - already processed (1141/2608)
2025-12-03 11:05:25,694 [INFO] Skipping bill 1767924 - already processed (1142/2608)
2025-12-03 11:05:25,694 [INFO] Skipping bill 1641754 - already processed (1143/2608)
2025-12-03 11:05:25,694 [INFO] Skipping bill 1882889 - already processed (1144/2608)
2025-12-03 11:05:25,694 [INFO] Skipping bill 1729291 - already processed (1145/2608)
2025-12-03 11:05:25,694 [INFO] Skipping bill 1773906 - already processed (1146/2608)
2025-12-03 11:05:25,694 [INFO] Skipping bill 1839957 - already processed (1147/2608)
2025-12-03 11:05:25,695 [INFO] Skipping bill 1843965 - already processed (1148/2608)
2025-12-03 11:05:25,695 [INFO] Skipping bill 1879710 - already processed (1149/2608)
2025-12-03 11:05:25,695 [INFO] Skipping bill 1763606 - already processed (1150/2608)
2025-12-03 11:05:25,695 [INFO] Skipping bill 1780432 - already processed (1151/2608)
2025-12-03 11:05:25,695 [INFO] Skipping bill 1812765 - already processed (1152/2608)
2025-12-03 11:05:25,695 [INFO] Skipping bill 1836858 - already processed (1153/2608)
2025-12-03 11:05:25,695 [INFO] Skipping bill 1864293 - already processed (1154/2608)
2025-12-03 11:05:25,695 [INFO] Skipping bill 1770114 - already processed (1155/2608)
2025-12-03 11:05:25,695 [INFO] Skipping bill 1733127 - already processed (1156/2608)
2025-12-03 11:05:25,695 [INFO] Skipping bill 1762026 - already processed (1157/2608)
2025-12-03 11:05:25,695 [INFO] Skipping bill 1829537 - already processed (1158/2608)
2025-12-03 11:05:25,695 [INFO] Skipping bill 1878142 - already processed (1159/2608)
2025-12-03 11:05:25,695 [INFO] Skipping bill 1880765 - already processed (1160/2608)
2025-12-03 11:05:25,695 [INFO] Skipping bill 1762041 - already processed (1161/2608)
2025-12-03 11:05:25,695 [INFO] Skipping bill 1646230 - already processed (1162/2608)
2025-12-03 11:05:25,696 [INFO] Skipping bill 1762213 - already processed (1163/2608)
2025-12-03 11:05:25,696 [INFO] Skipping bill 1779393 - already processed (1164/2608)
2025-12-03 11:05:25,696 [INFO] Skipping bill 1878544 - already processed (1165/2608)
2025-12-03 11:05:25,696 [INFO] Skipping bill 1780459 - already processed (1166/2608)
2025-12-03 11:05:25,696 [INFO] Skipping bill 1781963 - already processed (1167/2608)
2025-12-03 11:05:25,696 [INFO] Skipping bill 1758293 - already processed (1168/2608)
2025-12-03 11:05:25,696 [INFO] Skipping bill 1768495 - already processed (1169/2608)
2025-12-03 11:05:25,696 [INFO] Skipping bill 1773860 - already processed (1170/2608)
2025-12-03 11:05:25,696 [INFO] Skipping bill 1864226 - already processed (1171/2608) 2025-12-03 11:05:25,696 [INFO] Skipping bill 1878400 - already processed (1172/2608) 2025-12-03 11:05:25,696 [INFO] Skipping bill 1879652 - already processed (1173/2608) 2025-12-03 11:05:25,696 [INFO] Skipping bill 1865798 - already processed (1174/2608) 2025-12-03 11:05:25,696 [INFO] Skipping bill 1862795 - already processed (1175/2608) 2025-12-03 11:05:25,696 [INFO] Skipping bill 1710243 - already processed (1176/2608) 2025-12-03 11:05:25,696 [INFO] Skipping bill 1818495 - already processed (1177/2608) 2025-12-03 11:05:25,696 [INFO] Skipping bill 1775864 - already processed (1178/2608) 2025-12-03 11:05:25,696 [INFO] Skipping bill 1856196 - already processed (1179/2608) 2025-12-03 11:05:25,696 [INFO] Skipping bill 1791835 - already processed (1180/2608) 2025-12-03 11:05:25,696 [INFO] Skipping bill 1658709 - already processed (1181/2608) 2025-12-03 11:05:25,696 [INFO] Skipping bill 1695187 - already processed (1182/2608) 2025-12-03 11:05:25,696 [INFO] Processing 1183/2608: Bill ID 1818780 2025-12-03 11:05:26,210 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-03 11:05:26,210 [ERROR] Failed to generate report for bill 1818780: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 137401 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 137401 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:05:27,220 [INFO] Processing 1184/2608: Bill ID 1818766
2025-12-03 11:05:27,853 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:05:27,854 [ERROR] Failed to generate report for bill 1818766: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 137403 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:05:28,862 [INFO] Skipping bill 1752559 - already processed (1185/2608)
2025-12-03 11:05:28,862 [INFO] Skipping bill 1882942 - already processed (1186/2608)
2025-12-03 11:05:28,862 [INFO] Skipping bill 1766908 - already processed (1187/2608)
2025-12-03 11:05:28,862 [INFO] Skipping bill 1691064 - already processed (1188/2608)
2025-12-03 11:05:28,863 [INFO] Processing 1189/2608: Bill ID 1690030
2025-12-03 11:05:30,720 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:05:30,723 [ERROR] Failed to generate report for bill 1690030: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 566694 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:05:31,732 [INFO] Processing 1190/2608: Bill ID 1690727
2025-12-03 11:05:33,382 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:05:33,386 [ERROR] Failed to generate report for bill 1690727: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 566696 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:05:33,437 [INFO] Saved 2605 reports to data/bill_reports.json
2025-12-03 11:05:33,437 [INFO] Progress: 1190/2608 - Processed: 0, Skipped: 1139, Errors: 51
2025-12-03 11:05:34,438 [INFO] Processing 1191/2608: Bill ID 1875409
2025-12-03 11:05:38,196 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:05:38,199 [ERROR] Failed to generate report for bill 1875409: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1351641 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:05:39,204 [INFO] Processing 1192/2608: Bill ID 1835820
2025-12-03 11:05:42,830 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:05:42,832 [ERROR] Failed to generate report for bill 1835820: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1351620 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:05:43,836 [INFO] Processing 1193/2608: Bill ID 1818459
2025-12-03 11:05:46,490 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:05:46,491 [ERROR] Failed to generate report for bill 1818459: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1029309 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:05:47,496 [INFO] Skipping bill 2009915 - already processed (1194/2608)
2025-12-03 11:05:47,496 [INFO] Skipping bill 1917775 - already processed (1195/2608)
2025-12-03 11:05:47,496 [INFO] Skipping bill 1902981 - already processed (1196/2608)
2025-12-03 11:05:47,497 [INFO] Skipping bill 1908626 - already processed (1197/2608)
2025-12-03 11:05:47,497 [INFO] Skipping bill 1903647 - already processed (1198/2608)
2025-12-03 11:05:47,497 [INFO] Skipping bill 1993863 - already processed (1199/2608)
2025-12-03 11:05:47,497 [INFO] Skipping bill 2015656 - already processed (1200/2608)
2025-12-03 11:05:47,497 [INFO] Skipping bill 1909120 - already processed (1201/2608)
2025-12-03 11:05:47,497 [INFO] Skipping bill 2032707 - already processed (1202/2608)
2025-12-03 11:05:47,497 [INFO] Skipping bill 2030838 - already processed (1203/2608)
2025-12-03 11:05:47,497 [INFO] Skipping bill 2033110 - already processed (1204/2608)
2025-12-03 11:05:47,497 [INFO] Skipping bill 1992712 - already processed (1205/2608)
2025-12-03 11:05:47,497 [INFO] Skipping bill 2010112 - already processed (1206/2608)
2025-12-03 11:05:47,498 [INFO] Skipping bill 2035218 - already processed (1207/2608)
2025-12-03 11:05:47,498 [INFO] Skipping bill 1970759 - already processed (1208/2608)
2025-12-03 11:05:47,498 [INFO] Skipping bill 1917262 - already processed (1209/2608)
2025-12-03 11:05:47,498 [INFO] Skipping bill 2015645 - already processed (1210/2608)
2025-12-03 11:05:47,498 [INFO] Skipping bill 1941920 - already processed (1211/2608)
2025-12-03 11:05:47,498 [INFO] Skipping bill 2041695 - already processed (1212/2608)
2025-12-03 11:05:47,498 [INFO] Skipping bill 2038940 - already processed (1213/2608)
2025-12-03 11:05:47,498 [INFO] Skipping bill 2043998 - already processed (1214/2608)
2025-12-03 11:05:47,498 [INFO] Skipping bill 1903496 - already processed (1215/2608)
2025-12-03 11:05:47,498 [INFO] Skipping bill 1942114 - already processed (1216/2608)
2025-12-03 11:05:47,498 [INFO] Skipping bill 1948978 - already processed (1217/2608)
2025-12-03 11:05:47,499 [INFO] Skipping bill 2025948 - already processed (1218/2608)
2025-12-03 11:05:47,499 [INFO] Skipping bill 2030449 - already processed (1219/2608)
2025-12-03 11:05:47,499 [INFO] Skipping bill 2012463 - already processed (1220/2608)
2025-12-03 11:05:47,499 [INFO] Skipping bill 2036382 - already processed (1221/2608)
2025-12-03 11:05:47,499 [INFO] Skipping bill 1901571 - already processed (1222/2608)
2025-12-03 11:05:47,499 [INFO] Skipping bill 1902589 - already processed (1223/2608)
2025-12-03 11:05:47,499 [INFO] Skipping bill 2045075 - already processed (1224/2608)
2025-12-03 11:05:47,499 [INFO] Skipping bill 2042397 - already processed (1225/2608)
2025-12-03 11:05:47,499 [INFO] Skipping bill 2005892 - already processed (1226/2608)
2025-12-03 11:05:47,499 [INFO] Skipping bill 1995988 - already processed (1227/2608)
2025-12-03 11:05:47,499 [INFO] Skipping bill 1941987 - already processed (1228/2608)
2025-12-03 11:05:47,500 [INFO] Skipping bill 2051432 - already processed (1229/2608)
2025-12-03 11:05:47,500 [INFO] Skipping bill 2030765 - already processed (1230/2608)
2025-12-03 11:05:47,500 [INFO] Skipping bill 1900450 - already processed (1231/2608)
2025-12-03 11:05:47,500 [INFO] Skipping bill 2032658 - already processed (1232/2608)
2025-12-03 11:05:47,500 [INFO] Skipping bill 1934862 - already processed (1233/2608)
2025-12-03 11:05:47,500 [INFO] Skipping bill 1954914 - already processed (1234/2608)
2025-12-03 11:05:47,500 [INFO] Skipping bill 1908970 - already processed (1235/2608)
2025-12-03 11:05:47,500 [INFO] Skipping bill 2046810 - already processed (1236/2608)
2025-12-03 11:05:47,500 [INFO] Skipping bill 1911503 - already processed (1237/2608)
2025-12-03 11:05:47,500 [INFO] Skipping bill 1917449 - already processed (1238/2608)
2025-12-03 11:05:47,500 [INFO] Skipping bill 2012421 - already processed (1239/2608)
2025-12-03 11:05:47,500 [INFO] Skipping bill 2036409 - already processed (1240/2608)
2025-12-03 11:05:47,500 [INFO] Skipping bill 1930912 - already processed (1241/2608)
2025-12-03 11:05:47,501 [INFO] Skipping bill 2015571 - already processed (1242/2608)
2025-12-03 11:05:47,501 [INFO] Skipping bill 1991849 - already processed (1243/2608)
2025-12-03 11:05:47,501 [INFO] Skipping bill 1909237 - already processed (1244/2608)
2025-12-03 11:05:47,501 [INFO] Skipping bill 1907396 - already processed (1245/2608)
2025-12-03 11:05:47,501 [INFO] Skipping bill 2032681 - already processed (1246/2608)
2025-12-03 11:05:47,501 [INFO] Skipping bill 2031449 - already processed (1247/2608)
2025-12-03 11:05:47,501 [INFO] Skipping bill 2036417 - already processed (1248/2608)
2025-12-03 11:05:47,501 [INFO] Skipping bill 2010242 - already processed (1249/2608)
2025-12-03 11:05:47,501 [INFO] Skipping bill 1902485 - already processed (1250/2608)
2025-12-03 11:05:47,501 [INFO] Skipping bill 2044029 - already processed (1251/2608)
2025-12-03 11:05:47,501 [INFO] Skipping bill 2039479 - already processed (1252/2608)
2025-12-03 11:05:47,501 [INFO] Skipping bill 1993679 - already processed (1253/2608)
2025-12-03 11:05:47,501 [INFO] Skipping bill 1927014 - already processed (1254/2608)
2025-12-03 11:05:47,501 [INFO] Skipping bill 2053531 - already processed (1255/2608)
2025-12-03 11:05:47,501 [INFO] Skipping bill 2012390 - already processed (1256/2608)
2025-12-03 11:05:47,501 [INFO] Skipping bill 2051443 - already processed (1257/2608)
2025-12-03 11:05:47,501 [INFO] Skipping bill 1967476 - already processed (1258/2608)
2025-12-03 11:05:47,501 [INFO] Skipping bill 2039584 - already processed (1259/2608)
2025-12-03 11:05:47,501 [INFO] Skipping bill 1941925 - already processed (1260/2608)
2025-12-03 11:05:47,501 [INFO] Skipping bill 2039602 - already processed (1261/2608)
2025-12-03 11:05:47,501 [INFO] Skipping bill 2021091 - already processed (1262/2608)
2025-12-03 11:05:47,501 [INFO] Skipping bill 2053730 - already processed (1263/2608)
2025-12-03 11:05:47,501 [INFO] Skipping bill 1993748 - already processed (1264/2608)
2025-12-03 11:05:47,501 [INFO] Skipping bill 1907408 - already processed (1265/2608)
2025-12-03 11:05:47,501 [INFO] Skipping bill 2043429 - already processed (1266/2608)
2025-12-03 11:05:47,501 [INFO] Skipping bill 2036445 - already processed (1267/2608)
2025-12-03 11:05:47,501 [INFO] Skipping bill 1948575 - already processed (1268/2608)
2025-12-03 11:05:47,501 [INFO] Skipping bill 2020539 - already processed (1269/2608)
2025-12-03 11:05:47,501 [INFO] Skipping bill 1941981 - already processed (1270/2608)
2025-12-03 11:05:47,501 [INFO] Skipping bill 1985057 - already processed (1271/2608)
2025-12-03 11:05:47,501 [INFO] Skipping bill 2012554 - already processed (1272/2608)
2025-12-03 11:05:47,501 [INFO] Skipping bill 1900469 - already processed (1273/2608)
2025-12-03 11:05:47,501 [INFO] Skipping bill 1949091 - already processed (1274/2608)
2025-12-03 11:05:47,502 [INFO] Skipping bill 1903302 - already processed (1275/2608)
2025-12-03 11:05:47,502 [INFO] Skipping bill 2031820 - already processed (1276/2608)
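The long runs of "Skipping bill … already processed" entries above come from the script's resume logic: previously generated reports are loaded from `data/bill_reports.json` up front, and any bill whose ID is already present is counted as skipped instead of being re-sent to the API. A minimal sketch of that loop, assuming a simple schema: only the function name `create_reports_with_resume` and the file path come from the traceback in this log; the `bill_id` field, the dict-of-reports layout, and the `save_every` cadence are assumptions.

```python
import json
import logging
from pathlib import Path

logging.basicConfig(level=logging.INFO, format="%(asctime)s [%(levelname)s] %(message)s")
log = logging.getLogger(__name__)

REPORTS_PATH = Path("data/bill_reports.json")

def create_reports_with_resume(bills, make_report, save_every=10):
    """Generate one report per bill, skipping IDs already saved on disk."""
    reports = json.loads(REPORTS_PATH.read_text()) if REPORTS_PATH.exists() else {}
    log.info("Loaded %d existing reports from %s", len(reports), REPORTS_PATH)
    processed = skipped = errors = 0
    total = len(bills)
    for i, bill in enumerate(bills, start=1):
        bill_id = str(bill["bill_id"])          # "bill_id" key is an assumed schema
        if bill_id in reports:
            skipped += 1
            log.info("Skipping bill %s - already processed (%d/%d)", bill_id, i, total)
            continue
        try:
            reports[bill_id] = make_report(bill)
            processed += 1
        except Exception as exc:                # keep going; one bad bill must not kill the run
            errors += 1
            log.error("Failed to generate report for bill %s: %s", bill_id, exc)
        if i % save_every == 0:                 # periodic checkpoint so a crash loses little work
            REPORTS_PATH.write_text(json.dumps(reports))
    REPORTS_PATH.write_text(json.dumps(reports))
    log.info("Saved %d reports to %s", len(reports), REPORTS_PATH)
    return processed, skipped, errors
```

Because every completed report is persisted, rerunning the script after a crash (or after fixing the context-length failures) only spends API calls on the bills that are still missing.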
2025-12-03 11:05:47,502 [INFO] Skipping bill 1986509 - already processed (1277/2608)
2025-12-03 11:05:47,502 [INFO] Skipping bill 1992147 - already processed (1278/2608)
2025-12-03 11:05:47,502 [INFO] Skipping bill 1908565 - already processed (1279/2608)
2025-12-03 11:05:47,502 [INFO] Skipping bill 2018195 - already processed (1280/2608)
2025-12-03 11:05:47,502 [INFO] Skipping bill 1948655 - already processed (1281/2608)
2025-12-03 11:05:47,502 [INFO] Skipping bill 1926957 - already processed (1282/2608)
2025-12-03 11:05:47,502 [INFO] Skipping bill 2007650 - already processed (1283/2608)
2025-12-03 11:05:47,502 [INFO] Skipping bill 1938062 - already processed (1284/2608)
2025-12-03 11:05:47,502 [INFO] Skipping bill 1909167 - already processed (1285/2608)
2025-12-03 11:05:47,502 [INFO] Skipping bill 1910683 - already processed (1286/2608)
2025-12-03 11:05:47,502 [INFO] Skipping bill 1918276 - already processed (1287/2608)
2025-12-03 11:05:47,502 [INFO] Skipping bill 1942634 - already processed (1288/2608)
2025-12-03 11:05:47,502 [INFO] Skipping bill 1947885 - already processed (1289/2608)
2025-12-03 11:05:47,502 [INFO] Skipping bill 2034828 - already processed (1290/2608)
2025-12-03 11:05:47,502 [INFO] Skipping bill 2035534 - already processed (1291/2608)
2025-12-03 11:05:47,502 [INFO] Skipping bill 1937370 - already processed (1292/2608)
2025-12-03 11:05:47,502 [INFO] Skipping bill 2036328 - already processed (1293/2608)
2025-12-03 11:05:47,502 [INFO] Skipping bill 1940048 - already processed (1294/2608)
2025-12-03 11:05:47,502 [INFO] Skipping bill 1990212 - already processed (1295/2608)
2025-12-03 11:05:47,502 [INFO] Skipping bill 1995017 - already processed (1296/2608)
2025-12-03 11:05:47,502 [INFO] Skipping bill 1937257 - already processed (1297/2608)
2025-12-03 11:05:47,502 [INFO] Skipping bill 1900853 - already processed (1298/2608)
2025-12-03 11:05:47,502 [INFO] Skipping bill 1947971 - already processed (1299/2608)
2025-12-03 11:05:47,502 [INFO] Skipping bill 1920984 - already processed (1300/2608)
2025-12-03 11:05:47,502 [INFO] Skipping bill 1902725 - already processed (1301/2608)
2025-12-03 11:05:47,502 [INFO] Skipping bill 1964016 - already processed (1302/2608)
2025-12-03 11:05:47,502 [INFO] Processing 1303/2608: Bill ID 1934576
2025-12-03 11:05:47,963 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:05:47,964 [ERROR] Failed to generate report for bill 1934576: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 132147 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 132147 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:05:48,972 [INFO] Skipping bill 1898800 - already processed (1304/2608)
2025-12-03 11:05:48,973 [INFO] Skipping bill 1971511 - already processed (1305/2608)
2025-12-03 11:05:48,973 [INFO] Processing 1306/2608: Bill ID 1935197
2025-12-03 11:05:49,458 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:05:49,459 [ERROR] Failed to generate report for bill 1935197: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 142845 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:05:50,468 [INFO] Processing 1307/2608: Bill ID 1935040
2025-12-03 11:05:51,098 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:05:51,099 [ERROR] Failed to generate report for bill 1935040: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 142844 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:05:52,109 [INFO] Skipping bill 1948521 - already processed (1308/2608)
2025-12-03 11:05:52,109 [INFO] Skipping bill 1977652 - already processed (1309/2608)
2025-12-03 11:05:52,109 [INFO] Processing 1310/2608: Bill ID 1934805
2025-12-03 11:05:52,632 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:05:52,633 [ERROR] Failed to generate report for bill 1934805: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 132143 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:05:52,678 [INFO] Saved 2605 reports to data/bill_reports.json
2025-12-03 11:05:52,679 [INFO] Progress: 1310/2608 - Processed: 0, Skipped: 1252, Errors: 58
2025-12-03 11:05:53,687 [INFO] Skipping bill 1934970 - already processed (1311/2608)
2025-12-03 11:05:53,689 [INFO] Skipping bill 1934701 - already processed (1312/2608)
2025-12-03 11:05:53,689 [INFO] Skipping bill 1942260 - already processed (1313/2608)
2025-12-03 11:05:53,689 [INFO] Skipping bill 1917391 - already processed (1314/2608)
2025-12-03 11:05:53,689 [INFO] Processing 1315/2608: Bill ID 1935190
2025-12-03 11:05:56,515 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:05:56,516 [ERROR] Failed to generate report for bill 1935190: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1143342 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:05:57,527 [INFO] Processing 1316/2608: Bill ID 1934636
2025-12-03 11:05:59,187 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:05:59,189 [ERROR] Failed to generate report for bill 1934636: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 671567 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 671567 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-03 11:06:00,196 [INFO] Processing 1317/2608: Bill ID 1935223 2025-12-03 11:06:01,831 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-03 11:06:01,837 [ERROR] Failed to generate report for bill 1935223: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 671570 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:06:02,846 [INFO] Processing 1318/2608: Bill ID 1934824
2025-12-03 11:06:06,346 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:06:06,348 [ERROR] Failed to generate report for bill 1934824: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1143344 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:06:07,357 [INFO] Processing 1319/2608: Bill ID 2052596
2025-12-03 11:06:11,064 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:06:11,065 [ERROR] Failed to generate report for bill 2052596: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1446920 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:06:12,074 [INFO] Skipping bill 1879932 - already processed (1320/2608)
2025-12-03 11:06:12,075 [INFO] Skipping bill 1875738 - already processed (1321/2608)
2025-12-03 11:06:12,075 [INFO] Skipping bill 1875815 - already processed (1322/2608)
2025-12-03 11:06:12,075 [INFO] Skipping bill 1701253 - already processed (1323/2608)
2025-12-03 11:06:12,075 [INFO] Skipping bill 1875615 - already processed (1324/2608)
2025-12-03 11:06:12,075 [INFO] Skipping bill 1754315 - already processed (1325/2608)
2025-12-03 11:06:12,076 [INFO] Skipping bill 1751005 - already processed (1326/2608)
2025-12-03 11:06:12,076 [INFO] Skipping bill 1875642 - already processed (1327/2608)
2025-12-03 11:06:12,076 [INFO] Skipping bill 1753811 - already processed (1328/2608)
2025-12-03 11:06:12,076 [INFO] Skipping bill 1752050 - already processed (1329/2608)
2025-12-03 11:06:12,076 [INFO] Skipping bill 1704591 - already processed (1330/2608)
2025-12-03 11:06:12,076 [INFO] Skipping bill 1748551 - already processed (1331/2608)
2025-12-03 11:06:12,076 [INFO] Skipping bill
1725321 - already processed (1332/2608)
2025-12-03 11:06:12,077 [INFO] Skipping bill 1725195 - already processed (1333/2608)
2025-12-03 11:06:12,077 [INFO] Skipping bill 2014434 - already processed (1334/2608)
2025-12-03 11:06:12,077 [INFO] Skipping bill 2014277 - already processed (1335/2608)
2025-12-03 11:06:12,077 [INFO] Skipping bill 2000124 - already processed (1336/2608)
2025-12-03 11:06:12,077 [INFO] Skipping bill 2022736 - already processed (1337/2608)
2025-12-03 11:06:12,077 [INFO] Skipping bill 2022881 - already processed (1338/2608)
2025-12-03 11:06:12,077 [INFO] Skipping bill 2014322 - already processed (1339/2608)
2025-12-03 11:06:12,077 [INFO] Skipping bill 2014068 - already processed (1340/2608)
2025-12-03 11:06:12,077 [INFO] Skipping bill 2005730 - already processed (1341/2608)
2025-12-03 11:06:12,077 [INFO] Skipping bill 2014594 - already processed (1342/2608)
2025-12-03 11:06:12,077 [INFO] Skipping bill 2013131 - already processed (1343/2608)
2025-12-03 11:06:12,077 [INFO] Skipping bill 2022220 - already processed (1344/2608)
2025-12-03 11:06:12,077 [INFO] Skipping bill 2008986 - already processed (1345/2608)
2025-12-03 11:06:12,078 [INFO] Skipping bill 2013796 - already processed (1346/2608)
2025-12-03 11:06:12,078 [INFO] Skipping bill 2014312 - already processed (1347/2608)
2025-12-03 11:06:12,078 [INFO] Skipping bill 2013903 - already processed (1348/2608)
2025-12-03 11:06:12,078 [INFO] Skipping bill 2013936 - already processed (1349/2608)
2025-12-03 11:06:12,078 [INFO] Skipping bill 2013868 - already processed (1350/2608)
2025-12-03 11:06:12,079 [INFO] Skipping bill 2014024 - already processed (1351/2608)
2025-12-03 11:06:12,079 [INFO] Skipping bill 2014377 - already processed (1352/2608)
2025-12-03 11:06:12,079 [INFO] Skipping bill 2017695 - already processed (1353/2608)
2025-12-03 11:06:12,079 [INFO] Skipping bill 2018632 - already processed (1354/2608)
2025-12-03 11:06:12,079 [INFO] Skipping bill 2022666 - already processed (1355/2608)
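Every failure in this run is the same `context_length_exceeded` 400: per the tracebacks, `create_detailed_report` passes the entire serialized bill to `chain.invoke({"bill_json": bill_json})`, and several bills exceed the model's 128000-token window by an order of magnitude (up to 2157293 tokens). A minimal sketch of a pre-flight clamp follows, using a rough 4-characters-per-token heuristic so it needs no tokenizer; `truncate_bill_json`, `CHARS_PER_TOKEN`, and `PROMPT_BUDGET_TOKENS` are hypothetical names, not part of `generate_reports.py`.

```python
# Sketch: clamp the serialized bill before it reaches the prompt, assuming
# roughly 4 characters per token for English/JSON text. An exact count would
# use the model's tokenizer (e.g. tiktoken) instead of this heuristic.

CHARS_PER_TOKEN = 4              # coarse heuristic, not an exact conversion
PROMPT_BUDGET_TOKENS = 100_000   # headroom under the 128k limit in the errors


def truncate_bill_json(bill_json: str,
                       budget_tokens: int = PROMPT_BUDGET_TOKENS) -> str:
    """Clip the serialized bill to roughly budget_tokens before prompting."""
    max_chars = budget_tokens * CHARS_PER_TOKEN
    if len(bill_json) <= max_chars:
        return bill_json
    return bill_json[:max_chars]
```

Under this assumption the call in `create_detailed_report` would become `chain.invoke({"bill_json": truncate_bill_json(bill_json)})`; a more careful fix would summarize or chunk the oversized bill text rather than hard-truncate it.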
2025-12-03 11:06:12,079 [INFO] Skipping bill 2022828 - already processed (1356/2608)
2025-12-03 11:06:12,079 [INFO] Skipping bill 2015551 - already processed (1357/2608)
2025-12-03 11:06:12,079 [INFO] Skipping bill 2009244 - already processed (1358/2608)
2025-12-03 11:06:12,079 [INFO] Skipping bill 1969116 - already processed (1359/2608)
2025-12-03 11:06:12,079 [INFO] Skipping bill 2009761 - already processed (1360/2608)
2025-12-03 11:06:12,079 [INFO] Processing 1361/2608: Bill ID 2012916
2025-12-03 11:06:12,498 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:06:12,499 [ERROR] Failed to generate report for bill 2012916: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 131894 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:06:13,508 [INFO] Skipping bill 1996111 - already processed (1362/2608)
2025-12-03 11:06:13,509 [INFO] Skipping bill 1656324 - already processed (1363/2608)
2025-12-03 11:06:13,509 [INFO] Skipping bill 1640560 - already processed (1364/2608)
2025-12-03 11:06:13,509 [INFO] Skipping bill 1644790 - already processed (1365/2608)
2025-12-03 11:06:13,509 [INFO] Skipping bill 1908973 - already processed (1366/2608)
2025-12-03 11:06:13,509 [INFO] Skipping bill 1930471 - already processed (1367/2608)
2025-12-03 11:06:13,509 [INFO] Skipping bill 1916131 - already processed (1368/2608)
2025-12-03 11:06:13,509 [INFO] Skipping bill 1916897 - already processed (1369/2608)
2025-12-03 11:06:13,509 [INFO] Skipping bill 1930219 - already processed (1370/2608)
2025-12-03 11:06:13,509 [INFO] Skipping bill 1916725 - already processed (1371/2608)
2025-12-03 11:06:13,510 [INFO] Skipping bill 1916697 - already processed (1372/2608)
2025-12-03 11:06:13,510 [INFO] Skipping bill 1921549
- already processed (1373/2608)
2025-12-03 11:06:13,510 [INFO] Skipping bill 1916032 - already processed (1374/2608)
2025-12-03 11:06:13,510 [INFO] Skipping bill 1915939 - already processed (1375/2608)
2025-12-03 11:06:13,510 [INFO] Skipping bill 1899315 - already processed (1376/2608)
2025-12-03 11:06:13,510 [INFO] Skipping bill 1930747 - already processed (1377/2608)
2025-12-03 11:06:13,510 [INFO] Skipping bill 1898936 - already processed (1378/2608)
2025-12-03 11:06:13,510 [INFO] Skipping bill 1828241 - already processed (1379/2608)
2025-12-03 11:06:13,510 [INFO] Skipping bill 1784887 - already processed (1380/2608)
2025-12-03 11:06:13,510 [INFO] Processing 1381/2608: Bill ID 1710984
2025-12-03 11:06:18,746 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:06:18,748 [ERROR] Failed to generate report for bill 1710984: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 2157293 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:06:19,754 [INFO] Processing 1382/2608: Bill ID 1710996
2025-12-03 11:06:22,619 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:06:22,620 [ERROR] Failed to generate report for bill 1710996: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1053567 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1053567 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-03 11:06:23,632 [INFO] Processing 1383/2608: Bill ID 1659671 2025-12-03 11:06:26,630 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-03 11:06:26,633 [ERROR] Failed to generate report for bill 1659671: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1053812 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1053812 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-03 11:06:27,644 [INFO] Skipping bill 2046561 - already processed (1384/2608) 2025-12-03 11:06:27,644 [INFO] Skipping bill 2018937 - already processed (1385/2608) 2025-12-03 11:06:27,644 [INFO] Skipping bill 2046538 - already processed (1386/2608) 2025-12-03 11:06:27,644 [INFO] Skipping bill 2038933 - already processed (1387/2608) 2025-12-03 11:06:27,644 [INFO] Skipping bill 2019064 - already processed (1388/2608) 2025-12-03 11:06:27,645 [INFO] Skipping bill 2051853 - already processed (1389/2608) 2025-12-03 11:06:27,645 [INFO] Skipping bill 1973495 - already processed (1390/2608) 2025-12-03 11:06:27,645 [INFO] Skipping bill 2044900 - already processed (1391/2608) 2025-12-03 11:06:27,645 [INFO] Skipping bill 2036911 - already processed (1392/2608) 2025-12-03 11:06:27,645 [INFO] Skipping bill 1956347 - already processed (1393/2608) 2025-12-03 11:06:27,645 [INFO] Skipping bill 2015680 - already processed (1394/2608) 2025-12-03 11:06:27,646 [INFO] Skipping bill 2035837 - already processed (1395/2608) 2025-12-03 11:06:27,646 [INFO] Skipping bill 
2052361 - already processed (1396/2608) 2025-12-03 11:06:27,646 [INFO] Skipping bill 2053186 - already processed (1397/2608) 2025-12-03 11:06:27,646 [INFO] Skipping bill 1956501 - already processed (1398/2608) 2025-12-03 11:06:27,646 [INFO] Processing 1399/2608: Bill ID 1966320 2025-12-03 11:06:33,083 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-03 11:06:33,087 [ERROR] Failed to generate report for bill 1966320: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1949605 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... 
**kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... **kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return 
self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1949605 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-03 11:06:34,095 [INFO] Processing 1400/2608: Bill ID 2044413 2025-12-03 11:06:35,027 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-03 11:06:35,029 [ERROR] Failed to generate report for bill 2044413: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 281182 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 281182 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-03 11:06:35,082 [INFO] Saved 2605 reports to data/bill_reports.json 2025-12-03 11:06:35,082 [INFO] Progress: 1400/2608 - Processed: 0, Skipped: 1331, Errors: 69 2025-12-03 11:06:36,087 [INFO] Processing 1401/2608: Bill ID 2031116 2025-12-03 11:06:37,000 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-03 11:06:37,004 [ERROR] Failed to generate report for bill 2031116: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 344621 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 344621 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-03 11:06:38,012 [INFO] Skipping bill 1820171 - already processed (1402/2608) 2025-12-03 11:06:38,013 [INFO] Skipping bill 1820684 - already processed (1403/2608) 2025-12-03 11:06:38,013 [INFO] Skipping bill 1820075 - already processed (1404/2608) 2025-12-03 11:06:38,013 [INFO] Skipping bill 1820478 - already processed (1405/2608) 2025-12-03 11:06:38,013 [INFO] Skipping bill 1820697 - already processed (1406/2608) 2025-12-03 11:06:38,013 [INFO] Skipping bill 1821348 - already processed (1407/2608) 2025-12-03 11:06:38,014 [INFO] Skipping bill 1819421 - already processed (1408/2608) 2025-12-03 11:06:38,014 [INFO] Skipping bill 1820795 - already processed (1409/2608) 2025-12-03 11:06:38,014 [INFO] Skipping bill 1814318 - already processed (1410/2608) 2025-12-03 11:06:38,014 [INFO] Skipping bill 1814441 - already processed (1411/2608) 2025-12-03 11:06:38,014 [INFO] Skipping bill 1791289 - already processed (1412/2608) 2025-12-03 11:06:38,014 [INFO] Skipping bill 1789468 - already processed (1413/2608) 2025-12-03 11:06:38,014 [INFO] Skipping bill 
1924199 - already processed (1414/2608) 2025-12-03 11:06:38,015 [INFO] Skipping bill 1920208 - already processed (1415/2608) 2025-12-03 11:06:38,015 [INFO] Skipping bill 1920320 - already processed (1416/2608) 2025-12-03 11:06:38,015 [INFO] Skipping bill 1923586 - already processed (1417/2608) 2025-12-03 11:06:38,015 [INFO] Skipping bill 1918327 - already processed (1418/2608) 2025-12-03 11:06:38,015 [INFO] Skipping bill 1922702 - already processed (1419/2608) 2025-12-03 11:06:38,015 [INFO] Skipping bill 1923122 - already processed (1420/2608) 2025-12-03 11:06:38,015 [INFO] Skipping bill 1924269 - already processed (1421/2608) 2025-12-03 11:06:38,015 [INFO] Skipping bill 1925220 - already processed (1422/2608) 2025-12-03 11:06:38,016 [INFO] Skipping bill 1924640 - already processed (1423/2608) 2025-12-03 11:06:38,016 [INFO] Skipping bill 1924912 - already processed (1424/2608) 2025-12-03 11:06:38,016 [INFO] Skipping bill 1900252 - already processed (1425/2608) 2025-12-03 11:06:38,016 [INFO] Skipping bill 2018241 - already processed (1426/2608) 2025-12-03 11:06:38,016 [INFO] Skipping bill 1920876 - already processed (1427/2608) 2025-12-03 11:06:38,016 [INFO] Skipping bill 1920720 - already processed (1428/2608) 2025-12-03 11:06:38,016 [INFO] Skipping bill 1925546 - already processed (1429/2608) 2025-12-03 11:06:38,016 [INFO] Skipping bill 1903378 - already processed (1430/2608) 2025-12-03 11:06:38,016 [INFO] Skipping bill 1921990 - already processed (1431/2608) 2025-12-03 11:06:38,016 [INFO] Skipping bill 1922805 - already processed (1432/2608) 2025-12-03 11:06:38,016 [INFO] Skipping bill 1922842 - already processed (1433/2608) 2025-12-03 11:06:38,017 [INFO] Skipping bill 1836006 - already processed (1434/2608) 2025-12-03 11:06:38,017 [INFO] Skipping bill 1836109 - already processed (1435/2608) 2025-12-03 11:06:38,017 [INFO] Skipping bill 1843504 - already processed (1436/2608) 2025-12-03 11:06:38,017 [INFO] Skipping bill 1973003 - already processed (1437/2608) 
2025-12-03 11:06:38,017 [INFO] Skipping bill 2009609 - already processed (1438/2608) 2025-12-03 11:06:38,017 [INFO] Skipping bill 1986214 - already processed (1439/2608) 2025-12-03 11:06:38,017 [INFO] Skipping bill 1912749 - already processed (1440/2608) 2025-12-03 11:06:38,017 [INFO] Skipping bill 1914095 - already processed (1441/2608) 2025-12-03 11:06:38,017 [INFO] Skipping bill 1914598 - already processed (1442/2608) 2025-12-03 11:06:38,017 [INFO] Skipping bill 1913104 - already processed (1443/2608) 2025-12-03 11:06:38,018 [INFO] Skipping bill 1914569 - already processed (1444/2608) 2025-12-03 11:06:38,018 [INFO] Skipping bill 1930373 - already processed (1445/2608) 2025-12-03 11:06:38,018 [INFO] Skipping bill 1982090 - already processed (1446/2608) 2025-12-03 11:06:38,018 [INFO] Skipping bill 1914274 - already processed (1447/2608) 2025-12-03 11:06:38,018 [INFO] Skipping bill 1982120 - already processed (1448/2608) 2025-12-03 11:06:38,018 [INFO] Skipping bill 1773806 - already processed (1449/2608) 2025-12-03 11:06:38,018 [INFO] Skipping bill 1880673 - already processed (1450/2608) 2025-12-03 11:06:38,018 [INFO] Skipping bill 1724997 - already processed (1451/2608) 2025-12-03 11:06:38,018 [INFO] Skipping bill 1775230 - already processed (1452/2608) 2025-12-03 11:06:38,018 [INFO] Skipping bill 1889846 - already processed (1453/2608) 2025-12-03 11:06:38,018 [INFO] Skipping bill 1773451 - already processed (1454/2608) 2025-12-03 11:06:38,018 [INFO] Skipping bill 1759469 - already processed (1455/2608) 2025-12-03 11:06:38,018 [INFO] Skipping bill 1777407 - already processed (1456/2608) 2025-12-03 11:06:38,018 [INFO] Skipping bill 1880554 - already processed (1457/2608) 2025-12-03 11:06:38,018 [INFO] Skipping bill 1854268 - already processed (1458/2608) 2025-12-03 11:06:38,018 [INFO] Skipping bill 1771135 - already processed (1459/2608) 2025-12-03 11:06:38,018 [INFO] Skipping bill 1830478 - already processed (1460/2608) 2025-12-03 11:06:38,018 [INFO] Skipping bill 
1780085 - already processed (1461/2608) 2025-12-03 11:06:38,018 [INFO] Skipping bill 1858003 - already processed (1462/2608) 2025-12-03 11:06:38,019 [INFO] Skipping bill 1880735 - already processed (1463/2608) 2025-12-03 11:06:38,019 [INFO] Skipping bill 1882950 - already processed (1464/2608) 2025-12-03 11:06:38,019 [INFO] Skipping bill 1878925 - already processed (1465/2608) 2025-12-03 11:06:38,019 [INFO] Skipping bill 1878252 - already processed (1466/2608) 2025-12-03 11:06:38,019 [INFO] Skipping bill 1884263 - already processed (1467/2608) 2025-12-03 11:06:38,019 [INFO] Skipping bill 1873862 - already processed (1468/2608) 2025-12-03 11:06:38,019 [INFO] Skipping bill 1882265 - already processed (1469/2608) 2025-12-03 11:06:38,019 [INFO] Skipping bill 1771247 - already processed (1470/2608) 2025-12-03 11:06:38,019 [INFO] Skipping bill 1836612 - already processed (1471/2608) 2025-12-03 11:06:38,019 [INFO] Skipping bill 1820748 - already processed (1472/2608) 2025-12-03 11:06:38,019 [INFO] Skipping bill 1886418 - already processed (1473/2608) 2025-12-03 11:06:38,019 [INFO] Skipping bill 1769931 - already processed (1474/2608) 2025-12-03 11:06:38,019 [INFO] Skipping bill 1740020 - already processed (1475/2608) 2025-12-03 11:06:38,019 [INFO] Skipping bill 1878961 - already processed (1476/2608) 2025-12-03 11:06:38,019 [INFO] Skipping bill 1768592 - already processed (1477/2608) 2025-12-03 11:06:38,019 [INFO] Skipping bill 2045757 - already processed (1478/2608) 2025-12-03 11:06:38,019 [INFO] Skipping bill 2030536 - already processed (1479/2608) 2025-12-03 11:06:38,019 [INFO] Skipping bill 2047301 - already processed (1480/2608) 2025-12-03 11:06:38,019 [INFO] Skipping bill 2039357 - already processed (1481/2608) 2025-12-03 11:06:38,019 [INFO] Skipping bill 2034685 - already processed (1482/2608) 2025-12-03 11:06:38,019 [INFO] Skipping bill 2037642 - already processed (1483/2608) 2025-12-03 11:06:38,019 [INFO] Skipping bill 2022168 - already processed (1484/2608) 
2025-12-03 11:06:38,020 [INFO] Skipping bill 2052644 - already processed (1485/2608) 2025-12-03 11:06:38,020 [INFO] Skipping bill 2051282 - already processed (1486/2608) 2025-12-03 11:06:38,020 [INFO] Skipping bill 1937863 - already processed (1487/2608) 2025-12-03 11:06:38,020 [INFO] Skipping bill 2043639 - already processed (1488/2608) 2025-12-03 11:06:38,020 [INFO] Skipping bill 2012593 - already processed (1489/2608) 2025-12-03 11:06:38,020 [INFO] Skipping bill 1991206 - already processed (1490/2608) 2025-12-03 11:06:38,020 [INFO] Skipping bill 1947924 - already processed (1491/2608) 2025-12-03 11:06:38,020 [INFO] Skipping bill 2012408 - already processed (1492/2608) 2025-12-03 11:06:38,020 [INFO] Skipping bill 2021116 - already processed (1493/2608) 2025-12-03 11:06:38,020 [INFO] Skipping bill 1973751 - already processed (1494/2608) 2025-12-03 11:06:38,020 [INFO] Skipping bill 2045246 - already processed (1495/2608) 2025-12-03 11:06:38,020 [INFO] Skipping bill 1910852 - already processed (1496/2608) 2025-12-03 11:06:38,020 [INFO] Skipping bill 1956391 - already processed (1497/2608) 2025-12-03 11:06:38,020 [INFO] Skipping bill 2023404 - already processed (1498/2608) 2025-12-03 11:06:38,020 [INFO] Skipping bill 2035307 - already processed (1499/2608) 2025-12-03 11:06:38,020 [INFO] Skipping bill 1944456 - already processed (1500/2608) 2025-12-03 11:06:38,020 [INFO] Skipping bill 2041064 - already processed (1501/2608) 2025-12-03 11:06:38,020 [INFO] Skipping bill 2039278 - already processed (1502/2608) 2025-12-03 11:06:38,020 [INFO] Skipping bill 2041823 - already processed (1503/2608) 2025-12-03 11:06:38,020 [INFO] Skipping bill 1946034 - already processed (1504/2608) 2025-12-03 11:06:38,020 [INFO] Skipping bill 2038442 - already processed (1505/2608) 2025-12-03 11:06:38,020 [INFO] Skipping bill 1905925 - already processed (1506/2608) 2025-12-03 11:06:38,020 [INFO] Processing 1507/2608: Bill ID 2041076 2025-12-03 11:06:38,508 [INFO] HTTP Request: POST 
https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:06:38,509 [ERROR] Failed to generate report for bill 2041076: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 136745 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
           ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 136745 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:06:39,517 [INFO] Processing 1508/2608: Bill ID 2037948
2025-12-03 11:06:40,046 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:06:40,048 [ERROR] Failed to generate report for bill 2037948: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 136856 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
           ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 136856 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:06:41,057 [INFO] Skipping bill 1757100 - already processed (1509/2608)
2025-12-03 11:06:41,057 [INFO] Skipping bill 1766918 - already processed (1510/2608)
2025-12-03 11:06:41,058 [INFO] Skipping bill 1691606 - already processed (1511/2608)
2025-12-03 11:06:41,058 [INFO] Skipping bill 1757087 - already processed (1512/2608)
2025-12-03 11:06:41,058 [INFO] Skipping bill 1691984 - already processed (1513/2608)
2025-12-03 11:06:41,058 [INFO] Skipping bill 1724146 - already processed (1514/2608)
2025-12-03 11:06:41,058 [INFO] Skipping bill 1811367 - already processed (1515/2608)
2025-12-03 11:06:41,058 [INFO] Skipping bill 1864559 - already processed (1516/2608)
2025-12-03 11:06:41,059 [INFO] Skipping bill 1833383 - already processed (1517/2608)
2025-12-03 11:06:41,059 [INFO] Skipping bill 1839979 - already processed (1518/2608)
2025-12-03 11:06:41,059 [INFO] Skipping bill 1863636 - already processed (1519/2608)
2025-12-03 11:06:41,059 [INFO] Skipping bill 1866932 - already processed (1520/2608)
2025-12-03 11:06:41,059 [INFO] Skipping bill
1829566 - already processed (1521/2608)
2025-12-03 11:06:41,060 [INFO] Skipping bill 1858179 - already processed (1522/2608)
2025-12-03 11:06:41,060 [INFO] Skipping bill 1857154 - already processed (1523/2608)
2025-12-03 11:06:41,060 [INFO] Skipping bill 1866872 - already processed (1524/2608)
2025-12-03 11:06:41,060 [INFO] Skipping bill 1844272 - already processed (1525/2608)
2025-12-03 11:06:41,060 [INFO] Skipping bill 1875576 - already processed (1526/2608)
2025-12-03 11:06:41,060 [INFO] Skipping bill 1875933 - already processed (1527/2608)
2025-12-03 11:06:41,060 [INFO] Skipping bill 1844730 - already processed (1528/2608)
2025-12-03 11:06:41,060 [INFO] Skipping bill 1858971 - already processed (1529/2608)
2025-12-03 11:06:41,060 [INFO] Skipping bill 1870027 - already processed (1530/2608)
2025-12-03 11:06:41,060 [INFO] Skipping bill 1994761 - already processed (1531/2608)
2025-12-03 11:06:41,060 [INFO] Skipping bill 1935080 - already processed (1532/2608)
2025-12-03 11:06:41,060 [INFO] Skipping bill 1945535 - already processed (1533/2608)
2025-12-03 11:06:41,060 [INFO] Skipping bill 1979504 - already processed (1534/2608)
2025-12-03 11:06:41,061 [INFO] Skipping bill 1937835 - already processed (1535/2608)
2025-12-03 11:06:41,061 [INFO] Skipping bill 1918971 - already processed (1536/2608)
2025-12-03 11:06:41,061 [INFO] Skipping bill 1986390 - already processed (1537/2608)
2025-12-03 11:06:41,061 [INFO] Skipping bill 1945988 - already processed (1538/2608)
2025-12-03 11:06:41,061 [INFO] Skipping bill 1940828 - already processed (1539/2608)
2025-12-03 11:06:41,061 [INFO] Skipping bill 1986602 - already processed (1540/2608)
2025-12-03 11:06:41,061 [INFO] Skipping bill 1988979 - already processed (1541/2608)
2025-12-03 11:06:41,061 [INFO] Skipping bill 2008057 - already processed (1542/2608)
2025-12-03 11:06:41,061 [INFO] Skipping bill 1986556 - already processed (1543/2608)
2025-12-03 11:06:41,061 [INFO] Skipping bill 1986569 - already processed (1544/2608)
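Every failure in this log is the same context_length_exceeded error: the serialized bill JSON alone pushes the prompt past the model's 128,000-token window (136,745 and 136,856 tokens above). One way to guard against this is to estimate the prompt's token count before calling the API and truncate the oversized input to fit. A rough sketch using a characters-per-token heuristic (the 4-chars-per-token ratio, the budget constants, and the function names are all assumptions; a real implementation would count with an actual tokenizer such as tiktoken):

```python
MAX_CONTEXT_TOKENS = 128_000  # model's context window, per the error message
RESPONSE_BUDGET = 4_000       # tokens reserved for the model's reply (assumed)
CHARS_PER_TOKEN = 4           # crude heuristic for English text; not exact


def estimate_tokens(text: str) -> int:
    # Ceiling of len/4: a rough stand-in for a real tokenizer count.
    return -(-len(text) // CHARS_PER_TOKEN)


def truncate_to_budget(bill_json: str, prompt_overhead: int = 1_000) -> str:
    # Shrink the serialized bill so prompt + overhead + reply fit in the window.
    # `prompt_overhead` approximates the instruction template's own tokens.
    budget = MAX_CONTEXT_TOKENS - RESPONSE_BUDGET - prompt_overhead
    if estimate_tokens(bill_json) <= budget:
        return bill_json
    return bill_json[: budget * CHARS_PER_TOKEN]
```

Truncating mid-document loses bill text, so a production version might instead drop the largest fields (e.g. full bill text versions) or summarize the document in chunks; this sketch only shows the pre-flight size check.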
2025-12-03 11:06:41,061 [INFO] Skipping bill 1988788 - already processed (1545/2608)
2025-12-03 11:06:41,061 [INFO] Skipping bill 2028551 - already processed (1546/2608)
2025-12-03 11:06:41,061 [INFO] Skipping bill 1937524 - already processed (1547/2608)
2025-12-03 11:06:41,061 [INFO] Skipping bill 1966994 - already processed (1548/2608)
2025-12-03 11:06:41,062 [INFO] Skipping bill 2030023 - already processed (1549/2608)
2025-12-03 11:06:41,062 [INFO] Skipping bill 1988713 - already processed (1550/2608)
2025-12-03 11:06:41,062 [INFO] Skipping bill 1988914 - already processed (1551/2608)
2025-12-03 11:06:41,062 [INFO] Skipping bill 2030055 - already processed (1552/2608)
2025-12-03 11:06:41,062 [INFO] Skipping bill 1666116 - already processed (1553/2608)
2025-12-03 11:06:41,062 [INFO] Skipping bill 1792231 - already processed (1554/2608)
2025-12-03 11:06:41,062 [INFO] Skipping bill 1802681 - already processed (1555/2608)
2025-12-03 11:06:41,062 [INFO] Skipping bill 1921522 - already processed (1556/2608)
2025-12-03 11:06:41,062 [INFO] Skipping bill 1999928 - already processed (1557/2608)
2025-12-03 11:06:41,062 [INFO] Skipping bill 2022730 - already processed (1558/2608)
2025-12-03 11:06:41,062 [INFO] Skipping bill 2024009 - already processed (1559/2608)
2025-12-03 11:06:41,062 [INFO] Skipping bill 1895318 - already processed (1560/2608)
2025-12-03 11:06:41,062 [INFO] Skipping bill 1944028 - already processed (1561/2608)
2025-12-03 11:06:41,062 [INFO] Skipping bill 1954350 - already processed (1562/2608)
2025-12-03 11:06:41,063 [INFO] Skipping bill 1954733 - already processed (1563/2608)
2025-12-03 11:06:41,063 [INFO] Skipping bill 2029172 - already processed (1564/2608)
2025-12-03 11:06:41,063 [INFO] Skipping bill 1944096 - already processed (1565/2608)
2025-12-03 11:06:41,063 [INFO] Skipping bill 1895182 - already processed (1566/2608)
2025-12-03 11:06:41,063 [INFO] Skipping bill 1919972 - already processed (1567/2608)
2025-12-03 11:06:41,063 [INFO] Skipping bill 1895637 - already processed (1568/2608)
2025-12-03 11:06:41,063 [INFO] Skipping bill 1819620 - already processed (1569/2608)
2025-12-03 11:06:41,063 [INFO] Skipping bill 1811138 - already processed (1570/2608)
2025-12-03 11:06:41,063 [INFO] Skipping bill 1948251 - already processed (1571/2608)
2025-12-03 11:06:41,063 [INFO] Skipping bill 1901594 - already processed (1572/2608)
2025-12-03 11:06:41,063 [INFO] Skipping bill 1833554 - already processed (1573/2608)
2025-12-03 11:06:41,063 [INFO] Skipping bill 1833050 - already processed (1574/2608)
2025-12-03 11:06:41,063 [INFO] Skipping bill 1830912 - already processed (1575/2608)
2025-12-03 11:06:41,063 [INFO] Skipping bill 1834207 - already processed (1576/2608)
2025-12-03 11:06:41,063 [INFO] Skipping bill 1795187 - already processed (1577/2608)
2025-12-03 11:06:41,063 [INFO] Skipping bill 1828458 - already processed (1578/2608)
2025-12-03 11:06:41,063 [INFO] Skipping bill 1808304 - already processed (1579/2608)
2025-12-03 11:06:41,063 [INFO] Skipping bill 1834240 - already processed (1580/2608)
2025-12-03 11:06:41,063 [INFO] Skipping bill 1831671 - already processed (1581/2608)
2025-12-03 11:06:41,064 [INFO] Skipping bill 1832378 - already processed (1582/2608)
2025-12-03 11:06:41,064 [INFO] Skipping bill 1828742 - already processed (1583/2608)
2025-12-03 11:06:41,064 [INFO] Skipping bill 1833429 - already processed (1584/2608)
2025-12-03 11:06:41,064 [INFO] Skipping bill 1828784 - already processed (1585/2608)
2025-12-03 11:06:41,064 [INFO] Skipping bill 1825620 - already processed (1586/2608)
2025-12-03 11:06:41,064 [INFO] Skipping bill 1799785 - already processed (1587/2608)
2025-12-03 11:06:41,064 [INFO] Skipping bill 1832466 - already processed (1588/2608)
2025-12-03 11:06:41,064 [INFO] Skipping bill 1831669 - already processed (1589/2608)
2025-12-03 11:06:41,064 [INFO] Skipping bill 1832147 - already processed (1590/2608)
2025-12-03 11:06:41,064 [INFO] Skipping bill 1831971 - already processed (1591/2608)
2025-12-03 11:06:41,064 [INFO] Skipping bill 1832437 - already processed (1592/2608)
2025-12-03 11:06:41,064 [INFO] Skipping bill 1828244 - already processed (1593/2608)
2025-12-03 11:06:41,064 [INFO] Skipping bill 1833731 - already processed (1594/2608)
2025-12-03 11:06:41,064 [INFO] Skipping bill 1833264 - already processed (1595/2608)
2025-12-03 11:06:41,064 [INFO] Skipping bill 1833393 - already processed (1596/2608)
2025-12-03 11:06:41,064 [INFO] Skipping bill 1825869 - already processed (1597/2608)
2025-12-03 11:06:41,064 [INFO] Skipping bill 1825916 - already processed (1598/2608)
2025-12-03 11:06:41,064 [INFO] Skipping bill 1873399 - already processed (1599/2608)
2025-12-03 11:06:41,064 [INFO] Skipping bill 1826595 - already processed (1600/2608)
2025-12-03 11:06:41,064 [INFO] Skipping bill 1832185 - already processed (1601/2608)
2025-12-03 11:06:41,065 [INFO] Skipping bill 1832434 - already processed (1602/2608)
2025-12-03 11:06:41,065 [INFO] Skipping bill 1831535 - already processed (1603/2608)
2025-12-03 11:06:41,065 [INFO] Skipping bill 1834179 - already processed (1604/2608)
2025-12-03 11:06:41,065 [INFO] Skipping bill 1834106 - already processed (1605/2608)
2025-12-03 11:06:41,065 [INFO] Skipping bill 1946381 - already processed (1606/2608)
2025-12-03 11:06:41,065 [INFO] Skipping bill 1953992 - already processed (1607/2608)
2025-12-03 11:06:41,065 [INFO] Skipping bill 1948149 - already processed (1608/2608)
2025-12-03 11:06:41,065 [INFO] Skipping bill 1959470 - already processed (1609/2608)
2025-12-03 11:06:41,065 [INFO] Skipping bill 1946783 - already processed (1610/2608)
2025-12-03 11:06:41,065 [INFO] Skipping bill 1955110 - already processed (1611/2608)
2025-12-03 11:06:41,065 [INFO] Skipping bill 1959302 - already processed (1612/2608)
2025-12-03 11:06:41,065 [INFO] Skipping bill 1959458 - already processed (1613/2608)
2025-12-03 11:06:41,065 [INFO] Skipping bill 1960722 - already processed (1614/2608)
2025-12-03 11:06:41,065 [INFO] Skipping bill 1951003 - already processed (1615/2608)
2025-12-03 11:06:41,065 [INFO] Skipping bill 1954702 - already processed (1616/2608)
2025-12-03 11:06:41,065 [INFO] Skipping bill 1954311 - already processed (1617/2608)
2025-12-03 11:06:41,065 [INFO] Skipping bill 1959312 - already processed (1618/2608)
2025-12-03 11:06:41,065 [INFO] Skipping bill 1959377 - already processed (1619/2608)
2025-12-03 11:06:41,065 [INFO] Skipping bill 1954015 - already processed (1620/2608)
2025-12-03 11:06:41,065 [INFO] Skipping bill 1954357 - already processed (1621/2608)
2025-12-03 11:06:41,066 [INFO] Skipping bill 1944274 - already processed (1622/2608)
2025-12-03 11:06:41,066 [INFO] Skipping bill 1944487 - already processed (1623/2608)
2025-12-03 11:06:41,066 [INFO] Skipping bill 1959723 - already processed (1624/2608)
2025-12-03 11:06:41,066 [INFO] Skipping bill 1960832 - already processed (1625/2608)
2025-12-03 11:06:41,066 [INFO] Skipping bill 1971015 - already processed (1626/2608)
2025-12-03 11:06:41,066 [INFO] Skipping bill 1971366 - already processed (1627/2608)
2025-12-03 11:06:41,066 [INFO] Skipping bill 1733375 - already processed (1628/2608)
2025-12-03 11:06:41,066 [INFO] Skipping bill 1700527 - already processed (1629/2608)
2025-12-03 11:06:41,066 [INFO] Skipping bill 1719413 - already processed (1630/2608)
2025-12-03 11:06:41,066 [INFO] Skipping bill 1694457 - already processed (1631/2608)
2025-12-03 11:06:41,066 [INFO] Skipping bill 1744060 - already processed (1632/2608)
2025-12-03 11:06:41,066 [INFO] Skipping bill 1727826 - already processed (1633/2608)
2025-12-03 11:06:41,066 [INFO] Skipping bill 1743424 - already processed (1634/2608)
2025-12-03 11:06:41,066 [INFO] Skipping bill 1732248 - already processed (1635/2608)
2025-12-03 11:06:41,066 [INFO] Skipping bill 1731629 - already processed (1636/2608)
2025-12-03 11:06:41,066 [INFO] Skipping bill 1769317 - already processed (1637/2608)
2025-12-03 11:06:41,066 [INFO] Skipping bill 1747471 - already processed (1638/2608)
2025-12-03 11:06:41,066 [INFO] Skipping bill 1747557 - already processed (1639/2608)
2025-12-03 11:06:41,066 [INFO] Skipping bill 1710763 - already processed (1640/2608)
2025-12-03 11:06:41,067 [INFO] Skipping bill 1782999 - already processed (1641/2608)
2025-12-03 11:06:41,067 [INFO] Skipping bill 1781207 - already processed (1642/2608)
2025-12-03 11:06:41,067 [INFO] Skipping bill 1726065 - already processed (1643/2608)
2025-12-03 11:06:41,067 [INFO] Skipping bill 1898826 - already processed (1644/2608)
2025-12-03 11:06:41,067 [INFO] Skipping bill 1992725 - already processed (1645/2608)
2025-12-03 11:06:41,067 [INFO] Skipping bill 1988473 - already processed (1646/2608)
2025-12-03 11:06:41,067 [INFO] Skipping bill 1970030 - already processed (1647/2608)
2025-12-03 11:06:41,067 [INFO] Skipping bill 2007109 - already processed (1648/2608)
2025-12-03 11:06:41,067 [INFO] Skipping bill 1891805 - already processed (1649/2608)
2025-12-03 11:06:41,067 [INFO] Skipping bill 1949957 - already processed (1650/2608)
2025-12-03 11:06:41,067 [INFO] Skipping bill 1990181 - already processed (1651/2608)
2025-12-03 11:06:41,067 [INFO] Skipping bill 1991711 - already processed (1652/2608)
2025-12-03 11:06:41,067 [INFO] Skipping bill 1897779 - already processed (1653/2608)
2025-12-03 11:06:41,067 [INFO] Skipping bill 2006851 - already processed (1654/2608)
2025-12-03 11:06:41,067 [INFO] Skipping bill 1975361 - already processed (1655/2608)
2025-12-03 11:06:41,067 [INFO] Skipping bill 1987235 - already processed (1656/2608)
2025-12-03 11:06:41,067 [INFO] Skipping bill 2007736 - already processed (1657/2608)
2025-12-03 11:06:41,067 [INFO] Skipping bill 2000200 - already processed (1658/2608)
2025-12-03 11:06:41,067 [INFO] Skipping bill 1923991 - already processed (1659/2608)
2025-12-03 11:06:41,067 [INFO] Skipping bill 1892858 - already processed (1660/2608)
2025-12-03 11:06:41,068 [INFO] Skipping bill 2000248 - already processed (1661/2608)
2025-12-03 11:06:41,068 [INFO] Skipping bill 1971072 - already processed (1662/2608)
2025-12-03 11:06:41,068 [INFO] Skipping bill 2008077 - already processed (1663/2608)
2025-12-03 11:06:41,068 [INFO] Skipping bill 1907668 - already processed (1664/2608)
2025-12-03 11:06:41,068 [INFO] Skipping bill 1962916 - already processed (1665/2608)
2025-12-03 11:06:41,068 [INFO] Skipping bill 2005286 - already processed (1666/2608)
2025-12-03 11:06:41,068 [INFO] Skipping bill 2005181 - already processed (1667/2608)
2025-12-03 11:06:41,068 [INFO] Skipping bill 1891063 - already processed (1668/2608)
2025-12-03 11:06:41,068 [INFO] Skipping bill 1900186 - already processed (1669/2608)
2025-12-03 11:06:41,068 [INFO] Skipping bill 1994657 - already processed (1670/2608)
2025-12-03 11:06:41,068 [INFO] Skipping bill 2008307 - already processed (1671/2608)
2025-12-03 11:06:41,068 [INFO] Skipping bill 1991260 - already processed (1672/2608)
2025-12-03 11:06:41,068 [INFO] Skipping bill 2006384 - already processed (1673/2608)
2025-12-03 11:06:41,068 [INFO] Skipping bill 2002051 - already processed (1674/2608)
2025-12-03 11:06:41,068 [INFO] Skipping bill 1973236 - already processed (1675/2608)
2025-12-03 11:06:41,068 [INFO] Skipping bill 2007316 - already processed (1676/2608)
2025-12-03 11:06:41,068 [INFO] Skipping bill 1890894 - already processed (1677/2608)
2025-12-03 11:06:41,068 [INFO] Skipping bill 2000178 - already processed (1678/2608)
2025-12-03 11:06:41,068 [INFO] Skipping bill 1982970 - already processed (1679/2608)
2025-12-03 11:06:41,068 [INFO] Skipping bill 2006497 - already processed (1680/2608)
2025-12-03 11:06:41,068 [INFO] Skipping bill 1890775 - already processed (1681/2608)
2025-12-03 11:06:41,068 [INFO] Skipping bill 1892224 - already processed (1682/2608)
2025-12-03 11:06:41,068 [INFO] Skipping bill 1954141 - already processed (1683/2608)
2025-12-03 11:06:41,068 [INFO] Skipping bill 2006579 - already processed (1684/2608)
2025-12-03 11:06:41,068 [INFO] Skipping bill 2006128 - already processed (1685/2608)
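Notice that the run keeps going after each failed bill: the `openai.BadRequestError` is caught and logged per bill, rather than allowed to abort the whole pass. A minimal sketch of that containment (the logging format matches the log above; `process_all` and the `generate` callback are assumed names, not the actual script's API):

```python
import logging

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s [%(levelname)s] %(message)s")
log = logging.getLogger(__name__)


def process_all(bills, generate):
    # Collect failures instead of aborting: a bill that raises gets logged,
    # and its ID is returned so a follow-up pass can handle it separately
    # (e.g. with a truncated prompt or a larger-context model).
    failed = []
    for bill in bills:
        try:
            generate(bill)
        except Exception as exc:
            log.error("Failed to generate report for bill %s: %s",
                      bill["bill_id"], exc)
            failed.append(bill["bill_id"])
    return failed
```

Returning the failed IDs (rather than only logging them) would make the oversized bills above easy to re-run in a dedicated pass instead of leaving them buried in the log.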
2025-12-03 11:06:41,069 [INFO] Skipping bill 2024097 - already processed (1686/2608)
2025-12-03 11:06:41,069 [INFO] Skipping bill 2034878 - already processed (1687/2608)
2025-12-03 11:06:41,069 [INFO] Skipping bill 1891396 - already processed (1688/2608)
2025-12-03 11:06:41,069 [INFO] Skipping bill 2040103 - already processed (1689/2608)
2025-12-03 11:06:41,069 [INFO] Skipping bill 2041986 - already processed (1690/2608)
2025-12-03 11:06:41,069 [INFO] Skipping bill 1987712 - already processed (1691/2608)
2025-12-03 11:06:41,069 [INFO] Skipping bill 2005998 - already processed (1692/2608)
2025-12-03 11:06:41,069 [INFO] Skipping bill 2008318 - already processed (1693/2608)
2025-12-03 11:06:41,069 [INFO] Skipping bill 1892843 - already processed (1694/2608)
2025-12-03 11:06:41,069 [INFO] Skipping bill 1946392 - already processed (1695/2608)
2025-12-03 11:06:41,069 [INFO] Skipping bill 1971169 - already processed (1696/2608)
2025-12-03 11:06:41,069 [INFO] Skipping bill 1890786 - already processed (1697/2608)
2025-12-03 11:06:41,069 [INFO] Skipping bill 1891256 - already processed (1698/2608)
2025-12-03 11:06:41,069 [INFO] Skipping bill 1942882 - already processed (1699/2608)
2025-12-03 11:06:41,069 [INFO] Skipping bill 2031981 - already processed (1700/2608)
2025-12-03 11:06:41,069 [INFO] Skipping bill 2033602 - already processed (1701/2608)
2025-12-03 11:06:41,069 [INFO] Skipping bill 2034279 - already processed (1702/2608)
2025-12-03 11:06:41,069 [INFO] Skipping bill 1974704 - already processed (1703/2608)
2025-12-03 11:06:41,069 [INFO] Skipping bill 1950849 - already processed (1704/2608)
2025-12-03 11:06:41,069 [INFO] Skipping bill 1975022 - already processed (1705/2608)
2025-12-03 11:06:41,069 [INFO] Skipping bill 1981850 - already processed (1706/2608)
2025-12-03 11:06:41,069 [INFO] Skipping bill 1890492 - already processed (1707/2608)
2025-12-03 11:06:41,069 [INFO] Skipping bill 2020803 - already processed (1708/2608)
2025-12-03 11:06:41,069 [INFO] Skipping bill 2005343 - already processed (1709/2608)
2025-12-03 11:06:41,070 [INFO] Skipping bill 1890466 - already processed (1710/2608)
2025-12-03 11:06:41,070 [INFO] Skipping bill 1975612 - already processed (1711/2608)
2025-12-03 11:06:41,070 [INFO] Skipping bill 1994176 - already processed (1712/2608)
2025-12-03 11:06:41,070 [INFO] Skipping bill 1990550 - already processed (1713/2608)
2025-12-03 11:06:41,070 [INFO] Skipping bill 1891411 - already processed (1714/2608)
2025-12-03 11:06:41,070 [INFO] Skipping bill 1983542 - already processed (1715/2608)
2025-12-03 11:06:41,070 [INFO] Skipping bill 1999872 - already processed (1716/2608)
2025-12-03 11:06:41,070 [INFO] Skipping bill 2007449 - already processed (1717/2608)
2025-12-03 11:06:41,070 [INFO] Skipping bill 2039972 - already processed (1718/2608)
2025-12-03 11:06:41,070 [INFO] Skipping bill 1892428 - already processed (1719/2608)
2025-12-03 11:06:41,070 [INFO] Skipping bill 1891501 - already processed (1720/2608)
2025-12-03 11:06:41,070 [INFO] Skipping bill 2007840 - already processed (1721/2608)
2025-12-03 11:06:41,070 [INFO] Skipping bill 1976041 - already processed (1722/2608)
2025-12-03 11:06:41,070 [INFO] Skipping bill 1992763 - already processed (1723/2608)
2025-12-03 11:06:41,070 [INFO] Skipping bill 1993770 - already processed (1724/2608)
2025-12-03 11:06:41,070 [INFO] Skipping bill 2007872 - already processed (1725/2608)
2025-12-03 11:06:41,070 [INFO] Skipping bill 1936766 - already processed (1726/2608)
2025-12-03 11:06:41,070 [INFO] Skipping bill 1676049 - already processed (1727/2608)
2025-12-03 11:06:41,070 [INFO] Processing 1728/2608: Bill ID 1704512
2025-12-03 11:06:41,586 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:06:41,588 [ERROR] Failed to generate report for bill 1704512: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 178116 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
           ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 178116 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:06:42,597 [INFO] Skipping bill 1828750 - already processed (1729/2608)
2025-12-03 11:06:42,598 [INFO] Skipping bill 1823594 - already processed (1730/2608)
2025-12-03 11:06:42,598 [INFO] Skipping bill 1820331 - already processed (1731/2608)
2025-12-03 11:06:42,598 [INFO] Skipping bill 1810219 - already processed (1732/2608)
2025-12-03 11:06:42,598 [INFO] Skipping bill 1813477 - already processed (1733/2608)
2025-12-03 11:06:42,598 [INFO] Skipping bill 1858814 - already processed (1734/2608)
2025-12-03 11:06:42,598 [INFO] Skipping bill 1882805 - already processed (1735/2608)
2025-12-03 11:06:42,598 [INFO] Skipping bill 1811586 - already processed (1736/2608)
2025-12-03 11:06:42,598 [INFO] Skipping bill 1794392 - already processed (1737/2608)
2025-12-03 11:06:42,598 [INFO] Processing 1738/2608: Bill ID 1844899
2025-12-03 11:06:43,217 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:06:43,219 [ERROR] Failed to generate report for bill 1844899: Error code: 400 - {'error': {'message':
"This model's maximum context length is 128000 tokens. However, your messages resulted in 150202 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
      **kwargs,
      ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
           ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 150202 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:06:44,227 [INFO] Skipping bill 1954171 - already processed (1739/2608)
2025-12-03 11:06:44,228 [INFO] Skipping bill 1911041 - already processed (1740/2608)
2025-12-03 11:06:44,228 [INFO] Skipping bill 1963098 - already processed (1741/2608)
2025-12-03 11:06:44,228 [INFO] Skipping bill 1943827 - already processed (1742/2608)
2025-12-03 11:06:44,228 [INFO] Skipping bill 1968353 - already processed (1743/2608)
2025-12-03 11:06:44,228 [INFO] Skipping bill 1981617 - already processed (1744/2608)
2025-12-03 11:06:44,228 [INFO] Skipping bill 1995499 - already processed (1745/2608)
2025-12-03 11:06:44,228 [INFO] Skipping bill 1954569 - already processed (1746/2608)
2025-12-03 11:06:44,228 [INFO] Skipping bill 1950395 - already processed (1747/2608)
2025-12-03 11:06:44,228 [INFO] Skipping bill 1989323 - already processed (1748/2608)
2025-12-03 11:06:44,228 [INFO] Skipping bill 1904576 - already processed (1749/2608)
2025-12-03 11:06:44,228 [INFO] Skipping bill 1968434 - already processed (1750/2608)
2025-12-03 11:06:44,229 [INFO] Processing
1751/2608: Bill ID 2046115
2025-12-03 11:06:45,265 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:06:45,267 [ERROR] Failed to generate report for bill 2046115: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 321718 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:06:46,276 [INFO] Skipping bill 1912099 - already processed (1752/2608)
2025-12-03 11:06:46,277 [INFO] Skipping bill 1946923 - already processed (1753/2608)
2025-12-03 11:06:46,277 [INFO] Processing 1754/2608: Bill ID 2046119
2025-12-03 11:06:47,011 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:06:47,013 [ERROR] Failed to generate report for bill 2046119: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 259421 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:06:48,021 [INFO] Processing 1755/2608: Bill ID 1897901
2025-12-03 11:06:49,470 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:06:49,471 [ERROR] Failed to generate report for bill 1897901: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 499565 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:06:50,477 [INFO] Processing 1756/2608: Bill ID 1948482
2025-12-03 11:06:51,377 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:06:51,380 [ERROR] Failed to generate report for bill 1948482: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 283315 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:06:52,386 [INFO] Skipping bill 1800317 - already processed (1757/2608)
2025-12-03 11:06:52,387 [INFO] Skipping bill 1800156 - already processed (1758/2608)
2025-12-03 11:06:52,387 [INFO] Skipping bill 1854552 - already processed (1759/2608)
2025-12-03 11:06:52,387 [INFO] Skipping bill 1680053 - already processed (1760/2608)
2025-12-03 11:06:52,387 [INFO] Skipping bill 1682772 - already processed (1761/2608)
2025-12-03 11:06:52,387 [INFO] Skipping bill 1737434 - already processed (1762/2608)
2025-12-03 11:06:52,388 [INFO] Skipping bill 1981655 - already processed (1763/2608)
2025-12-03 11:06:52,388 [INFO] Skipping bill 1982851 - already processed (1764/2608)
2025-12-03 11:06:52,388 [INFO] Skipping bill 1934587 - already processed (1765/2608)
2025-12-03 11:06:52,388 [INFO] Skipping bill 1981303 - already processed (1766/2608)
2025-12-03 11:06:52,388 [INFO] Skipping bill 1983676 - already processed (1767/2608)
2025-12-03 11:06:52,388 [INFO] Skipping bill 1969845 - already processed (1768/2608)
2025-12-03 11:06:52,389 [INFO] Skipping bill
1983355 - already processed (1769/2608)
2025-12-03 11:06:52,389 [INFO] Skipping bill 2009795 - already processed (1770/2608)
2025-12-03 11:06:52,389 [INFO] Skipping bill 1973485 - already processed (1771/2608)
2025-12-03 11:06:52,389 [INFO] Skipping bill 1967494 - already processed (1772/2608)
2025-12-03 11:06:52,389 [INFO] Skipping bill 1973283 - already processed (1773/2608)
2025-12-03 11:06:52,389 [INFO] Skipping bill 1639846 - already processed (1774/2608)
2025-12-03 11:06:52,389 [INFO] Skipping bill 1646426 - already processed (1775/2608)
2025-12-03 11:06:52,389 [INFO] Skipping bill 1673591 - already processed (1776/2608)
2025-12-03 11:06:52,390 [INFO] Skipping bill 1639749 - already processed (1777/2608)
2025-12-03 11:06:52,390 [INFO] Skipping bill 1655379 - already processed (1778/2608)
2025-12-03 11:06:52,390 [INFO] Skipping bill 1630766 - already processed (1779/2608)
2025-12-03 11:06:52,390 [INFO] Skipping bill 1630878 - already processed (1780/2608)
2025-12-03 11:06:52,392 [INFO] Skipping bill 1630898 - already processed (1781/2608)
2025-12-03 11:06:52,392 [INFO] Skipping bill 1645265 - already processed (1782/2608)
2025-12-03 11:06:52,392 [INFO] Skipping bill 1650459 - already processed (1783/2608)
2025-12-03 11:06:52,392 [INFO] Skipping bill 1645172 - already processed (1784/2608)
2025-12-03 11:06:52,392 [INFO] Skipping bill 1630804 - already processed (1785/2608)
2025-12-03 11:06:52,392 [INFO] Skipping bill 1630761 - already processed (1786/2608)
2025-12-03 11:06:52,393 [INFO] Skipping bill 1652712 - already processed (1787/2608)
2025-12-03 11:06:52,393 [INFO] Skipping bill 1633968 - already processed (1788/2608)
2025-12-03 11:06:52,393 [INFO] Skipping bill 1644865 - already processed (1789/2608)
2025-12-03 11:06:52,393 [INFO] Skipping bill 1645061 - already processed (1790/2608)
2025-12-03 11:06:52,393 [INFO] Skipping bill 1809843 - already processed (1791/2608)
2025-12-03 11:06:52,393 [INFO] Skipping bill 1811981 - already processed (1792/2608)
2025-12-03 11:06:52,393 [INFO] Skipping bill 1812040 - already processed (1793/2608)
2025-12-03 11:06:52,393 [INFO] Skipping bill 1798563 - already processed (1794/2608)
2025-12-03 11:06:52,393 [INFO] Skipping bill 1807894 - already processed (1795/2608)
2025-12-03 11:06:52,393 [INFO] Skipping bill 1798580 - already processed (1796/2608)
2025-12-03 11:06:52,393 [INFO] Skipping bill 1800951 - already processed (1797/2608)
2025-12-03 11:06:52,393 [INFO] Skipping bill 1808295 - already processed (1798/2608)
2025-12-03 11:06:52,393 [INFO] Skipping bill 1799462 - already processed (1799/2608)
2025-12-03 11:06:52,393 [INFO] Skipping bill 1808024 - already processed (1800/2608)
2025-12-03 11:06:52,393 [INFO] Skipping bill 1807991 - already processed (1801/2608)
2025-12-03 11:06:52,393 [INFO] Skipping bill 1812376 - already processed (1802/2608)
2025-12-03 11:06:52,393 [INFO] Skipping bill 1822475 - already processed (1803/2608)
2025-12-03 11:06:52,393 [INFO] Skipping bill 1811644 - already processed (1804/2608)
2025-12-03 11:06:52,393 [INFO] Skipping bill 1794980 - already processed (1805/2608)
2025-12-03 11:06:52,394 [INFO] Skipping bill 1808264 - already processed (1806/2608)
2025-12-03 11:06:52,394 [INFO] Skipping bill 1801793 - already processed (1807/2608)
2025-12-03 11:06:52,394 [INFO] Skipping bill 1799221 - already processed (1808/2608)
2025-12-03 11:06:52,394 [INFO] Skipping bill 1822208 - already processed (1809/2608)
2025-12-03 11:06:52,394 [INFO] Skipping bill 1800673 - already processed (1810/2608)
2025-12-03 11:06:52,394 [INFO] Skipping bill 1809026 - already processed (1811/2608)
2025-12-03 11:06:52,394 [INFO] Skipping bill 1812182 - already processed (1812/2608)
2025-12-03 11:06:52,394 [INFO] Skipping bill 1886330 - already processed (1813/2608)
2025-12-03 11:06:52,394 [INFO] Skipping bill 1904645 - already processed (1814/2608)
2025-12-03 11:06:52,394 [INFO] Skipping bill 1911036 - already processed (1815/2608)
2025-12-03 11:06:52,394 [INFO] Skipping bill
1904674 - already processed (1816/2608)
2025-12-03 11:06:52,394 [INFO] Skipping bill 1901323 - already processed (1817/2608)
2025-12-03 11:06:52,394 [INFO] Skipping bill 1904347 - already processed (1818/2608)
2025-12-03 11:06:52,394 [INFO] Skipping bill 1925485 - already processed (1819/2608)
2025-12-03 11:06:52,394 [INFO] Skipping bill 1886222 - already processed (1820/2608)
2025-12-03 11:06:52,394 [INFO] Skipping bill 1905613 - already processed (1821/2608)
2025-12-03 11:06:52,394 [INFO] Skipping bill 1912330 - already processed (1822/2608)
2025-12-03 11:06:52,394 [INFO] Skipping bill 1914968 - already processed (1823/2608)
2025-12-03 11:06:52,394 [INFO] Skipping bill 1925408 - already processed (1824/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1886065 - already processed (1825/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1905445 - already processed (1826/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1905965 - already processed (1827/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1886188 - already processed (1828/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1905894 - already processed (1829/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1912145 - already processed (1830/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1927784 - already processed (1831/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1941702 - already processed (1832/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1929947 - already processed (1833/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1905942 - already processed (1834/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1912012 - already processed (1835/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1905698 - already processed (1836/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1886051 - already processed (1837/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1932239 - already processed (1838/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1932502 - already processed (1839/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1885937 - already processed (1840/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1900803 - already processed (1841/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1905712 - already processed (1842/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1905995 - already processed (1843/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1902641 - already processed (1844/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1905891 - already processed (1845/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1905860 - already processed (1846/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1908254 - already processed (1847/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1905920 - already processed (1848/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1886241 - already processed (1849/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1886007 - already processed (1850/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1896347 - already processed (1851/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1905982 - already processed (1852/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1898426 - already processed (1853/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1791614 - already processed (1854/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1792210 - already processed (1855/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1825997 - already processed (1856/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1792205 - already processed (1857/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1801141 - already processed (1858/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1796759 - already processed (1859/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1794124 - already processed (1860/2608)
2025-12-03 11:06:52,395 [INFO] Skipping bill 1680711 - already processed (1861/2608)
2025-12-03 11:06:52,396 [INFO] Skipping bill 1686234 - already processed (1862/2608)
2025-12-03 11:06:52,396 [INFO] Skipping bill
1813390 - already processed (1863/2608) 2025-12-03 11:06:52,396 [INFO] Skipping bill 1797745 - already processed (1864/2608) 2025-12-03 11:06:52,396 [INFO] Skipping bill 1810331 - already processed (1865/2608) 2025-12-03 11:06:52,396 [INFO] Skipping bill 1813358 - already processed (1866/2608) 2025-12-03 11:06:52,396 [INFO] Skipping bill 1657734 - already processed (1867/2608) 2025-12-03 11:06:52,396 [INFO] Processing 1868/2608: Bill ID 1644054 2025-12-03 11:06:53,496 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-03 11:06:53,498 [ERROR] Failed to generate report for bill 1644054: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 410788 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... 
**kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... **kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return 
self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 410788 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-03 11:06:54,507 [INFO] Processing 1869/2608: Bill ID 1645282 2025-12-03 11:06:55,812 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-03 11:06:55,813 [ERROR] Failed to generate report for bill 1645282: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 410770 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 410770 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-03 11:06:56,820 [INFO] Processing 1870/2608: Bill ID 1644063 2025-12-03 11:06:57,451 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-03 11:06:57,452 [ERROR] Failed to generate report for bill 1644063: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 224071 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 224071 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-03 11:06:57,496 [INFO] Saved 2605 reports to data/bill_reports.json 2025-12-03 11:06:57,497 [INFO] Progress: 1870/2608 - Processed: 0, Skipped: 1789, Errors: 81 2025-12-03 11:06:58,502 [INFO] Processing 1871/2608: Bill ID 1645384 2025-12-03 11:06:59,192 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-03 11:06:59,194 [ERROR] Failed to generate report for bill 1645384: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 224065 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 224065 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-03 11:07:00,203 [INFO] Processing 1872/2608: Bill ID 1645468 2025-12-03 11:07:00,934 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-03 11:07:00,935 [ERROR] Failed to generate report for bill 1645468: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 242533 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 242533 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-03 11:07:01,944 [INFO] Processing 1873/2608: Bill ID 1796787 2025-12-03 11:07:03,293 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-03 11:07:03,296 [ERROR] Failed to generate report for bill 1796787: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 436514 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 436514 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-03 11:07:04,304 [INFO] Processing 1874/2608: Bill ID 1643905 2025-12-03 11:07:05,030 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-03 11:07:05,032 [ERROR] Failed to generate report for bill 1643905: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 242552 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 242552 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-03 11:07:06,042 [INFO] Processing 1875/2608: Bill ID 1796722 2025-12-03 11:07:07,236 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-03 11:07:07,238 [ERROR] Failed to generate report for bill 1796722: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 436532 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 436532 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-03 11:07:08,244 [INFO] Skipping bill 1952329 - already processed (1876/2608) 2025-12-03 11:07:08,245 [INFO] Skipping bill 1964254 - already processed (1877/2608) 2025-12-03 11:07:08,245 [INFO] Skipping bill 1904212 - already processed (1878/2608) 2025-12-03 11:07:08,245 [INFO] Skipping bill 1903879 - already processed (1879/2608) 2025-12-03 11:07:08,245 [INFO] Skipping bill 1930459 - already processed (1880/2608) 2025-12-03 11:07:08,245 [INFO] Skipping bill 1938736 - already processed (1881/2608) 2025-12-03 11:07:08,246 [INFO] Skipping bill 1941657 - already processed (1882/2608) 2025-12-03 11:07:08,246 [INFO] Skipping bill 1932498 - already processed (1883/2608) 2025-12-03 11:07:08,246 [INFO] Skipping bill 1898840 - already processed (1884/2608) 2025-12-03 11:07:08,246 [INFO] Skipping bill 1903962 - already processed (1885/2608) 2025-12-03 11:07:08,246 [INFO] Skipping bill 1943677 - already processed (1886/2608) 2025-12-03 11:07:08,246 [INFO] Skipping bill 1911202 - already processed (1887/2608) 2025-12-03 11:07:08,246 [INFO] Skipping bill 
1898343 - already processed (1888/2608) 2025-12-03 11:07:08,246 [INFO] Skipping bill 1930701 - already processed (1889/2608) 2025-12-03 11:07:08,247 [INFO] Skipping bill 1911699 - already processed (1890/2608) 2025-12-03 11:07:08,247 [INFO] Skipping bill 1985707 - already processed (1891/2608) 2025-12-03 11:07:08,247 [INFO] Skipping bill 2025140 - already processed (1892/2608) 2025-12-03 11:07:08,247 [INFO] Processing 1893/2608: Bill ID 1916784 2025-12-03 11:07:08,921 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-03 11:07:08,923 [ERROR] Failed to generate report for bill 1916784: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 217357 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... 
**kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... **kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return 
self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 217357 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-03 11:07:09,931 [INFO] Processing 1894/2608: Bill ID 1908012 2025-12-03 11:07:11,276 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-03 11:07:11,278 [ERROR] Failed to generate report for bill 1908012: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 458968 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 458968 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-03 11:07:12,287 [INFO] Processing 1895/2608: Bill ID 1907961 2025-12-03 11:07:13,529 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-03 11:07:13,531 [ERROR] Failed to generate report for bill 1907961: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 458948 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 458948 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-03 11:07:14,541 [INFO] Processing 1896/2608: Bill ID 1907826 2025-12-03 11:07:15,372 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-03 11:07:15,374 [ERROR] Failed to generate report for bill 1907826: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 284007 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 284007 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-03 11:07:16,382 [INFO] Processing 1897/2608: Bill ID 2023840 2025-12-03 11:07:18,342 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-03 11:07:18,344 [ERROR] Failed to generate report for bill 2023840: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 709732 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 709732 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-03 11:07:19,354 [INFO] Processing 1898/2608: Bill ID 1907778 2025-12-03 11:07:20,287 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-03 11:07:20,290 [ERROR] Failed to generate report for bill 1907778: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 284021 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 284021 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-03 11:07:21,300 [INFO] Skipping bill 1691917 - already processed (1899/2608) 2025-12-03 11:07:21,300 [INFO] Skipping bill 1695960 - already processed (1900/2608) 2025-12-03 11:07:21,300 [INFO] Skipping bill 1850601 - already processed (1901/2608) 2025-12-03 11:07:21,301 [INFO] Skipping bill 1838098 - already processed (1902/2608) 2025-12-03 11:07:21,301 [INFO] Skipping bill 1842521 - already processed (1903/2608) 2025-12-03 11:07:21,301 [INFO] Skipping bill 1809518 - already processed (1904/2608) 2025-12-03 11:07:21,301 [INFO] Skipping bill 1839623 - already processed (1905/2608) 2025-12-03 11:07:21,301 [INFO] Skipping bill 1836854 - already processed (1906/2608) 2025-12-03 11:07:21,301 [INFO] Skipping bill 1828203 - already processed (1907/2608) 2025-12-03 11:07:21,301 [INFO] Skipping bill 1823415 - already processed (1908/2608) 2025-12-03 11:07:21,302 [INFO] Processing 1909/2608: Bill ID 1809702 2025-12-03 11:07:22,336 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-03 11:07:22,340 [ERROR] 
Failed to generate report for bill 1809702: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 287475 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 287475 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:07:23,351 [INFO] Processing 1910/2608: Bill ID 1812739
2025-12-03 11:07:24,383 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:07:24,385 [ERROR] Failed to generate report for bill 1812739: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 287482 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:07:24,435 [INFO] Saved 2605 reports to data/bill_reports.json
2025-12-03 11:07:24,435 [INFO] Progress: 1910/2608 - Processed: 0, Skipped: 1816, Errors: 94
2025-12-03 11:07:25,442 [INFO] Skipping bill 1993190 - already processed (1911/2608)
2025-12-03 11:07:25,442 [INFO] Skipping bill 2009723 - already processed (1912/2608)
2025-12-03 11:07:25,442 [INFO] Skipping bill 1970932 - already processed (1913/2608)
2025-12-03 11:07:25,442 [INFO] Skipping bill 1990795 - already processed (1914/2608)
2025-12-03 11:07:25,442 [INFO] Skipping bill 1966877 - already processed (1915/2608)
2025-12-03 11:07:25,442 [INFO] Skipping bill 1972008 - already processed (1916/2608)
2025-12-03 11:07:25,442 [INFO] Skipping bill 1994548 - already processed (1917/2608)
2025-12-03 11:07:25,443 [INFO] Skipping bill 1991745 - already processed (1918/2608)
2025-12-03 11:07:25,443 [INFO] Skipping bill 2010818 - already processed (1919/2608)
2025-12-03 11:07:25,443 [INFO] Skipping bill 2003316 - already processed (1920/2608)
2025-12-03 11:07:25,443 [INFO] Skipping bill 2021830 - already processed (1921/2608)
2025-12-03 11:07:25,443 [INFO] Skipping bill 2009667 - already processed (1922/2608)
2025-12-03 11:07:25,443 [INFO] Skipping bill 2011559 - already processed (1923/2608)
2025-12-03 11:07:25,443 [INFO] Skipping bill 1981081 - already processed (1924/2608)
2025-12-03 11:07:25,443 [INFO] Skipping bill 1990559 - already processed (1925/2608)
2025-12-03 11:07:25,443 [INFO] Skipping bill 1968858 - already processed (1926/2608)
2025-12-03 11:07:25,443 [INFO] Skipping bill 1841344 - already processed (1927/2608)
2025-12-03 11:07:25,443 [INFO] Skipping bill 1837111 - already processed (1928/2608)
2025-12-03 11:07:25,443 [INFO] Skipping bill 1783445 - already processed (1929/2608)
2025-12-03 11:07:25,443 [INFO] Skipping bill 1854251 - already processed (1930/2608)
2025-12-03 11:07:25,443 [INFO] Skipping bill 1867071 - already processed (1931/2608)
2025-12-03 11:07:25,443 [INFO] Skipping bill 1782940 - already processed (1932/2608)
2025-12-03 11:07:25,444 [INFO] Skipping bill 1780646 - already processed (1933/2608)
2025-12-03 11:07:25,444 [INFO] Skipping bill 1781005 - already processed (1934/2608)
2025-12-03 11:07:25,444 [INFO] Processing 1935/2608: Bill ID 1709614
2025-12-03 11:07:28,070 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:07:28,072 [ERROR] Failed to generate report for bill 1709614: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 980737 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:07:29,077 [INFO] Processing 1936/2608: Bill ID 1709655
2025-12-03 11:07:31,663 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:07:31,665 [ERROR] Failed to generate report for bill 1709655: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 982574 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:07:32,674 [INFO] Skipping bill 2034598 - already processed (1937/2608)
2025-12-03 11:07:32,675 [INFO] Skipping bill 2034722 - already processed (1938/2608)
2025-12-03 11:07:32,675 [INFO] Skipping bill 2038518 - already processed (1939/2608)
2025-12-03 11:07:32,675 [INFO] Skipping bill 2039752 - already processed (1940/2608)
2025-12-03 11:07:32,675 [INFO] Skipping bill 2044087 - already processed (1941/2608)
2025-12-03 11:07:32,675 [INFO] Skipping bill 2042614 - already processed (1942/2608)
2025-12-03 11:07:32,676 [INFO] Skipping bill 2045155 - already processed (1943/2608)
2025-12-03 11:07:32,676 [INFO] Skipping bill 2045662 - already processed (1944/2608)
2025-12-03 11:07:32,676 [INFO] Processing 1945/2608: Bill ID 1974122
2025-12-03 11:07:35,442 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:07:35,443 [ERROR] Failed to generate report for bill 1974122: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens.
However, your messages resulted in 1009931 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:07:36,453 [INFO] Processing 1946/2608: Bill ID 1974279
2025-12-03 11:07:38,816 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:07:38,819 [ERROR] Failed to generate report for bill 1974279: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 1009921 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:07:39,829 [INFO] Processing 1947/2608: Bill ID 2055109
2025-12-03 11:08:15,774 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2025-12-03 11:08:15,791 [INFO] Skipping bill 2047792 - already processed (1948/2608)
2025-12-03 11:08:15,792 [INFO] Skipping bill 1842729 - already processed (1949/2608)
2025-12-03 11:08:15,792 [INFO] Skipping bill 1842887 - already processed (1950/2608)
2025-12-03 11:08:15,792 [INFO] Skipping bill 1939111 - already processed (1951/2608)
2025-12-03 11:08:15,792 [INFO] Skipping bill 1895001 - already processed (1952/2608)
2025-12-03 11:08:15,792 [INFO] Skipping bill 1945993 - already processed (1953/2608)
2025-12-03 11:08:15,792 [INFO] Skipping bill 1945813 - already processed (1954/2608)
2025-12-03 11:08:15,792 [INFO] Skipping bill 1774433 - already processed (1955/2608)
2025-12-03 11:08:15,792 [INFO] Skipping bill 1884990 - already processed (1956/2608)
2025-12-03 11:08:15,792 [INFO] Skipping bill 1882572 - already processed (1957/2608)
2025-12-03 11:08:15,792 [INFO] Skipping bill 1784131 - already processed (1958/2608)
2025-12-03 11:08:15,792 [INFO] Skipping bill 1873726 - already processed (1959/2608)
2025-12-03 11:08:15,792 [INFO] Skipping bill 1882205 - already processed (1960/2608)
2025-12-03 11:08:15,792 [INFO] Skipping bill 1860116 - already processed (1961/2608)
2025-12-03 11:08:15,792 [INFO] Skipping bill 1835790 - already processed (1962/2608)
2025-12-03 11:08:15,792 [INFO] Skipping bill 1835624 - already processed (1963/2608)
2025-12-03 11:08:15,792 [INFO] Skipping bill 1876647 - already processed (1964/2608)
2025-12-03 11:08:15,792 [INFO] Skipping bill 1887447 - already processed (1965/2608)
2025-12-03 11:08:15,792 [INFO] Skipping bill 1898165 - already processed (1966/2608)
2025-12-03 11:08:15,792 [INFO] Skipping bill 1780760 - already processed (1967/2608)
2025-12-03 11:08:15,792 [INFO] Skipping bill 1887744 - already processed (1968/2608)
2025-12-03 11:08:15,792 [INFO] Skipping bill 1782128 - already processed (1969/2608)
2025-12-03 11:08:15,792 [INFO] Skipping bill 1887739 - already processed (1970/2608)
2025-12-03 11:08:15,792 [INFO] Skipping bill 1885322 - already processed (1971/2608)
2025-12-03 11:08:15,792 [INFO] Skipping bill 1887646 - already processed (1972/2608)
2025-12-03 11:08:15,792 [INFO] Skipping bill 1897119 - already processed (1973/2608)
2025-12-03 11:08:15,792 [INFO] Skipping bill 1782539 - already processed (1974/2608)
2025-12-03 11:08:15,792 [INFO] Skipping bill 1880117 - already processed (1975/2608)
2025-12-03 11:08:15,792 [INFO] Skipping bill 1810734 - already processed (1976/2608)
2025-12-03 11:08:15,793 [INFO] Skipping bill 1887671 - already processed (1977/2608)
2025-12-03 11:08:15,793 [INFO] Skipping bill 1883053 - already processed (1978/2608)
2025-12-03 11:08:15,793 [INFO] Skipping bill 1861062 - already processed (1979/2608)
2025-12-03 11:08:15,793 [INFO] Skipping bill 1775461 - already processed (1980/2608)
2025-12-03 11:08:15,793 [INFO] Skipping bill 1792331 - already processed (1981/2608)
2025-12-03 11:08:15,793 [INFO] Skipping bill 1765384 - already processed (1982/2608) 2025-12-03 11:08:15,793 [INFO] Skipping bill 1863023 - already processed (1983/2608) 2025-12-03 11:08:15,793 [INFO] Skipping bill 1883034 - already processed (1984/2608) 2025-12-03 11:08:15,793 [INFO] Skipping bill 1886748 - already processed (1985/2608) 2025-12-03 11:08:15,793 [INFO] Skipping bill 1886756 - already processed (1986/2608) 2025-12-03 11:08:15,793 [INFO] Skipping bill 1885278 - already processed (1987/2608) 2025-12-03 11:08:15,793 [INFO] Skipping bill 1784087 - already processed (1988/2608) 2025-12-03 11:08:15,793 [INFO] Skipping bill 1886439 - already processed (1989/2608) 2025-12-03 11:08:15,793 [INFO] Skipping bill 1877586 - already processed (1990/2608) 2025-12-03 11:08:15,793 [INFO] Skipping bill 1888775 - already processed (1991/2608) 2025-12-03 11:08:15,793 [INFO] Skipping bill 1773844 - already processed (1992/2608) 2025-12-03 11:08:15,793 [INFO] Skipping bill 1857956 - already processed (1993/2608) 2025-12-03 11:08:15,793 [INFO] Skipping bill 1775721 - already processed (1994/2608) 2025-12-03 11:08:15,793 [INFO] Skipping bill 1861016 - already processed (1995/2608) 2025-12-03 11:08:15,793 [INFO] Skipping bill 1884504 - already processed (1996/2608) 2025-12-03 11:08:15,793 [INFO] Skipping bill 1892975 - already processed (1997/2608) 2025-12-03 11:08:15,793 [INFO] Skipping bill 1886714 - already processed (1998/2608) 2025-12-03 11:08:15,793 [INFO] Skipping bill 1877214 - already processed (1999/2608) 2025-12-03 11:08:15,793 [INFO] Skipping bill 1779520 - already processed (2000/2608) 2025-12-03 11:08:15,794 [INFO] Skipping bill 1882161 - already processed (2001/2608) 2025-12-03 11:08:15,794 [INFO] Skipping bill 1793734 - already processed (2002/2608) 2025-12-03 11:08:15,794 [INFO] Skipping bill 1885501 - already processed (2003/2608) 2025-12-03 11:08:15,794 [INFO] Skipping bill 1887169 - already processed (2004/2608) 2025-12-03 11:08:15,794 [INFO] Skipping bill 
1877680 - already processed (2005/2608) 2025-12-03 11:08:15,794 [INFO] Skipping bill 1887282 - already processed (2006/2608) 2025-12-03 11:08:15,794 [INFO] Skipping bill 1774766 - already processed (2007/2608) 2025-12-03 11:08:15,794 [INFO] Skipping bill 1774961 - already processed (2008/2608) 2025-12-03 11:08:15,794 [INFO] Skipping bill 1866654 - already processed (2009/2608) 2025-12-03 11:08:15,794 [INFO] Skipping bill 1779127 - already processed (2010/2608) 2025-12-03 11:08:15,794 [INFO] Skipping bill 1882224 - already processed (2011/2608) 2025-12-03 11:08:15,794 [INFO] Skipping bill 1892198 - already processed (2012/2608) 2025-12-03 11:08:15,794 [INFO] Skipping bill 1759862 - already processed (2013/2608) 2025-12-03 11:08:15,794 [INFO] Skipping bill 1888377 - already processed (2014/2608) 2025-12-03 11:08:15,794 [INFO] Skipping bill 1894701 - already processed (2015/2608) 2025-12-03 11:08:15,794 [INFO] Skipping bill 1864751 - already processed (2016/2608) 2025-12-03 11:08:15,794 [INFO] Skipping bill 1772453 - already processed (2017/2608) 2025-12-03 11:08:15,794 [INFO] Skipping bill 1885309 - already processed (2018/2608) 2025-12-03 11:08:15,794 [INFO] Skipping bill 1886447 - already processed (2019/2608) 2025-12-03 11:08:15,794 [INFO] Skipping bill 1848736 - already processed (2020/2608) 2025-12-03 11:08:15,794 [INFO] Skipping bill 1884301 - already processed (2021/2608) 2025-12-03 11:08:15,794 [INFO] Skipping bill 1881976 - already processed (2022/2608) 2025-12-03 11:08:15,794 [INFO] Skipping bill 1885426 - already processed (2023/2608) 2025-12-03 11:08:15,794 [INFO] Skipping bill 1775334 - already processed (2024/2608) 2025-12-03 11:08:15,794 [INFO] Skipping bill 1884442 - already processed (2025/2608) 2025-12-03 11:08:15,794 [INFO] Skipping bill 1881980 - already processed (2026/2608) 2025-12-03 11:08:15,794 [INFO] Skipping bill 1893238 - already processed (2027/2608) 2025-12-03 11:08:15,794 [INFO] Skipping bill 1865594 - already processed (2028/2608) 
2025-12-03 11:08:15,795 [INFO] Skipping bill 1872732 - already processed (2029/2608) 2025-12-03 11:08:15,795 [INFO] Skipping bill 1885341 - already processed (2030/2608) 2025-12-03 11:08:15,795 [INFO] Skipping bill 1764018 - already processed (2031/2608) 2025-12-03 11:08:15,795 [INFO] Skipping bill 1887315 - already processed (2032/2608) 2025-12-03 11:08:15,795 [INFO] Skipping bill 1751404 - already processed (2033/2608) 2025-12-03 11:08:15,795 [INFO] Skipping bill 1888249 - already processed (2034/2608) 2025-12-03 11:08:15,795 [INFO] Skipping bill 1885249 - already processed (2035/2608) 2025-12-03 11:08:15,795 [INFO] Skipping bill 1881398 - already processed (2036/2608) 2025-12-03 11:08:15,795 [INFO] Skipping bill 1866637 - already processed (2037/2608) 2025-12-03 11:08:15,795 [INFO] Skipping bill 1770194 - already processed (2038/2608) 2025-12-03 11:08:15,795 [INFO] Skipping bill 1775580 - already processed (2039/2608) 2025-12-03 11:08:15,795 [INFO] Skipping bill 1784705 - already processed (2040/2608) 2025-12-03 11:08:15,795 [INFO] Skipping bill 1831382 - already processed (2041/2608) 2025-12-03 11:08:15,795 [INFO] Skipping bill 1885274 - already processed (2042/2608) 2025-12-03 11:08:15,795 [INFO] Skipping bill 1892393 - already processed (2043/2608) 2025-12-03 11:08:15,795 [INFO] Skipping bill 1877691 - already processed (2044/2608) 2025-12-03 11:08:15,795 [INFO] Skipping bill 1776083 - already processed (2045/2608) 2025-12-03 11:08:15,795 [INFO] Skipping bill 1760978 - already processed (2046/2608) 2025-12-03 11:08:15,795 [INFO] Skipping bill 1764682 - already processed (2047/2608) 2025-12-03 11:08:15,795 [INFO] Skipping bill 1880344 - already processed (2048/2608) 2025-12-03 11:08:15,795 [INFO] Skipping bill 1886698 - already processed (2049/2608) 2025-12-03 11:08:15,795 [INFO] Skipping bill 1876488 - already processed (2050/2608) 2025-12-03 11:08:15,795 [INFO] Skipping bill 1765330 - already processed (2051/2608) 2025-12-03 11:08:15,795 [INFO] Skipping bill 
1887359 - already processed (2052/2608) 2025-12-03 11:08:15,795 [INFO] Skipping bill 1771744 - already processed (2053/2608) 2025-12-03 11:08:15,795 [INFO] Skipping bill 1831359 - already processed (2054/2608) 2025-12-03 11:08:15,795 [INFO] Skipping bill 1774102 - already processed (2055/2608) 2025-12-03 11:08:15,795 [INFO] Skipping bill 1774479 - already processed (2056/2608) 2025-12-03 11:08:15,796 [INFO] Skipping bill 1794846 - already processed (2057/2608) 2025-12-03 11:08:15,796 [INFO] Skipping bill 1894867 - already processed (2058/2608) 2025-12-03 11:08:15,796 [INFO] Skipping bill 1774859 - already processed (2059/2608) 2025-12-03 11:08:15,796 [INFO] Skipping bill 1884522 - already processed (2060/2608) 2025-12-03 11:08:15,796 [INFO] Skipping bill 1866979 - already processed (2061/2608) 2025-12-03 11:08:15,796 [INFO] Skipping bill 1886705 - already processed (2062/2608) 2025-12-03 11:08:15,796 [INFO] Skipping bill 1898170 - already processed (2063/2608) 2025-12-03 11:08:15,796 [INFO] Skipping bill 1885330 - already processed (2064/2608) 2025-12-03 11:08:15,796 [INFO] Skipping bill 1792286 - already processed (2065/2608) 2025-12-03 11:08:15,796 [INFO] Skipping bill 1892877 - already processed (2066/2608) 2025-12-03 11:08:15,796 [INFO] Skipping bill 1884177 - already processed (2067/2608) 2025-12-03 11:08:15,796 [INFO] Skipping bill 1774713 - already processed (2068/2608) 2025-12-03 11:08:15,796 [INFO] Skipping bill 1774626 - already processed (2069/2608) 2025-12-03 11:08:15,796 [INFO] Skipping bill 1884513 - already processed (2070/2608) 2025-12-03 11:08:15,796 [INFO] Skipping bill 1887362 - already processed (2071/2608) 2025-12-03 11:08:15,796 [INFO] Skipping bill 1893236 - already processed (2072/2608) 2025-12-03 11:08:15,796 [INFO] Skipping bill 1883668 - already processed (2073/2608) 2025-12-03 11:08:15,796 [INFO] Skipping bill 1831371 - already processed (2074/2608) 2025-12-03 11:08:15,796 [INFO] Skipping bill 1885671 - already processed (2075/2608) 
2025-12-03 11:08:15,796 [INFO] Skipping bill 1885535 - already processed (2076/2608) 2025-12-03 11:08:15,796 [INFO] Skipping bill 1888766 - already processed (2077/2608) 2025-12-03 11:08:15,796 [INFO] Skipping bill 1892506 - already processed (2078/2608) 2025-12-03 11:08:15,796 [INFO] Skipping bill 1892532 - already processed (2079/2608) 2025-12-03 11:08:15,797 [INFO] Skipping bill 1878820 - already processed (2080/2608) 2025-12-03 11:08:15,797 [INFO] Skipping bill 1884926 - already processed (2081/2608) 2025-12-03 11:08:15,797 [INFO] Skipping bill 1895881 - already processed (2082/2608) 2025-12-03 11:08:15,797 [INFO] Skipping bill 1778284 - already processed (2083/2608) 2025-12-03 11:08:15,797 [INFO] Skipping bill 1770920 - already processed (2084/2608) 2025-12-03 11:08:15,797 [INFO] Skipping bill 1650801 - already processed (2085/2608) 2025-12-03 11:08:15,797 [INFO] Skipping bill 1883378 - already processed (2086/2608) 2025-12-03 11:08:15,797 [INFO] Skipping bill 1683970 - already processed (2087/2608) 2025-12-03 11:08:15,797 [INFO] Skipping bill 1772792 - already processed (2088/2608) 2025-12-03 11:08:15,797 [INFO] Skipping bill 1759623 - already processed (2089/2608) 2025-12-03 11:08:15,797 [INFO] Skipping bill 1760525 - already processed (2090/2608) 2025-12-03 11:08:15,797 [INFO] Skipping bill 1862531 - already processed (2091/2608) 2025-12-03 11:08:15,797 [INFO] Skipping bill 1767461 - already processed (2092/2608) 2025-12-03 11:08:15,797 [INFO] Skipping bill 1776485 - already processed (2093/2608) 2025-12-03 11:08:15,797 [INFO] Skipping bill 1871231 - already processed (2094/2608) 2025-12-03 11:08:15,797 [INFO] Skipping bill 1887711 - already processed (2095/2608) 2025-12-03 11:08:15,797 [INFO] Skipping bill 1893243 - already processed (2096/2608) 2025-12-03 11:08:15,797 [INFO] Skipping bill 1701254 - already processed (2097/2608) 2025-12-03 11:08:15,797 [INFO] Skipping bill 1897456 - already processed (2098/2608) 2025-12-03 11:08:15,797 [INFO] Skipping bill 
1775615 - already processed (2099/2608) 2025-12-03 11:08:15,797 [INFO] Skipping bill 1794843 - already processed (2100/2608) 2025-12-03 11:08:15,797 [INFO] Skipping bill 1810720 - already processed (2101/2608) 2025-12-03 11:08:15,797 [INFO] Skipping bill 1894308 - already processed (2102/2608) 2025-12-03 11:08:15,797 [INFO] Skipping bill 1894683 - already processed (2103/2608) 2025-12-03 11:08:15,797 [INFO] Skipping bill 1842456 - already processed (2104/2608) 2025-12-03 11:08:15,797 [INFO] Skipping bill 1885281 - already processed (2105/2608) 2025-12-03 11:08:15,797 [INFO] Skipping bill 1759897 - already processed (2106/2608) 2025-12-03 11:08:15,797 [INFO] Skipping bill 1860079 - already processed (2107/2608) 2025-12-03 11:08:15,797 [INFO] Skipping bill 1746098 - already processed (2108/2608) 2025-12-03 11:08:15,798 [INFO] Skipping bill 1897489 - already processed (2109/2608) 2025-12-03 11:08:15,798 [INFO] Skipping bill 1887287 - already processed (2110/2608) 2025-12-03 11:08:15,798 [INFO] Skipping bill 1885252 - already processed (2111/2608) 2025-12-03 11:08:15,798 [INFO] Skipping bill 1892936 - already processed (2112/2608) 2025-12-03 11:08:15,798 [INFO] Skipping bill 1732925 - already processed (2113/2608) 2025-12-03 11:08:15,798 [INFO] Skipping bill 1746069 - already processed (2114/2608) 2025-12-03 11:08:15,798 [INFO] Skipping bill 1774408 - already processed (2115/2608) 2025-12-03 11:08:15,798 [INFO] Skipping bill 1772182 - already processed (2116/2608) 2025-12-03 11:08:15,798 [INFO] Skipping bill 1884422 - already processed (2117/2608) 2025-12-03 11:08:15,798 [INFO] Skipping bill 1687118 - already processed (2118/2608) 2025-12-03 11:08:15,798 [INFO] Skipping bill 1784726 - already processed (2119/2608) 2025-12-03 11:08:15,798 [INFO] Skipping bill 1762912 - already processed (2120/2608) 2025-12-03 11:08:15,798 [INFO] Skipping bill 1898405 - already processed (2121/2608) 2025-12-03 11:08:15,798 [INFO] Processing 2122/2608: Bill ID 1884189 2025-12-03 
11:08:17,303 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:08:17,305 [ERROR] Failed to generate report for bill 1884189: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 553725 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 553725 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:08:18,317 [INFO] Skipping bill 1899847 - already processed (2123/2608)
2025-12-03 11:08:18,317 [INFO] Skipping bill 1732984 - already processed (2124/2608)
2025-12-03 11:08:18,317 [INFO] Skipping bill 1746089 - already processed (2125/2608)
2025-12-03 11:08:18,318 [INFO] Skipping bill 1766726 - already processed (2126/2608)
2025-12-03 11:08:18,318 [INFO] Skipping bill 1769804 - already processed (2127/2608)
2025-12-03 11:08:18,318 [INFO] Skipping bill 1897097 - already processed (2128/2608)
2025-12-03 11:08:18,318 [INFO] Processing 2129/2608: Bill ID 1774177
2025-12-03 11:08:19,718 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:08:19,720 [ERROR] Failed to generate report for bill 1774177: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 563143 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
        [self._convert_input(input)],
        ...<6 lines>...
        **kwargs,
    ).generations[0][0],
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
        m,
        ...<2 lines>...
        **kwargs,
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
        "/chat/completions",
        ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 563143 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:08:20,726 [INFO] Skipping bill 1757049 - already processed (2130/2608)
2025-12-03 11:08:20,727 [INFO] Skipping bill 1784298 - already processed (2131/2608)
2025-12-03 11:08:20,727 [INFO] Skipping bill 1785108 - already processed (2132/2608)
2025-12-03 11:08:20,727 [INFO] Skipping bill 1772128 - already processed (2133/2608)
2025-12-03 11:08:20,728 [INFO] Skipping bill 1879910 - already processed (2134/2608)
2025-12-03 11:08:20,728 [INFO] Skipping bill 1777717 - already processed (2135/2608)
2025-12-03 11:08:20,728 [INFO] Skipping bill 1843401 - already processed (2136/2608)
2025-12-03 11:08:20,728 [INFO] Skipping bill 1774203 - already processed (2137/2608)
2025-12-03 11:08:20,728 [INFO] Skipping bill 1892268 - already processed (2138/2608)
2025-12-03 11:08:20,728 [INFO] Skipping bill 1774216 - already processed (2139/2608)
2025-12-03 11:08:20,728 [INFO] Skipping bill 1868870 - already processed (2140/2608)
2025-12-03 11:08:20,729 [INFO] Skipping bill 1770792 - already processed (2141/2608)
2025-12-03 11:08:20,729 [INFO] Skipping bill
1894823 - already processed (2142/2608) 2025-12-03 11:08:20,729 [INFO] Skipping bill 1885629 - already processed (2143/2608) 2025-12-03 11:08:20,729 [INFO] Skipping bill 1866980 - already processed (2144/2608) 2025-12-03 11:08:20,729 [INFO] Skipping bill 1826236 - already processed (2145/2608) 2025-12-03 11:08:20,729 [INFO] Skipping bill 1860115 - already processed (2146/2608) 2025-12-03 11:08:20,729 [INFO] Skipping bill 1767424 - already processed (2147/2608) 2025-12-03 11:08:20,729 [INFO] Skipping bill 1877069 - already processed (2148/2608) 2025-12-03 11:08:20,729 [INFO] Skipping bill 1865576 - already processed (2149/2608) 2025-12-03 11:08:20,729 [INFO] Skipping bill 1771076 - already processed (2150/2608) 2025-12-03 11:08:20,729 [INFO] Skipping bill 1755580 - already processed (2151/2608) 2025-12-03 11:08:20,729 [INFO] Skipping bill 1885029 - already processed (2152/2608) 2025-12-03 11:08:20,729 [INFO] Skipping bill 1770955 - already processed (2153/2608) 2025-12-03 11:08:20,729 [INFO] Skipping bill 1772617 - already processed (2154/2608) 2025-12-03 11:08:20,729 [INFO] Skipping bill 1760193 - already processed (2155/2608) 2025-12-03 11:08:20,730 [INFO] Skipping bill 1871212 - already processed (2156/2608) 2025-12-03 11:08:20,730 [INFO] Skipping bill 1887934 - already processed (2157/2608) 2025-12-03 11:08:20,730 [INFO] Skipping bill 1879177 - already processed (2158/2608) 2025-12-03 11:08:20,730 [INFO] Skipping bill 1897536 - already processed (2159/2608) 2025-12-03 11:08:20,730 [INFO] Skipping bill 1854133 - already processed (2160/2608) 2025-12-03 11:08:20,730 [INFO] Skipping bill 1761508 - already processed (2161/2608) 2025-12-03 11:08:20,731 [INFO] Skipping bill 1777284 - already processed (2162/2608) 2025-12-03 11:08:20,731 [INFO] Skipping bill 1774079 - already processed (2163/2608) 2025-12-03 11:08:20,731 [INFO] Skipping bill 1896271 - already processed (2164/2608) 2025-12-03 11:08:20,731 [INFO] Skipping bill 1897312 - already processed (2165/2608) 
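The two failures above (bills 1884189 and 1774177) are the same 400: the serialized bill alone produces a prompt of 553,725 and 563,143 tokens against a 128,000-token context window, so the request is rejected before any generation happens. One guard is a pre-flight size check on `bill_json` before `chain.invoke` is called. The sketch below uses a rough 4-characters-per-token estimate instead of a real tokenizer (a library such as tiktoken would give exact counts), and the budget and helper names are assumptions, not the actual generate_reports.py:

```python
MAX_CONTEXT_TOKENS = 128_000
CHARS_PER_TOKEN = 4  # crude heuristic for English text; a real tokenizer is more accurate

def estimate_tokens(text: str) -> int:
    """Approximate the token count of a string from its character length."""
    return len(text) // CHARS_PER_TOKEN

def fit_to_context(bill_json: str, budget: int = MAX_CONTEXT_TOKENS // 2) -> str:
    """Truncate the serialized bill so the prompt stays under the context limit.

    Reserving half the window (the default budget) leaves room for the
    prompt template and the model's reply.
    """
    if estimate_tokens(bill_json) <= budget:
        return bill_json
    return bill_json[: budget * CHARS_PER_TOKEN]
```

For payloads the size of the failing ones (~550k estimated tokens, roughly 2 MB of JSON), blind truncation will cut mid-field; dropping the bulkiest low-value fields from the bill dict before serializing, or summarizing the bill text in chunks, is usually the better first step. Either way, an oversized bill should be caught before the API call rather than left to fail and be retried on every run.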
2025-12-03 11:08:20,731 [INFO] Skipping bill 1774750 - already processed (2166/2608) 2025-12-03 11:08:20,731 [INFO] Skipping bill 1873661 - already processed (2167/2608) 2025-12-03 11:08:20,731 [INFO] Skipping bill 1782516 - already processed (2168/2608) 2025-12-03 11:08:20,731 [INFO] Skipping bill 1782446 - already processed (2169/2608) 2025-12-03 11:08:20,731 [INFO] Skipping bill 1866649 - already processed (2170/2608) 2025-12-03 11:08:20,731 [INFO] Skipping bill 1866664 - already processed (2171/2608) 2025-12-03 11:08:20,731 [INFO] Skipping bill 1707867 - already processed (2172/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1872167 - already processed (2173/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1759875 - already processed (2174/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1789214 - already processed (2175/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1872153 - already processed (2176/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1760229 - already processed (2177/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1774942 - already processed (2178/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1694059 - already processed (2179/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1829219 - already processed (2180/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1679271 - already processed (2181/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1883365 - already processed (2182/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1780777 - already processed (2183/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1707919 - already processed (2184/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1860113 - already processed (2185/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1781933 - already processed (2186/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1751388 - already processed (2187/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1754500 - already processed (2188/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 
1772123 - already processed (2189/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1892924 - already processed (2190/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1778422 - already processed (2191/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1897294 - already processed (2192/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1769557 - already processed (2193/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1747003 - already processed (2194/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1775420 - already processed (2195/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1885460 - already processed (2196/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1778494 - already processed (2197/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1778507 - already processed (2198/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1746072 - already processed (2199/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1747808 - already processed (2200/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1764055 - already processed (2201/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1765960 - already processed (2202/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1766587 - already processed (2203/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1766736 - already processed (2204/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1771518 - already processed (2205/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1772577 - already processed (2206/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1772933 - already processed (2207/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1773303 - already processed (2208/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1775354 - already processed (2209/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1777649 - already processed (2210/2608) 2025-12-03 11:08:20,732 [INFO] Skipping bill 1783786 - already processed (2211/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1783927 - already processed (2212/2608) 
2025-12-03 11:08:20,733 [INFO] Skipping bill 1791735 - already processed (2213/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1791984 - already processed (2214/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1860914 - already processed (2215/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1874964 - already processed (2216/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1876702 - already processed (2217/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1878298 - already processed (2218/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1878970 - already processed (2219/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1878883 - already processed (2220/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1880262 - already processed (2221/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1880301 - already processed (2222/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1880312 - already processed (2223/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1882770 - already processed (2224/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1889897 - already processed (2225/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1892711 - already processed (2226/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1897258 - already processed (2227/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1881528 - already processed (2228/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1782893 - already processed (2229/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1834554 - already processed (2230/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1774082 - already processed (2231/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1783631 - already processed (2232/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1879351 - already processed (2233/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1707921 - already processed (2234/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1872751 - already processed (2235/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 
1848738 - already processed (2236/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1882577 - already processed (2237/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1880072 - already processed (2238/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1880345 - already processed (2239/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1892804 - already processed (2240/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1860940 - already processed (2241/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1766003 - already processed (2242/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1775441 - already processed (2243/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1758619 - already processed (2244/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1894461 - already processed (2245/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1778171 - already processed (2246/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1778004 - already processed (2247/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1832839 - already processed (2248/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1774844 - already processed (2249/2608) 2025-12-03 11:08:20,733 [INFO] Skipping bill 1751449 - already processed (2250/2608) 2025-12-03 11:08:20,734 [INFO] Skipping bill 1751346 - already processed (2251/2608) 2025-12-03 11:08:20,734 [INFO] Skipping bill 1759080 - already processed (2252/2608) 2025-12-03 11:08:20,734 [INFO] Skipping bill 1882756 - already processed (2253/2608) 2025-12-03 11:08:20,734 [INFO] Skipping bill 1882766 - already processed (2254/2608) 2025-12-03 11:08:20,734 [INFO] Skipping bill 1887196 - already processed (2255/2608) 2025-12-03 11:08:20,734 [INFO] Skipping bill 1889949 - already processed (2256/2608) 2025-12-03 11:08:20,734 [INFO] Skipping bill 1887718 - already processed (2257/2608) 2025-12-03 11:08:20,734 [INFO] Skipping bill 1896232 - already processed (2258/2608) 2025-12-03 11:08:20,734 [INFO] Skipping bill 1783562 - already processed (2259/2608) 
2025-12-03 11:08:20,734 [INFO] Skipping bill 1681772 - already processed (2260/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1871711 - already processed (2261/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1874986 - already processed (2262/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1772204 - already processed (2263/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1884912 - already processed (2264/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1888175 - already processed (2265/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1832721 - already processed (2266/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1887649 - already processed (2267/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1887704 - already processed (2268/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1881672 - already processed (2269/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1777454 - already processed (2270/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1882397 - already processed (2271/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1766671 - already processed (2272/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1775036 - already processed (2273/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1694305 - already processed (2274/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1863407 - already processed (2275/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1746051 - already processed (2276/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1882537 - already processed (2277/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1873551 - already processed (2278/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1762960 - already processed (2279/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1887303 - already processed (2280/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1887118 - already processed (2281/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1775679 - already processed (2282/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1882373 - already processed (2283/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1862520 - already processed (2284/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1886817 - already processed (2285/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1750558 - already processed (2286/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1750336 - already processed (2287/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1694173 - already processed (2288/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1864746 - already processed (2289/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1887915 - already processed (2290/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1774093 - already processed (2291/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1650659 - already processed (2292/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1694050 - already processed (2293/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1771092 - already processed (2294/2608)
2025-12-03 11:08:20,734 [INFO] Skipping bill 1876599 - already processed (2295/2608)
2025-12-03 11:08:20,735 [INFO] Skipping bill 1835788 - already processed (2296/2608)
2025-12-03 11:08:20,735 [INFO] Skipping bill 1782691 - already processed (2297/2608)
2025-12-03 11:08:20,735 [INFO] Skipping bill 1876668 - already processed (2298/2608)
2025-12-03 11:08:20,735 [INFO] Skipping bill 1729737 - already processed (2299/2608)
2025-12-03 11:08:20,735 [INFO] Skipping bill 1766627 - already processed (2300/2608)
2025-12-03 11:08:20,735 [INFO] Skipping bill 1885388 - already processed (2301/2608)
2025-12-03 11:08:20,735 [INFO] Skipping bill 1887130 - already processed (2302/2608)
2025-12-03 11:08:20,735 [INFO] Skipping bill 1775597 - already processed (2303/2608)
2025-12-03 11:08:20,735 [INFO] Skipping bill 1793999 - already processed (2304/2608)
2025-12-03 11:08:20,735 [INFO] Skipping bill 1789198 - already processed (2305/2608)
2025-12-03 11:08:20,735 [INFO] Skipping bill 1888330 - already processed (2306/2608)
2025-12-03 11:08:20,735 [INFO] Skipping bill 1882746 - already processed (2307/2608)
2025-12-03 11:08:20,735 [INFO] Skipping bill 1694182 - already processed (2308/2608)
2025-12-03 11:08:20,735 [INFO] Skipping bill 1860920 - already processed (2309/2608)
2025-12-03 11:08:20,735 [INFO] Skipping bill 1774448 - already processed (2310/2608)
2025-12-03 11:08:20,735 [INFO] Skipping bill 1774405 - already processed (2311/2608)
2025-12-03 11:08:20,735 [INFO] Skipping bill 1876990 - already processed (2312/2608)
2025-12-03 11:08:20,735 [INFO] Skipping bill 1876679 - already processed (2313/2608)
2025-12-03 11:08:20,735 [INFO] Skipping bill 1881973 - already processed (2314/2608)
2025-12-03 11:08:20,735 [INFO] Skipping bill 1717622 - already processed (2315/2608)
2025-12-03 11:08:20,735 [INFO] Skipping bill 1885510 - already processed (2316/2608)
2025-12-03 11:08:20,735 [INFO] Skipping bill 1871269 - already processed (2317/2608)
2025-12-03 11:08:20,735 [INFO] Skipping bill 1774266 - already processed (2318/2608)
2025-12-03 11:08:20,735 [INFO] Skipping bill 1785924 - already processed (2319/2608)
2025-12-03 11:08:20,735 [INFO] Skipping bill 1779428 - already processed (2320/2608)
2025-12-03 11:08:20,735 [INFO] Skipping bill 1775195 - already processed (2321/2608)
2025-12-03 11:08:20,735 [INFO] Skipping bill 1775134 - already processed (2322/2608)
2025-12-03 11:08:20,735 [INFO] Skipping bill 1743524 - already processed (2323/2608)
2025-12-03 11:08:20,735 [INFO] Skipping bill 1757473 - already processed (2324/2608)
2025-12-03 11:08:20,735 [INFO] Processing 2325/2608: Bill ID 1857970
2025-12-03 11:08:21,419 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:08:21,420 [ERROR] Failed to generate report for bill 1857970: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 267230 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
           ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 267230 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:08:22,431 [INFO] Skipping bill 1883678 - already processed (2326/2608)
2025-12-03 11:08:22,432 [INFO] Processing 2327/2608: Bill ID 1897245
2025-12-03 11:08:28,794 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:08:28,796 [ERROR] Failed to generate report for bill 1897245: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 614802 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:08:29,806 [INFO] Skipping bill 1894517 - already processed (2328/2608)
2025-12-03 11:08:29,806 [INFO] Processing 2329/2608: Bill ID 1898241
2025-12-03 11:08:30,840 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:08:30,841 [ERROR] Failed to generate report for bill 1898241: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 355244 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:08:31,851 [INFO] Processing 2330/2608: Bill ID 1879854
2025-12-03 11:08:32,888 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:08:32,889 [ERROR] Failed to generate report for bill 1879854: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 380288 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:08:32,939 [INFO] Saved 2606 reports to data/bill_reports.json
2025-12-03 11:08:32,940 [INFO] Progress: 2330/2608 - Processed: 1, Skipped: 2225, Errors: 104
2025-12-03 11:08:33,943 [INFO] Skipping bill 1888278 - already processed (2331/2608)
2025-12-03 11:08:33,943 [INFO] Skipping bill 1879169 - already processed (2332/2608)
2025-12-03 11:08:33,943 [INFO] Skipping bill 1860989 - already processed (2333/2608)
2025-12-03 11:08:33,943 [INFO] Skipping bill 1758024 - already processed (2334/2608)
2025-12-03 11:08:33,943 [INFO] Skipping bill 1863932 - already processed (2335/2608)
2025-12-03 11:08:33,943 [INFO] Processing 2336/2608: Bill ID 1771174
2025-12-03 11:08:34,808 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:08:34,809 [ERROR] Failed to generate report for bill 1771174: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 305590 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:08:35,818 [INFO] Skipping bill 1772600 - already processed (2337/2608)
2025-12-03 11:08:35,819 [INFO] Skipping bill 1760911 - already processed (2338/2608)
2025-12-03 11:08:35,819 [INFO] Skipping bill 1789291 - already processed (2339/2608)
2025-12-03 11:08:35,819 [INFO] Skipping bill 1764694 - already processed (2340/2608)
2025-12-03 11:08:35,819 [INFO] Skipping bill 1764770 - already processed (2341/2608)
2025-12-03 11:08:35,819 [INFO] Skipping bill 1884949 - already processed (2342/2608)
2025-12-03 11:08:35,819 [INFO] Processing 2343/2608: Bill ID 1897528
2025-12-03 11:08:36,373 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:08:36,374 [ERROR] Failed to generate report for bill 1897528: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 136190 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:08:37,382 [INFO] Processing 2344/2608: Bill ID 1898192
2025-12-03 11:08:37,857 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:08:37,859 [ERROR] Failed to generate report for bill 1898192: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 134736 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 134736 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-03 11:08:38,868 [INFO] Skipping bill 1774988 - already processed (2345/2608) 2025-12-03 11:08:38,868 [INFO] Processing 2346/2608: Bill ID 1892419 2025-12-03 11:08:40,466 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-03 11:08:40,467 [ERROR] Failed to generate report for bill 1892419: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 553296 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 553296 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-03 11:08:41,475 [INFO] Processing 2347/2608: Bill ID 1884946 2025-12-03 11:08:43,026 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-03 11:08:43,028 [ERROR] Failed to generate report for bill 1884946: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 691025 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 691025 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-03 11:08:44,039 [INFO] Processing 2348/2608: Bill ID 1885067 2025-12-03 11:08:45,687 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-03 11:08:45,688 [ERROR] Failed to generate report for bill 1885067: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 693396 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 693396 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-03 11:08:46,698 [INFO] Skipping bill 1879669 - already processed (2349/2608) 2025-12-03 11:08:46,698 [INFO] Processing 2350/2608: Bill ID 1897089 2025-12-03 11:08:47,325 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-03 11:08:47,326 [ERROR] Failed to generate report for bill 1897089: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 228560 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 228560 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-03 11:08:47,371 [INFO] Saved 2606 reports to data/bill_reports.json 2025-12-03 11:08:47,371 [INFO] Progress: 2350/2608 - Processed: 1, Skipped: 2238, Errors: 111 2025-12-03 11:08:48,376 [INFO] Skipping bill 2041135 - already processed (2351/2608) 2025-12-03 11:08:48,377 [INFO] Skipping bill 2037217 - already processed (2352/2608) 2025-12-03 11:08:48,377 [INFO] Skipping bill 2022578 - already processed (2353/2608) 2025-12-03 11:08:48,377 [INFO] Skipping bill 2045360 - already processed (2354/2608) 2025-12-03 11:08:48,377 [INFO] Skipping bill 2044380 - already processed (2355/2608) 2025-12-03 11:08:48,377 [INFO] Skipping bill 1987991 - already processed (2356/2608) 2025-12-03 11:08:48,377 [INFO] Skipping bill 2040591 - already processed (2357/2608) 2025-12-03 11:08:48,378 [INFO] Skipping bill 2044133 - already processed (2358/2608) 2025-12-03 11:08:48,379 [INFO] Skipping bill 2040128 - already processed (2359/2608) 2025-12-03 11:08:48,379 [INFO] Skipping bill 2022459 - already processed (2360/2608) 2025-12-03 11:08:48,379 [INFO] Skipping bill 
2046890 - already processed (2361/2608) 2025-12-03 11:08:48,379 [INFO] Skipping bill 1948171 - already processed (2362/2608) 2025-12-03 11:08:48,379 [INFO] Skipping bill 2047758 - already processed (2363/2608) 2025-12-03 11:08:48,379 [INFO] Skipping bill 2029224 - already processed (2364/2608) 2025-12-03 11:08:48,379 [INFO] Skipping bill 2044676 - already processed (2365/2608) 2025-12-03 11:08:48,380 [INFO] Skipping bill 2041169 - already processed (2366/2608) 2025-12-03 11:08:48,380 [INFO] Skipping bill 2043072 - already processed (2367/2608) 2025-12-03 11:08:48,380 [INFO] Skipping bill 2015628 - already processed (2368/2608) 2025-12-03 11:08:48,380 [INFO] Skipping bill 2029917 - already processed (2369/2608) 2025-12-03 11:08:48,380 [INFO] Skipping bill 2029601 - already processed (2370/2608) 2025-12-03 11:08:48,380 [INFO] Skipping bill 1988067 - already processed (2371/2608) 2025-12-03 11:08:48,380 [INFO] Skipping bill 1964814 - already processed (2372/2608) 2025-12-03 11:08:48,380 [INFO] Skipping bill 2043727 - already processed (2373/2608) 2025-12-03 11:08:48,380 [INFO] Skipping bill 1988016 - already processed (2374/2608) 2025-12-03 11:08:48,380 [INFO] Skipping bill 2037684 - already processed (2375/2608) 2025-12-03 11:08:48,380 [INFO] Skipping bill 2029576 - already processed (2376/2608) 2025-12-03 11:08:48,380 [INFO] Skipping bill 2008640 - already processed (2377/2608) 2025-12-03 11:08:48,380 [INFO] Skipping bill 2042761 - already processed (2378/2608) 2025-12-03 11:08:48,380 [INFO] Skipping bill 2043628 - already processed (2379/2608) 2025-12-03 11:08:48,381 [INFO] Skipping bill 2039925 - already processed (2380/2608) 2025-12-03 11:08:48,381 [INFO] Skipping bill 1990438 - already processed (2381/2608) 2025-12-03 11:08:48,381 [INFO] Skipping bill 2014950 - already processed (2382/2608) 2025-12-03 11:08:48,381 [INFO] Skipping bill 2046871 - already processed (2383/2608) 2025-12-03 11:08:48,381 [INFO] Skipping bill 2008541 - already processed (2384/2608) 
2025-12-03 11:08:48,381 [INFO] Skipping bill 2019807 - already processed (2385/2608) 2025-12-03 11:08:48,381 [INFO] Skipping bill 2032195 - already processed (2386/2608) 2025-12-03 11:08:48,381 [INFO] Skipping bill 2032174 - already processed (2387/2608) 2025-12-03 11:08:48,382 [INFO] Skipping bill 2053144 - already processed (2388/2608) 2025-12-03 11:08:48,382 [INFO] Skipping bill 2045181 - already processed (2389/2608) 2025-12-03 11:08:48,382 [INFO] Skipping bill 2035367 - already processed (2390/2608) 2025-12-03 11:08:48,382 [INFO] Skipping bill 2022504 - already processed (2391/2608) 2025-12-03 11:08:48,382 [INFO] Skipping bill 2051717 - already processed (2392/2608) 2025-12-03 11:08:48,382 [INFO] Skipping bill 2040216 - already processed (2393/2608) 2025-12-03 11:08:48,382 [INFO] Skipping bill 2038243 - already processed (2394/2608) 2025-12-03 11:08:48,382 [INFO] Skipping bill 2038240 - already processed (2395/2608) 2025-12-03 11:08:48,382 [INFO] Skipping bill 1958579 - already processed (2396/2608) 2025-12-03 11:08:48,382 [INFO] Skipping bill 2041151 - already processed (2397/2608) 2025-12-03 11:08:48,382 [INFO] Skipping bill 2040068 - already processed (2398/2608) 2025-12-03 11:08:48,382 [INFO] Skipping bill 2051901 - already processed (2399/2608) 2025-12-03 11:08:48,383 [INFO] Skipping bill 2035878 - already processed (2400/2608) 2025-12-03 11:08:48,383 [INFO] Skipping bill 2043698 - already processed (2401/2608) 2025-12-03 11:08:48,383 [INFO] Skipping bill 2043764 - already processed (2402/2608) 2025-12-03 11:08:48,383 [INFO] Skipping bill 2047702 - already processed (2403/2608) 2025-12-03 11:08:48,383 [INFO] Skipping bill 2034541 - already processed (2404/2608) 2025-12-03 11:08:48,383 [INFO] Skipping bill 2036108 - already processed (2405/2608) 2025-12-03 11:08:48,383 [INFO] Skipping bill 2052002 - already processed (2406/2608) 2025-12-03 11:08:48,383 [INFO] Skipping bill 2036914 - already processed (2407/2608) 2025-12-03 11:08:48,383 [INFO] Skipping bill 
2032053 - already processed (2408/2608) 2025-12-03 11:08:48,383 [INFO] Skipping bill 2032068 - already processed (2409/2608) 2025-12-03 11:08:48,383 [INFO] Skipping bill 2045357 - already processed (2410/2608) 2025-12-03 11:08:48,383 [INFO] Skipping bill 2043047 - already processed (2411/2608) 2025-12-03 11:08:48,383 [INFO] Skipping bill 2040306 - already processed (2412/2608) 2025-12-03 11:08:48,383 [INFO] Skipping bill 1916986 - already processed (2413/2608) 2025-12-03 11:08:48,383 [INFO] Skipping bill 2039821 - already processed (2414/2608) 2025-12-03 11:08:48,383 [INFO] Skipping bill 2047752 - already processed (2415/2608) 2025-12-03 11:08:48,383 [INFO] Skipping bill 2046891 - already processed (2416/2608) 2025-12-03 11:08:48,383 [INFO] Skipping bill 2040880 - already processed (2417/2608) 2025-12-03 11:08:48,383 [INFO] Skipping bill 2040851 - already processed (2418/2608) 2025-12-03 11:08:48,384 [INFO] Skipping bill 2043722 - already processed (2419/2608) 2025-12-03 11:08:48,384 [INFO] Skipping bill 1987950 - already processed (2420/2608) 2025-12-03 11:08:48,384 [INFO] Skipping bill 2040439 - already processed (2421/2608) 2025-12-03 11:08:48,384 [INFO] Skipping bill 1901865 - already processed (2422/2608) 2025-12-03 11:08:48,384 [INFO] Skipping bill 1905283 - already processed (2423/2608) 2025-12-03 11:08:48,384 [INFO] Skipping bill 2042107 - already processed (2424/2608) 2025-12-03 11:08:48,384 [INFO] Skipping bill 1986270 - already processed (2425/2608) 2025-12-03 11:08:48,384 [INFO] Skipping bill 2044713 - already processed (2426/2608) 2025-12-03 11:08:48,384 [INFO] Skipping bill 2041468 - already processed (2427/2608) 2025-12-03 11:08:48,384 [INFO] Skipping bill 1983900 - already processed (2428/2608) 2025-12-03 11:08:48,384 [INFO] Skipping bill 2020217 - already processed (2429/2608) 2025-12-03 11:08:48,384 [INFO] Skipping bill 2038216 - already processed (2430/2608) 2025-12-03 11:08:48,384 [INFO] Skipping bill 2043604 - already processed (2431/2608) 
2025-12-03 11:08:48,384 [INFO] Skipping bill 2045365 - already processed (2432/2608)
2025-12-03 11:08:48,384 [INFO] Skipping bill 2043961 - already processed (2433/2608)
2025-12-03 11:08:48,384 [INFO] Skipping bill 2044138 - already processed (2434/2608)
2025-12-03 11:08:48,384 [INFO] Skipping bill 2040354 - already processed (2435/2608)
2025-12-03 11:08:48,384 [INFO] Skipping bill 2053157 - already processed (2436/2608)
2025-12-03 11:08:48,384 [INFO] Skipping bill 1984221 - already processed (2437/2608)
2025-12-03 11:08:48,384 [INFO] Skipping bill 2033224 - already processed (2438/2608)
2025-12-03 11:08:48,384 [INFO] Skipping bill 2033186 - already processed (2439/2608)
2025-12-03 11:08:48,384 [INFO] Skipping bill 1970505 - already processed (2440/2608)
2025-12-03 11:08:48,384 [INFO] Skipping bill 2036132 - already processed (2441/2608)
2025-12-03 11:08:48,385 [INFO] Skipping bill 2033542 - already processed (2442/2608)
2025-12-03 11:08:48,385 [INFO] Skipping bill 2027361 - already processed (2443/2608)
2025-12-03 11:08:48,385 [INFO] Skipping bill 2040866 - already processed (2444/2608)
2025-12-03 11:08:48,385 [INFO] Skipping bill 2043357 - already processed (2445/2608)
2025-12-03 11:08:48,385 [INFO] Skipping bill 2041757 - already processed (2446/2608)
2025-12-03 11:08:48,385 [INFO] Skipping bill 2042653 - already processed (2447/2608)
2025-12-03 11:08:48,385 [INFO] Skipping bill 2043161 - already processed (2448/2608)
2025-12-03 11:08:48,385 [INFO] Skipping bill 2052989 - already processed (2449/2608)
2025-12-03 11:08:48,385 [INFO] Skipping bill 1965963 - already processed (2450/2608)
2025-12-03 11:08:48,385 [INFO] Skipping bill 2045735 - already processed (2451/2608)
2025-12-03 11:08:48,385 [INFO] Skipping bill 1999388 - already processed (2452/2608)
2025-12-03 11:08:48,385 [INFO] Skipping bill 2051352 - already processed (2453/2608)
2025-12-03 11:08:48,385 [INFO] Skipping bill 2051886 - already processed (2454/2608)
2025-12-03 11:08:48,385 [INFO] Processing 2455/2608: Bill ID 2039530
2025-12-03 11:08:49,988 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:08:49,990 [ERROR] Failed to generate report for bill 2039530: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 640978 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
           ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 640978 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:08:51,000 [INFO] Skipping bill 2043562 - already processed (2456/2608)
2025-12-03 11:08:51,006 [INFO] Skipping bill 1970493 - already processed (2457/2608)
2025-12-03 11:08:51,006 [INFO] Skipping bill 2037978 - already processed (2458/2608)
2025-12-03 11:08:51,006 [INFO] Skipping bill 2040318 - already processed (2459/2608)
2025-12-03 11:08:51,006 [INFO] Skipping bill 2041104 - already processed (2460/2608)
2025-12-03 11:08:51,006 [INFO] Skipping bill 2043947 - already processed (2461/2608)
2025-12-03 11:08:51,006 [INFO] Skipping bill 2038111 - already processed (2462/2608)
2025-12-03 11:08:51,006 [INFO] Skipping bill 1982722 - already processed (2463/2608)
2025-12-03 11:08:51,006 [INFO] Skipping bill 2043896 - already processed (2464/2608)
2025-12-03 11:08:51,006 [INFO] Skipping bill 2012870 - already processed (2465/2608)
2025-12-03 11:08:51,006 [INFO] Skipping bill 2007066 - already processed (2466/2608)
2025-12-03 11:08:51,006 [INFO] Skipping bill 1968860 - already processed (2467/2608)
2025-12-03 11:08:51,006 [INFO] Skipping bill 2029307 - already processed (2468/2608)
2025-12-03 11:08:51,006 [INFO] Skipping bill 2041255 - already processed (2469/2608)
2025-12-03 11:08:51,006 [INFO] Skipping bill 2033191 - already processed (2470/2608)
2025-12-03 11:08:51,007 [INFO] Skipping bill 2043715 - already processed (2471/2608)
2025-12-03 11:08:51,007 [INFO] Skipping bill 2036439 - already processed (2472/2608)
2025-12-03 11:08:51,007 [INFO] Skipping bill 1968282 - already processed (2473/2608)
2025-12-03 11:08:51,007 [INFO] Skipping bill 2039688 - already processed (2474/2608)
2025-12-03 11:08:51,007 [INFO] Skipping bill 2038212 - already processed (2475/2608)
2025-12-03 11:08:51,007 [INFO] Skipping bill 1987966 - already processed (2476/2608)
2025-12-03 11:08:51,007 [INFO] Skipping bill 2031847 - already processed (2477/2608)
2025-12-03 11:08:51,007 [INFO] Skipping bill 1970497 - already processed (2478/2608)
2025-12-03 11:08:51,007 [INFO] Skipping bill 1963353 - already processed (2479/2608)
2025-12-03 11:08:51,007 [INFO] Skipping bill 2046183 - already processed (2480/2608)
2025-12-03 11:08:51,007 [INFO] Skipping bill 2005587 - already processed (2481/2608)
2025-12-03 11:08:51,007 [INFO] Skipping bill 2039178 - already processed (2482/2608)
2025-12-03 11:08:51,007 [INFO] Skipping bill 2041269 - already processed (2483/2608)
2025-12-03 11:08:51,007 [INFO] Skipping bill 2043688 - already processed (2484/2608)
2025-12-03 11:08:51,007 [INFO] Skipping bill 1927158 - already processed (2485/2608)
2025-12-03 11:08:51,007 [INFO] Skipping bill 1987972 - already processed (2486/2608)
2025-12-03 11:08:51,008 [INFO] Skipping bill 2035895 - already processed (2487/2608)
2025-12-03 11:08:51,008 [INFO] Skipping bill 2037256 - already processed (2488/2608)
2025-12-03 11:08:51,008 [INFO] Skipping bill 2043043 - already processed (2489/2608)
2025-12-03 11:08:51,008 [INFO] Skipping bill 2031888 - already processed (2490/2608)
2025-12-03 11:08:51,008 [INFO] Skipping bill 2043344 - already processed (2491/2608)
2025-12-03 11:08:51,008 [INFO] Skipping bill 2043890 - already processed (2492/2608)
2025-12-03 11:08:51,008 [INFO] Skipping bill 1936780 - already processed (2493/2608)
2025-12-03 11:08:51,018 [INFO] Skipping bill 2023141 - already processed (2494/2608)
2025-12-03 11:08:51,018 [INFO] Skipping bill 2022467 - already processed (2495/2608)
2025-12-03 11:08:51,018 [INFO] Skipping bill 2022582 - already processed (2496/2608)
2025-12-03 11:08:51,018 [INFO] Skipping bill 1970488 - already processed (2497/2608)
2025-12-03 11:08:51,018 [INFO] Skipping bill 1988006 - already processed (2498/2608)
2025-12-03 11:08:51,018 [INFO] Skipping bill 1933954 - already processed (2499/2608)
2025-12-03 11:08:51,018 [INFO] Skipping bill 1955921 - already processed (2500/2608)
2025-12-03 11:08:51,018 [INFO] Skipping bill 1963338 - already processed (2501/2608)
2025-12-03 11:08:51,018 [INFO] Skipping bill 2015697 - already processed (2502/2608)
2025-12-03 11:08:51,018 [INFO] Skipping bill 2020008 - already processed (2503/2608)
2025-12-03 11:08:51,018 [INFO] Skipping bill 2021940 - already processed (2504/2608)
2025-12-03 11:08:51,018 [INFO] Skipping bill 2022593 - already processed (2505/2608)
2025-12-03 11:08:51,018 [INFO] Skipping bill 2026569 - already processed (2506/2608)
2025-12-03 11:08:51,018 [INFO] Skipping bill 2027464 - already processed (2507/2608)
2025-12-03 11:08:51,018 [INFO] Skipping bill 2018800 - already processed (2508/2608)
2025-12-03 11:08:51,018 [INFO] Skipping bill 2028784 - already processed (2509/2608)
2025-12-03 11:08:51,018 [INFO] Skipping bill 2029580 - already processed (2510/2608)
2025-12-03 11:08:51,018 [INFO] Skipping bill 2031938 - already processed (2511/2608)
2025-12-03 11:08:51,018 [INFO] Skipping bill 2032128 - already processed (2512/2608)
2025-12-03 11:08:51,018 [INFO] Skipping bill 1947775 - already processed (2513/2608)
2025-12-03 11:08:51,018 [INFO] Skipping bill 2035420 - already processed (2514/2608)
2025-12-03 11:08:51,018 [INFO] Skipping bill 2037229 - already processed (2515/2608)
2025-12-03 11:08:51,018 [INFO] Skipping bill 2039570 - already processed (2516/2608)
2025-12-03 11:08:51,018 [INFO] Skipping bill 2042103 - already processed (2517/2608)
2025-12-03 11:08:51,018 [INFO] Skipping bill 2043758 - already processed (2518/2608)
2025-12-03 11:08:51,018 [INFO] Skipping bill 2046719 - already processed (2519/2608)
2025-12-03 11:08:51,018 [INFO] Skipping bill 2052024 - already processed (2520/2608)
2025-12-03 11:08:51,018 [INFO] Skipping bill 2052050 - already processed (2521/2608)
2025-12-03 11:08:51,018 [INFO] Processing 2522/2608: Bill ID 2056120
2025-12-03 11:09:15,898 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2025-12-03 11:09:15,903 [INFO] Skipping bill 1979616 - already processed (2523/2608)
2025-12-03 11:09:15,903 [INFO] Skipping bill 2053486 - already processed (2524/2608)
2025-12-03 11:09:15,903 [INFO] Skipping bill 2019782 - already processed (2525/2608)
2025-12-03 11:09:15,903 [INFO] Skipping bill 2017847 - already processed (2526/2608)
2025-12-03 11:09:15,903 [INFO] Skipping bill 2018869 - already processed (2527/2608)
2025-12-03 11:09:15,903 [INFO] Skipping bill 2040352 - already processed (2528/2608)
2025-12-03 11:09:15,903 [INFO] Skipping bill 2029980 - already processed (2529/2608)
2025-12-03 11:09:15,903 [INFO] Skipping bill 2018578 - already processed (2530/2608)
2025-12-03 11:09:15,903 [INFO] Skipping bill 2043696 - already processed (2531/2608)
2025-12-03 11:09:15,903 [INFO] Skipping bill 2008600 - already processed (2532/2608)
2025-12-03 11:09:15,903 [INFO] Skipping bill 2037247 - already processed (2533/2608)
2025-12-03 11:09:15,903 [INFO] Skipping bill 2037249 - already processed (2534/2608)
2025-12-03 11:09:15,903 [INFO] Skipping bill 2035609 - already processed (2535/2608)
2025-12-03 11:09:15,903 [INFO] Skipping bill 2038921 - already processed (2536/2608)
2025-12-03 11:09:15,903 [INFO] Skipping bill 2053374 - already processed (2537/2608)
2025-12-03 11:09:15,903 [INFO] Skipping bill 2021715 - already processed (2538/2608)
2025-12-03 11:09:15,903 [INFO] Skipping bill 2021641 - already processed (2539/2608)
2025-12-03 11:09:15,904 [INFO] Skipping bill 1901818 - already processed (2540/2608)
2025-12-03 11:09:15,904 [INFO] Skipping bill 2023062 - already processed (2541/2608)
2025-12-03 11:09:15,904 [INFO] Skipping bill 2044841 - already processed (2542/2608)
2025-12-03 11:09:15,904 [INFO] Skipping bill 2043173 - already processed (2543/2608)
2025-12-03 11:09:15,904 [INFO] Skipping bill 1948187 - already processed (2544/2608)
2025-12-03 11:09:15,904 [INFO] Skipping bill 2038257 - already processed (2545/2608)
2025-12-03 11:09:15,904 [INFO] Skipping bill 2053381 - already processed (2546/2608)
2025-12-03 11:09:15,904 [INFO] Skipping bill 2053499 - already processed (2547/2608)
2025-12-03 11:09:15,904 [INFO] Skipping bill 2053841 - already processed (2548/2608)
2025-12-03 11:09:15,904 [INFO] Skipping bill 2054336 - already processed (2549/2608)
2025-12-03 11:09:15,904 [INFO] Skipping bill 2054344 - already processed (2550/2608)
2025-12-03 11:09:15,904 [INFO] Skipping bill 2037277 - already processed (2551/2608)
2025-12-03 11:09:15,904 [INFO] Skipping bill 1941772 - already processed (2552/2608)
2025-12-03 11:09:15,904 [INFO] Skipping bill 2043199 - already processed (2553/2608)
2025-12-03 11:09:15,904 [INFO] Skipping bill 2041162 - already processed (2554/2608)
2025-12-03 11:09:15,904 [INFO] Skipping bill 2038970 - already processed (2555/2608)
2025-12-03 11:09:15,904 [INFO] Skipping bill 2039918 - already processed (2556/2608)
2025-12-03 11:09:15,904 [INFO] Skipping bill 2032140 - already processed (2557/2608)
2025-12-03 11:09:15,904 [INFO] Skipping bill 2029941 - already processed (2558/2608)
2025-12-03 11:09:15,904 [INFO] Skipping bill 2038420 - already processed (2559/2608)
2025-12-03 11:09:15,904 [INFO] Skipping bill 1943770 - already processed (2560/2608)
2025-12-03 11:09:15,904 [INFO] Skipping bill 1979653 - already processed (2561/2608)
2025-12-03 11:09:15,904 [INFO] Skipping bill 1970677 - already processed (2562/2608)
2025-12-03 11:09:15,904 [INFO] Skipping bill 1988332 - already processed (2563/2608)
2025-12-03 11:09:15,904 [INFO] Skipping bill 1939613 - already processed (2564/2608)
2025-12-03 11:09:15,904 [INFO] Skipping bill 2043104 - already processed (2565/2608)
2025-12-03 11:09:15,904 [INFO] Skipping bill 2000425 - already processed (2566/2608)
2025-12-03 11:09:15,904 [INFO] Skipping bill 2028805 - already processed (2567/2608)
2025-12-03 11:09:15,904 [INFO] Skipping bill 2023111 - already processed (2568/2608)
2025-12-03 11:09:15,904 [INFO] Processing 2569/2608: Bill ID 2032901
2025-12-03 11:09:17,022 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:09:17,024 [ERROR] Failed to generate report for bill 2032901: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 455298 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
           ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 455298 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:09:18,033 [INFO] Skipping bill 2051603 - already processed (2570/2608)
2025-12-03 11:09:18,033 [INFO] Skipping bill 2036437 - already processed (2571/2608)
2025-12-03 11:09:18,033 [INFO] Skipping bill 2036475 - already processed (2572/2608)
2025-12-03 11:09:18,033 [INFO] Skipping bill 2032059 - already processed (2573/2608)
2025-12-03 11:09:18,033 [INFO] Skipping bill 2007053 - already processed (2574/2608)
2025-12-03 11:09:18,034 [INFO] Skipping bill 2000456 - already processed (2575/2608)
2025-12-03 11:09:18,034 [INFO] Skipping bill 1958611 - already processed (2576/2608)
2025-12-03 11:09:18,034 [INFO] Skipping bill 2016811 - already processed (2577/2608)
2025-12-03 11:09:18,034 [INFO] Skipping bill 1926891 - already processed (2578/2608)
2025-12-03 11:09:18,034 [INFO] Skipping bill 1943799 - already processed (2579/2608)
2025-12-03 11:09:18,035 [INFO] Skipping bill 2039061 - already processed (2580/2608)
2025-12-03 11:09:18,035 [INFO] Skipping bill 1961580 - already processed (2581/2608)
2025-12-03 11:09:18,035 [INFO] Skipping bill 1927000 - already processed (2582/2608)
2025-12-03 11:09:18,035 [INFO] Skipping bill 2023233 - already processed (2583/2608)
2025-12-03 11:09:18,035 [INFO] Processing 2584/2608: Bill ID 2053561
2025-12-03 11:09:29,823 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2025-12-03 11:09:29,826 [INFO] Skipping bill 1947802 - already processed (2585/2608)
2025-12-03 11:09:29,826 [INFO] Skipping bill 2022615 - already processed (2586/2608)
2025-12-03 11:09:29,826 [INFO] Skipping bill 2022439 - already processed (2587/2608)
2025-12-03 11:09:29,826 [INFO] Skipping bill 2033390 - already processed (2588/2608)
2025-12-03 11:09:29,826 [INFO] Skipping bill 2026636 - already processed (2589/2608)
2025-12-03 11:09:29,826 [INFO] Skipping bill 2047438 - already processed (2590/2608)
2025-12-03 11:09:29,826 [INFO] Skipping bill 2036925 - already processed (2591/2608)
2025-12-03 11:09:29,826 [INFO] Skipping bill 1963365 - already processed (2592/2608)
2025-12-03 11:09:29,826 [INFO] Skipping bill 2043448 - already processed (2593/2608)
2025-12-03 11:09:29,826 [INFO] Skipping bill 1994349 - already processed (2594/2608)
2025-12-03 11:09:29,827 [INFO] Skipping bill 2023224 - already processed (2595/2608)
2025-12-03 11:09:29,827 [INFO] Skipping bill 2028140 - already processed (2596/2608)
2025-12-03 11:09:29,827 [INFO] Skipping bill 2032003 - already processed (2597/2608)
2025-12-03 11:09:29,827 [INFO] Skipping bill 2039157 - already processed (2598/2608)
2025-12-03 11:09:29,827 [INFO] Skipping bill 2044179 - already processed (2599/2608)
2025-12-03 11:09:29,827 [INFO] Skipping bill 2035673 - already processed (2600/2608)
2025-12-03 11:09:29,827 [INFO] Skipping bill 2044473 - already processed (2601/2608)
2025-12-03 11:09:29,827 [INFO] Processing 2602/2608: Bill ID 1990400
2025-12-03 11:09:30,480 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:09:30,481 [ERROR] Failed to generate report for bill 1990400: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 256134 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
           ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 256134 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:09:31,495 [INFO] Skipping bill 2027724 - already processed (2603/2608)
2025-12-03 11:09:31,495 [INFO] Processing 2604/2608: Bill ID 2028171
2025-12-03 11:09:32,002 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:09:32,005 [ERROR] Failed to generate report for bill 2028171: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 134551 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
           ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 134551 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:09:33,008 [INFO] Processing 2605/2608: Bill ID 1966444
2025-12-03 11:09:33,609 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:09:33,610 [ERROR] Failed to generate report for bill 1966444: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 171945 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume
    report = create_detailed_report(bill, llm=llm)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report
    result = chain.invoke({"bill_json": bill_json})
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate
    raise e
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate
    raw_response = self.client.with_raw_response.create(**payload)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
           ~~~~^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create
    return self._post(
           ~~~~~~~~~~^
        "/chat/completions",
        ^^^^^^^^^^^^^^^^^^^^
    ...<46 lines>...
        stream_cls=Stream[ChatCompletionChunk],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 171945 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
2025-12-03 11:09:34,619 [INFO] Processing 2606/2608: Bill ID 2038906
2025-12-03 11:09:35,252 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request"
2025-12-03 11:09:35,254 [ERROR] Failed to generate report for bill 2038906: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 192175 tokens.
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 192175 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-03 11:09:36,262 [INFO] Processing 2607/2608: Bill ID 1994544 2025-12-03 11:09:36,746 [INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" 2025-12-03 11:09:36,748 [ERROR] Failed to generate report for bill 1994544: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 188475 tokens. 
Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 197, in create_reports_with_resume report = create_detailed_report(bill, llm=llm) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/data_updating_scripts/generate_reports.py", line 109, in create_detailed_report result = chain.invoke({"bill_json": bill_json}) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3246, in invoke input_ = context.run(step.invoke, input_, config) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke self.generate_prompt( ~~~~~~~~~~~~~~~~~~~~^ [self._convert_input(input)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<6 lines>... **kwargs, ^^^^^^^^^ ).generations[0][0], ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1025, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 842, in generate self._generate_with_cache( ~~~~~~~~~~~~~~~~~~~~~~~~~^ m, ^^ ...<2 lines>... 
**kwargs, ^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1091, in _generate_with_cache result = self._generate( messages, stop=stop, run_manager=run_manager, **kwargs ) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1213, in _generate raise e File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/langchain_openai/chat_models/base.py", line 1208, in _generate raw_response = self.client.with_raw_response.create(**payload) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ~~~~^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_utils/_utils.py", line 286, in wrapper return func(*args, **kwargs) File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/resources/chat/completions/completions.py", line 1156, in create return self._post( ~~~~~~~~~~^ "/chat/completions", ^^^^^^^^^^^^^^^^^^^^ ...<46 lines>... 
stream_cls=Stream[ChatCompletionChunk], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1259, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rowanamanna/Documents/Vanderbilt/Research/ai-legislation-tracker/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1047, in request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 188475 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} 2025-12-03 11:09:37,753 [INFO] Skipping bill 2041289 - already processed (2608/2608) 2025-12-03 11:09:37,800 [INFO] Saved 2608 reports to data/bill_reports.json 2025-12-03 11:09:37,801 [INFO] Report generation complete! 2025-12-03 11:09:37,801 [INFO] Total bills: 2608 2025-12-03 11:09:37,801 [INFO] Successfully processed: 3 2025-12-03 11:09:37,801 [INFO] Skipped (already done): 2487 2025-12-03 11:09:37,801 [INFO] Errors: 118
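Every error above is the same failure mode: the serialized bill passed to `chain.invoke({"bill_json": bill_json})` exceeds the model's 128,000-token context window. One possible mitigation in `create_detailed_report` is to trim the oversized payload before invoking the chain. The sketch below is a hypothetical helper, not code from `generate_reports.py`: the field name `full_text` and the 4-chars-per-token estimate are assumptions (a real implementation would use the bill schema's actual text field and `tiktoken` for exact counts).

```python
import json

MAX_CONTEXT_TOKENS = 128_000   # model limit reported in the 400 errors above
RESERVED_TOKENS = 8_000        # headroom for the prompt template + completion
CHARS_PER_TOKEN = 4            # rough heuristic; use tiktoken for exact counts


def truncate_bill_json(bill: dict, text_field: str = "full_text") -> str:
    """Serialize a bill dict, truncating its largest text field when the
    estimated token count would exceed the model's context window."""
    budget_chars = (MAX_CONTEXT_TOKENS - RESERVED_TOKENS) * CHARS_PER_TOKEN
    bill_json = json.dumps(bill)
    overshoot = len(bill_json) - budget_chars
    if overshoot > 0 and text_field in bill:
        # Cut the overshoot out of the big text field, keeping other metadata intact.
        trimmed = dict(bill)
        text = str(trimmed[text_field])
        keep = max(0, len(text) - overshoot)
        trimmed[text_field] = text[:keep] + " [truncated]"
        bill_json = json.dumps(trimmed)
    return bill_json
```

Calling `chain.invoke({"bill_json": truncate_bill_json(bill)})` would then keep requests under the limit at the cost of losing the tail of very long bill texts; chunked summarization would preserve more content but requires restructuring the chain.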