Problem running model on g6.12xlarge
#2 by markba - opened
I'm running this model on a g6.12xlarge instance with TGI.
The first 3 requests complete correctly, but the 4th request fails and takes the server down with it.
The request content sizes, in tokens, are: 185, 657, 358, and 1981.
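For reproduction, this is roughly how I send the requests, via TGI's OpenAI-compatible /v1/chat/completions route (a sketch: the endpoint URL and the prompt contents are placeholders, not my actual data):

```python
import json
import urllib.request

# Assumed local TGI endpoint; adjust host/port to your deployment.
TGI_URL = "http://localhost:8080/v1/chat/completions"


def build_payload(content: str) -> dict:
    """Chat payload with no sampling overrides, matching the
    GenerateParameters shown in the router logs below."""
    return {
        "model": "nvidia/Llama-3.1-70B-Instruct-FP8",
        "messages": [{"role": "user", "content": content}],
    }


def send(content: str) -> dict:
    req = urllib.request.Request(
        TGI_URL,
        data=json.dumps(build_payload(content)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def replay(prompts: list[str]) -> None:
    # The first three prompts succeed; the fourth (~1981 tokens)
    # triggers the CUDA device-side assert described below.
    for i, prompt in enumerate(prompts, 1):
        reply = send(prompt)["choices"][0]["message"]["content"]
        print(f"request {i}: got {len(reply)} chars")
```
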
This is the error:
Request failed during generation: Server error: Unexpected <class 'RuntimeError'>: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
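As the error message suggests, I'll rerun with CUDA_LAUNCH_BLOCKING=1 to get an accurate stack trace. Roughly like this (a sketch: the image tag and mounts are guesses for my setup, and the flags mirror the Args dump in the log below):

```shell
# Relaunch with synchronous CUDA launches so the assert surfaces
# at the real call site instead of a later API call.
docker run --gpus all --shm-size 1g -p 8080:8080 \
  -v /mnt/hf-efs:/mnt/hf-efs \
  -e CUDA_LAUNCH_BLOCKING=1 \
  ghcr.io/huggingface/text-generation-inference:latest \
  --model-id nvidia/Llama-3.1-70B-Instruct-FP8 \
  --port 8080 \
  --trust-remote-code \
  --max-input-tokens 4096 \
  --huggingface-hub-cache /mnt/hf-efs \
  --payload-limit 2000000
```
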
This is the full log file:
timestamp,message
1742208091429,2025-03-17T10:41:31.429096Z INFO text_generation_launcher: Args {
1742208091429," model_id: ""nvidia/Llama-3.1-70B-Instruct-FP8"","
1742208091429," revision: None,"
1742208091429," validation_workers: 2,"
1742208091429," sharded: None,"
1742208091429," num_shard: None,"
1742208091429," quantize: None,"
1742208091429," speculate: None,"
1742208091429," dtype: None,"
1742208091429," kv_cache_dtype: None,"
1742208091429," trust_remote_code: true,"
1742208091429," max_concurrent_requests: 128,"
1742208091429," max_best_of: 5,"
1742208091429," max_stop_sequences: 4,"
1742208091429," max_top_n_tokens: 5,"
1742208091429, max_input_tokens: Some(
1742208091429," 4096,"
1742208091429," ),"
1742208091429," max_input_length: None,"
1742208091429," max_total_tokens: None,"
1742208091429," waiting_served_ratio: 0.3,"
1742208091429," max_batch_prefill_tokens: None,"
1742208091429," max_batch_total_tokens: None,"
1742208091429," max_waiting_tokens: 20,"
1742208091429," max_batch_size: None,"
1742208091429," cuda_graphs: None,"
1742208091429," hostname: ""327a9816a326"","
1742208091429," port: 8080,"
1742208091429," shard_uds_path: ""/tmp/text-generation-server"","
1742208091429," master_addr: ""localhost"","
1742208091429," master_port: 29500,"
1742208091429, huggingface_hub_cache: Some(
1742208091429," ""/mnt/hf-efs"","
1742208091429," ),"
1742208091429," weights_cache_override: None,"
1742208091429," disable_custom_kernels: false,"
1742208091429," cuda_memory_fraction: 1.0,"
1742208091429," rope_scaling: None,"
1742208091429," rope_factor: None,"
1742208091429," json_output: false,"
1742208091429," otlp_endpoint: None,"
1742208091429," otlp_service_name: ""text-generation-inference.router"","
1742208091429," cors_allow_origin: [],"
1742208091429," watermark_gamma: None,"
1742208091429," watermark_delta: None,"
1742208091429," ngrok: false,"
1742208091429," ngrok_authtoken: None,"
1742208091429," ngrok_edge: None,"
1742208091429," tokenizer_config_path: None,"
1742208091429," disable_grammar_support: false,"
1742208091429," env: false,"
1742208091429," max_client_batch_size: 4,"
1742208091429," lora_adapters: None,"
1742208091429," usage_stats: Off,"
1742208091429," payload_limit: 2000000,"
1742208091429," enable_prefill_logprobs: false,"
1742208091429,}
1742208093519,2025-03-17T10:41:33.519755Z INFO text_generation_launcher: Using attention flashinfer - Prefix caching true
1742208093519,2025-03-17T10:41:33.519789Z INFO text_generation_launcher: Sharding model on 4 processes
1742208093612,2025-03-17T10:41:33.612111Z WARN text_generation_launcher: Not enough VRAM to run the model: Available: 91.78GB - Model 127.68GB.
1742208093612,2025-03-17T10:41:33.612134Z INFO text_generation_launcher: Default `max_batch_prefill_tokens` to 3278
1742208093612,"2025-03-17T10:41:33.612139Z INFO text_generation_launcher: Using default cuda graphs [1, 2, 4, 8, 16, 32]"
1742208093612,2025-03-17T10:41:33.612142Z WARN text_generation_launcher: `trust_remote_code` is set. Trusting that model `nvidia/Llama-3.1-70B-Instruct-FP8` do not contain malicious code.
1742208093612,2025-03-17T10:41:33.612282Z INFO download: text_generation_launcher: Starting check and download process for nvidia/Llama-3.1-70B-Instruct-FP8
1742208098034,2025-03-17T10:41:38.034255Z INFO text_generation_launcher: Files are already present on the host. Skipping download.
1742208098633,2025-03-17T10:41:38.633218Z INFO download: text_generation_launcher: Successfully downloaded weights for nvidia/Llama-3.1-70B-Instruct-FP8
1742208098633,2025-03-17T10:41:38.633729Z INFO shard-manager: text_generation_launcher: Starting shard rank=0
1742208099790,2025-03-17T10:41:39.789880Z INFO shard-manager: text_generation_launcher: Starting shard rank=1
1742208100639,2025-03-17T10:41:40.639510Z INFO shard-manager: text_generation_launcher: Starting shard rank=2
1742208101487,2025-03-17T10:41:41.487373Z INFO shard-manager: text_generation_launcher: Starting shard rank=3
1742208103447,2025-03-17T10:41:43.447680Z INFO text_generation_launcher: Using prefix caching = True
1742208103447,2025-03-17T10:41:43.447914Z INFO text_generation_launcher: Using Attention = flashinfer
1742208108655,2025-03-17T10:41:48.655567Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
1742208109811,2025-03-17T10:41:49.811134Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1
...
1742208201018,2025-03-17T10:43:21.018877Z INFO text_generation_launcher: Starting Webserver
1742208201087,2025-03-17T10:43:21.087125Z INFO text_generation_router_v3: backends/v3/src/lib.rs:125: Warming up model
1742208201109,2025-03-17T10:43:21.109140Z INFO text_generation_launcher: Using optimized Triton indexing kernels.
1742208207237,"2025-03-17T10:43:27.237112Z INFO text_generation_launcher: KV-cache blocks: 28758, size: 1"
1742208207294,"2025-03-17T10:43:27.294336Z INFO text_generation_launcher: Cuda Graphs are enabled for sizes [32, 16, 8, 4, 2, 1]"
1742208209786,2025-03-17T10:43:29.786546Z INFO text_generation_router_v3: backends/v3/src/lib.rs:137: Setting max batch total tokens to 28758
1742208209786,2025-03-17T10:43:29.786576Z WARN text_generation_router_v3::backend: backends/v3/src/backend.rs:39: Model supports prefill chunking. `waiting_served_ratio` and `max_waiting_tokens` will be ignored.
1742208209786,2025-03-17T10:43:29.786594Z INFO text_generation_router_v3: backends/v3/src/lib.rs:166: Using backend V3
1742208209786,2025-03-17T10:43:29.786598Z INFO text_generation_router: backends/v3/src/main.rs:168: Maximum total tokens defaulted to 28758
1742208209786,2025-03-17T10:43:29.786631Z INFO text_generation_router::server: router/src/server.rs:1560: Using the Hugging Face API
1742208210370,2025-03-17T10:43:30.370559Z INFO text_generation_router::server: router/src/server.rs:2309: Serving revision 0088b4e1785211770510487884b15c711dcdde99 of model nvidia/Llama-3.1-70B-Instruct-FP8
1742208210370,"2025-03-17T10:43:30.370638Z WARN text_generation_router::server: router/src/server.rs:1648: Tokenizer_config None - Some(""/mnt/hf-efs/models--nvidia--Llama-3.1-70B-Instruct-FP8/snapshots/0088b4e1785211770510487884b15c711dcdde99/tokenizer_config.json"")"
1742208214682,2025-03-17T10:43:34.682662Z INFO text_generation_router::server: router/src/server.rs:1716: Using config Some(Llama)
1742208214682,"2025-03-17T10:43:34.682758Z WARN text_generation_router::server: router/src/server.rs:1879: Invalid hostname, defaulting to 0.0.0.0"
1742208214742,2025-03-17T10:43:34.742073Z INFO text_generation_router::server: router/src/server.rs:2266: Connected
1742208441229,2025-03-17T10:47:21.229028Z INFO text_generation_router_v3::radix: backends/v3/src/radix.rs:108: Prefix 0 - Suffix 1535
1742208442098,"2025-03-17T10:47:22.097951Z INFO chat_completions{parameters=""GenerateParameters { best_of: None, temperature: None, repetition_penalty: None, frequency_penalty: None, top_k: None, top_p: None, typical_p: None, do_sample: true, max_new_tokens: None, return_full_text: None, stop: [], truncate: None, watermark: false, details: true, decoder_input_details: false, seed: None, top_n_tokens: None, grammar: None, adapter_id: None }"" total_time=""873.239506ms"" validation_time=""4.304474ms"" queue_time=""105.703µs"" inference_time=""868.829539ms"" time_per_token=""434.414769ms"" seed=""Some(10901684915517885611)""}: text_generation_router::server: router/src/server.rs:424: Success"
1742208479693,2025-03-17T10:47:59.692752Z INFO text_generation_router_v3::radix: backends/v3/src/radix.rs:108: Prefix 349 - Suffix 1186
1742208480726,"2025-03-17T10:48:00.726550Z INFO chat_completions{parameters=""GenerateParameters { best_of: None, temperature: None, repetition_penalty: None, frequency_penalty: None, top_k: None, top_p: None, typical_p: None, do_sample: true, max_new_tokens: None, return_full_text: None, stop: [], truncate: None, watermark: false, details: true, decoder_input_details: false, seed: None, top_n_tokens: None, grammar: None, adapter_id: None }"" total_time=""1.036108053s"" validation_time=""2.266644ms"" queue_time=""144.874µs"" inference_time=""1.033696715s"" time_per_token=""516.848357ms"" seed=""Some(17494873033685286893)""}: text_generation_router::server: router/src/server.rs:424: Success"
1742208510250,2025-03-17T10:48:30.250442Z INFO text_generation_router_v3::radix: backends/v3/src/radix.rs:108: Prefix 350 - Suffix 1185
1742208511007,"2025-03-17T10:48:31.007713Z INFO chat_completions{parameters=""GenerateParameters { best_of: None, temperature: None, repetition_penalty: None, frequency_penalty: None, top_k: None, top_p: None, typical_p: None, do_sample: true, max_new_tokens: None, return_full_text: None, stop: [], truncate: None, watermark: false, details: true, decoder_input_details: false, seed: None, top_n_tokens: None, grammar: None, adapter_id: None }"" total_time=""759.195475ms"" validation_time=""1.873315ms"" queue_time=""139.033µs"" inference_time=""757.183337ms"" time_per_token=""378.591668ms"" seed=""Some(15189989132842721754)""}: text_generation_router::server: router/src/server.rs:424: Success"
1742208571798,2025-03-17T10:49:31.797898Z INFO text_generation_router_v3::radix: backends/v3/src/radix.rs:108: Prefix 350 - Suffix 1185
1742208571815,2025-03-17T10:49:31.815485Z ERROR text_generation_launcher: Method Prefill encountered an error.
1742208571815,Traceback (most recent call last):
1742208571815," File ""/usr/src/.venv/bin/text-generation-server"", line 10, in <module>"
1742208571815, sys.exit(app())
1742208571815," File ""/usr/src/.venv/lib/python3.11/site-packages/typer/main.py"", line 323, in __call__"
1742208571815," return get_command(self)(*args, **kwargs)"
1742208571815," File ""/usr/src/.venv/lib/python3.11/site-packages/click/core.py"", line 1161, in __call__"
1742208571815," return self.main(*args, **kwargs)"
1742208571815," File ""/usr/src/.venv/lib/python3.11/site-packages/typer/core.py"", line 743, in main"
1742208571815, return _main(
1742208571815," File ""/usr/src/.venv/lib/python3.11/site-packages/typer/core.py"", line 198, in _main"
1742208571815, rv = self.invoke(ctx)
1742208571815," File ""/usr/src/.venv/lib/python3.11/site-packages/click/core.py"", line 1697, in invoke"
1742208571815, return _process_result(sub_ctx.command.invoke(sub_ctx))
1742208571815," File ""/usr/src/.venv/lib/python3.11/site-packages/click/core.py"", line 1443, in invoke"
1742208571815," return ctx.invoke(self.callback, **ctx.params)"
1742208571815," File ""/usr/src/.venv/lib/python3.11/site-packages/click/core.py"", line 788, in invoke"
1742208571815," return __callback(*args, **kwargs)"
1742208571815," File ""/usr/src/.venv/lib/python3.11/site-packages/typer/main.py"", line 698, in wrapper"
1742208571815, return callback(**use_params)
1742208571815," File ""/usr/src/server/text_generation_server/cli.py"", line 119, in serve"
1742208571815, server.serve(
1742208571815," File ""/usr/src/server/text_generation_server/server.py"", line 315, in serve"
1742208571815, asyncio.run(
1742208571815," File ""/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/runners.py"", line 190, in run"
1742208571815, return runner.run(main)
1742208571815," File ""/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/runners.py"", line 118, in run"
1742208571815, return self._loop.run_until_complete(task)
1742208571815," File ""/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/base_events.py"", line 641, in run_until_complete"
1742208571815, self.run_forever()
1742208571815," File ""/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/base_events.py"", line 608, in run_forever"
1742208571815, self._run_once()
1742208571815," File ""/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/base_events.py"", line 1936, in _run_once"
1742208571815, handle._run()
1742208571815," File ""/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/events.py"", line 84, in _run"
1742208571815," self._context.run(self._callback, *self._args)"
1742208571815," File ""/usr/src/.venv/lib/python3.11/site-packages/grpc_interceptor/server.py"", line 165, in invoke_intercept_method"
1742208571815, return await self.intercept(
1742208571815,"> File ""/usr/src/server/text_generation_server/interceptor.py"", line 24, in intercept"
1742208571815, return await response
1742208571815," File ""/usr/src/.venv/lib/python3.11/site-packages/opentelemetry/instrumentation/grpc/_aio_server.py"", line 120, in _unary_interceptor"
1742208571815, raise error
1742208571815," File ""/usr/src/.venv/lib/python3.11/site-packages/opentelemetry/instrumentation/grpc/_aio_server.py"", line 111, in _unary_interceptor"
1742208571815," return await behavior(request_or_iterator, context)"
1742208571815," File ""/usr/src/server/text_generation_server/server.py"", line 183, in Prefill"
1742208571815," generations, next_batch, timings = self.model.generate_token(batch)"
1742208571815," File ""/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/contextlib.py"", line 81, in inner"
1742208571815," return func(*args, **kwds)"
1742208571815," File ""/usr/src/server/text_generation_server/models/flash_causal_lm.py"", line 1971, in generate_token"
1742208571815," out, speculative_logits = self.forward(batch, adapter_data)"
1742208571815," File ""/usr/src/server/text_generation_server/models/flash_causal_lm.py"", line 1853, in forward"
1742208571815, with self._forward_context(
1742208571815," File ""/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/contextlib.py"", line 137, in __enter__"
1742208571815, return next(self.gen)
1742208571815," File ""/usr/src/server/text_generation_server/layers/attention/flashinfer.py"", line 87, in use_prefill_with_paged_kv_state"
1742208571815, state.plan(
1742208571815," File ""/usr/src/.venv/lib/python3.11/site-packages/flashinfer/prefill.py"", line 1306, in plan"
1742208571815," qo_indptr_host = qo_indptr.to(""cpu"")"
1742208571815,RuntimeError: CUDA error: device-side assert triggered
1742208571815,"CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect."
1742208571815,For debugging consider passing CUDA_LAUNCH_BLOCKING=1
1742208571815,Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
1742208571815,2025-03-17T10:49:31.815844Z ERROR batch{batch_size=1}:prefill:prefill{id=3 size=1}:prefill{id=3 size=1}: text_generation_router_v3::client: backends/v3/src/client/mod.rs:45: Server error: Unexpected <class 'RuntimeError'>: CUDA error: device-side assert triggered
1742208571815,"CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect."
1742208571815,For debugging consider passing CUDA_LAUNCH_BLOCKING=1
1742208571815,Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
1742208571823,2025-03-17T10:49:31.822909Z ERROR text_generation_launcher: Method Prefill encountered an error.
1742208571823,Traceback (most recent call last):