Dataset columns:
  in_source_id  — string, length 13–58
  issue         — string, length 3–241k
  before_files  — list, 0–3 items
  after_files   — list, 0–3 items
  pr_diff       — string, length 109–107M (⌀: some rows are null)
vllm-project__vllm-3211
Support for grammar It would be highly beneficial if the library could incorporate support for Grammar and GBNF files. https://github.com/ggerganov/llama.cpp/blob/master/grammars/README.md
[ { "content": "# Copyright 2024- the Outlines developers\n# This file is adapted from\n# https://github.com/outlines-dev/outlines/blob/main/outlines/serve/vllm.py\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may...
[ { "content": "# Copyright 2024- the Outlines developers\n# This file is adapted from\n# https://github.com/outlines-dev/outlines/blob/main/outlines/serve/vllm.py\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may...
diff --git a/tests/entrypoints/test_openai_server.py b/tests/entrypoints/test_openai_server.py index a5b2bf4c0f0c..86d9a85af80b 100644 --- a/tests/entrypoints/test_openai_server.py +++ b/tests/entrypoints/test_openai_server.py @@ -660,5 +660,55 @@ async def test_guided_decoding_type_error(server, client: openai.AsyncOp...
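The request above asks for grammar-constrained (GBNF-style) decoding. The core idea is masking the next tokens so that the output stays derivable from a grammar; as a toy character-level illustration only (not vLLM's or llama.cpp's implementation — the helper name and the string-set "grammar" are hypothetical simplifications), consider:

```python
def allowed_next_chars(valid_strings, prefix):
    """Toy constrained decoding: given the set of strings a 'grammar'
    accepts, return the characters that keep `prefix` extendable."""
    return {s[len(prefix)] for s in valid_strings
            if s.startswith(prefix) and len(s) > len(prefix)}

# A JSON-literal 'grammar': only these three tokens are valid outputs.
LITERALS = {"true", "false", "null"}
```

For example, `allowed_next_chars(LITERALS, "t")` yields `{"r"}`, so a sampler constrained this way could only continue toward `"true"`. Real grammar engines do the same masking against a pushdown automaton over the tokenizer's vocabulary rather than single characters.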
vllm-project__vllm-3239
Automatic Prefix Caching Bug If I enable automatic prefix caching, it occasionally crashes. ``` Future exception was never retrieved future: <Future finished exception=RuntimeError('step must be nonzero')> Traceback (most recent call last): File "/root/vllm/vllm/engine/async_llm_engine.py", line 29, in _raise_ex...
[ { "content": "\"\"\"A block manager that manages token blocks.\"\"\"\nimport enum\nfrom itertools import count\nfrom os.path import commonprefix\nfrom typing import Dict, List, Optional, Set, Tuple\n\nfrom vllm.block import BlockTable, PhysicalTokenBlock\nfrom vllm.sequence import Sequence, SequenceGroup, Seque...
[ { "content": "\"\"\"A block manager that manages token blocks.\"\"\"\nimport enum\nfrom itertools import count, takewhile\nfrom os.path import commonprefix\nfrom typing import Dict, List, Optional, Set, Tuple\n\nfrom vllm.block import BlockTable, PhysicalTokenBlock\nfrom vllm.sequence import Sequence, SequenceG...
diff --git a/tests/engine/test_computed_prefix_blocks.py b/tests/engine/test_computed_prefix_blocks.py new file mode 100644 index 000000000000..ed35212cc3f1 --- /dev/null +++ b/tests/engine/test_computed_prefix_blocks.py @@ -0,0 +1,34 @@ +import pytest + +from vllm.engine.arg_utils import EngineArgs +from vllm.engine.l...
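The after_files for this row add `itertools.takewhile` to the block manager's imports, and the crash is a `RuntimeError('step must be nonzero')`. A plausible reading is that the count of already-computed prefix blocks was previously derived with a stride that could be zero, and counting the leading computed blocks directly with `takewhile` sidesteps that. A hedged pure-Python sketch of that counting pattern (the function and the dict-based block representation are hypothetical, not vLLM's actual types):

```python
from itertools import takewhile

def num_computed_prefix_blocks(blocks):
    """Count how many blocks at the *front* of a block table are already
    computed, stopping at the first uncomputed block."""
    return sum(1 for _ in takewhile(lambda b: b["computed"], blocks))

# The trailing computed block is not part of the prefix, so it is not counted.
table = [{"computed": True}, {"computed": True},
         {"computed": False}, {"computed": True}]
```

`takewhile` never constructs a stride, so there is no step value to be zero, and the empty-table case falls out naturally as a count of 0.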
vllm-project__vllm-4109
[Feature]: Update Outlines Integration from `FSM` to `Guide` ### 🚀 The feature, motivation and pitch Recently, outlines updated its interface from FSM to Guide to support "acceleration"/"fast-forward", which outputs the next set of tokens when they are directly available. For JSON schema, the cases are the keys, the `...
[ { "content": "import asyncio\nimport concurrent.futures\nfrom copy import copy\nfrom enum import Enum\nfrom functools import lru_cache\nfrom json import dumps as json_dumps\nfrom re import escape as regex_escape\nfrom typing import Tuple, Union\n\nfrom pydantic import BaseModel\nfrom transformers import PreTrai...
[ { "content": "import asyncio\nimport concurrent.futures\nfrom enum import Enum\nfrom json import dumps as json_dumps\nfrom re import escape as regex_escape\nfrom typing import Tuple, Union\n\nfrom pydantic import BaseModel\nfrom transformers import PreTrainedTokenizerBase\n\nfrom vllm.entrypoints.openai.protoco...
diff --git a/requirements-common.txt b/requirements-common.txt index f41873570aa6..bf9987e3af01 100644 --- a/requirements-common.txt +++ b/requirements-common.txt @@ -17,6 +17,6 @@ prometheus_client >= 0.18.0 prometheus-fastapi-instrumentator >= 7.0.0 tiktoken >= 0.6.0 # Required for DBRX tokenizer lm-format-enforc...
vllm-project__vllm-4128
[Bug][Chunked prefill]: head size has to be power of two ### ๐Ÿ› Describe the bug The chunked prefill doesn't support head sizes that are not powers of two. For example, phi2 has head size of 80 (which is supported by flash attn, but the _flash_fwd triton kernel doesn't support it). Fix PR is coming. ```python ...
[ { "content": "# The kernels in this file are adapted from LightLLM's context_attention_fwd:\n# https://github.com/ModelTC/lightllm/blob/main/lightllm/models/llama/triton_kernel/context_flashattention_nopad.py\n\nimport torch\nimport triton\nimport triton.language as tl\n\nif triton.__version__ >= \"2.1.0\":\n\n...
[ { "content": "# The kernels in this file are adapted from LightLLM's context_attention_fwd:\n# https://github.com/ModelTC/lightllm/blob/main/lightllm/models/llama/triton_kernel/context_flashattention_nopad.py\n\nimport torch\nimport triton\nimport triton.language as tl\n\nif triton.__version__ >= \"2.1.0\":\n\n...
diff --git a/tests/kernels/test_prefix_prefill.py b/tests/kernels/test_prefix_prefill.py index 6494fb34af98..ad31b0a7c2a1 100644 --- a/tests/kernels/test_prefix_prefill.py +++ b/tests/kernels/test_prefix_prefill.py @@ -10,7 +10,7 @@ NUM_HEADS = [64] NUM_QUERIES_PER_KV = [1, 8, 64] -HEAD_SIZES = [128] +HEAD_SIZES = ...
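The issue notes that the Triton prefix-prefill kernel only handled power-of-two head sizes, while phi-2 uses 80. A common remedy is to pad the head dimension up to the next power of two inside the kernel; the rounding step itself is a one-liner (a sketch of the rounding only, not the actual kernel change):

```python
def next_power_of_2(n: int) -> int:
    """Smallest power of two >= n (for n >= 1), e.g. for padding a head
    size of 80 up to 128 in a kernel that requires power-of-two tiles."""
    return 1 << (n - 1).bit_length()
```

With this, a head size of 80 is processed in 128-wide tiles, with the extra lanes masked off so they contribute nothing to the attention output.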
vllm-project__vllm-4219
[Doc]: Engine arguments of lora are not shown in VLLM docs on homepage ### ๐Ÿ“š The doc issue The lora arguments are not shown on this page. https://docs.vllm.ai/en/latest/models/engine_args.html ### Suggest a potential alternative/fix Add latest documentation from source code
[ { "content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup -------------------------------------------...
[ { "content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup -------------------------------------------...
diff --git a/docs/source/conf.py b/docs/source/conf.py index 19cc8557a754..cfa956b143ba 100644 --- a/docs/source/conf.py +++ b/docs/source/conf.py @@ -11,12 +11,14 @@ # documentation root, use os.path.abspath to make it absolute, like shown here. import logging +import os import sys from typing import List fro...
vllm-project__vllm-5319
[Feature]: support `stream_options` option ### ๐Ÿš€ The feature, motivation and pitch According to openAI doc: https://platform.openai.com/docs/api-reference/chat/create#chat-create-stream_options. The API provide the stream_options which can get token usage info for stream request. ### Alternatives _No response_ ##...
[ { "content": "import time\nfrom typing import (AsyncGenerator, AsyncIterator, Callable, Dict, List,\n Optional)\nfrom typing import Sequence as GenericSequence\nfrom typing import Tuple\n\nfrom fastapi import Request\n\nfrom vllm.config import ModelConfig\nfrom vllm.engine.async_llm_engine im...
[ { "content": "import time\nfrom typing import (AsyncGenerator, AsyncIterator, Callable, Dict, List,\n Optional)\nfrom typing import Sequence as GenericSequence\nfrom typing import Tuple\n\nfrom fastapi import Request\n\nfrom vllm.config import ModelConfig\nfrom vllm.engine.async_llm_engine im...
diff --git a/tests/entrypoints/test_openai_server.py b/tests/entrypoints/test_openai_server.py index b7d0946ba724..d0fe08ae0ddd 100644 --- a/tests/entrypoints/test_openai_server.py +++ b/tests/entrypoints/test_openai_server.py @@ -478,8 +478,6 @@ async def test_completion_streaming(server, client: openai.AsyncOpenAI, ...
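Per the OpenAI API documentation linked in the issue, `stream_options: {"include_usage": true}` makes the server emit one extra final chunk whose `choices` list is empty and whose `usage` field carries the token counts. A minimal sketch of building that final chunk (field names follow the public API shape; the helper function itself is hypothetical, not vLLM's implementation):

```python
def make_final_usage_chunk(request_id, model, prompt_tokens, completion_tokens):
    """Final streamed chunk when stream_options.include_usage is set:
    empty `choices`, populated `usage`."""
    return {
        "id": request_id,
        "object": "chat.completion.chunk",
        "model": model,
        "choices": [],  # per the API, the usage chunk carries no delta
        "usage": {
            "prompt_tokens": prompt_tokens,
            "completion_tokens": completion_tokens,
            "total_tokens": prompt_tokens + completion_tokens,
        },
    }
```

All earlier chunks in the stream carry `"usage": null` instead; only this terminal chunk, sent just before `[DONE]`, reports the counts.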
pyro-ppl__numpyro-747
Port autoguide AutoNormal from Pyro I am trying to fit a BNN in #743 with an ELBO loss but couldn't optimize it with stochastic KL. It might work if I use AutoNormal with TraceMeanField_ELBO, so I would like to add this feature to try that option.
[ { "content": "# Copyright Contributors to the Pyro project.\n# SPDX-License-Identifier: Apache-2.0\nfrom collections import namedtuple\nfrom functools import update_wrapper\nimport math\n\nfrom jax import jit, lax, random, vmap\nfrom jax.dtypes import canonicalize_dtype\nfrom jax.lib import xla_bridge\nimport j...
[ { "content": "# Copyright Contributors to the Pyro project.\n# SPDX-License-Identifier: Apache-2.0\nfrom collections import namedtuple\nfrom functools import update_wrapper\nimport math\n\nfrom jax import jit, lax, random, vmap\nfrom jax.dtypes import canonicalize_dtype\nfrom jax.lib import xla_bridge\nimport j...
diff --git a/docs/source/autoguide.rst b/docs/source/autoguide.rst index 37e45dba4..bec58cfcb 100644 --- a/docs/source/autoguide.rst +++ b/docs/source/autoguide.rst @@ -58,3 +58,11 @@ AutoLowRankMultivariateNormal :undoc-members: :show-inheritance: :member-order: bysource + +AutoNormal +---------- +.. au...
pyro-ppl__numpyro-1581
inf's with TruncatedNormal I've seen the discussion in #1184 and #1185, but I'm still seeing this issue with numpyro v0.10.1. Here's a MWE, comparing to scipy's `scipy.stats.truncnorm` implementation: ```python import numpyro numpyro.enable_x64() import numpyro.distributions as dist import jax.numpy as jnp from s...
[ { "content": "# Copyright Contributors to the Pyro project.\n# SPDX-License-Identifier: Apache-2.0\n\n# The implementation largely follows the design in PyTorch's `torch.distributions`\n#\n# Copyright (c) 2016- Facebook, Inc (Adam Paszke)\n# Copyright (c) 2014- Facebook, Inc (Soumi...
[ { "content": "# Copyright Contributors to the Pyro project.\n# SPDX-License-Identifier: Apache-2.0\n\n# The implementation largely follows the design in PyTorch's `torch.distributions`\n#\n# Copyright (c) 2016- Facebook, Inc (Adam Paszke)\n# Copyright (c) 2014- Facebook, Inc (Soumi...
diff --git a/numpyro/distributions/continuous.py b/numpyro/distributions/continuous.py index 15d9ac412..c7e20c97a 100644 --- a/numpyro/distributions/continuous.py +++ b/numpyro/distributions/continuous.py @@ -47,6 +47,7 @@ xlog1py, xlogy, ) +from jax.scipy.stats import norm as jax_norm from numpyro.distri...
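The infs in the MWE come from catastrophic underflow: for a truncation interval deep in the tail, `Phi(high) - Phi(low)` rounds to 0.0 in double precision, so the log-normalizer becomes -inf. Working with survival functions (via `erfc`) keeps the tail mass representable. A pure-Python illustration of the failure mode — the actual fix in the diff uses `jax.scipy.stats.norm`'s log routines, not this code:

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def sf(x):
    """Survival function 1 - Phi(x), accurate in the right tail via erfc."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

low, high = 10.0, 12.0             # truncation interval far in the tail
naive_mass = Phi(high) - Phi(low)  # both CDFs round to 1.0 -> exactly 0.0
tail_mass = sf(low) - sf(high)     # ~7.6e-24, still representable
```

`log(naive_mass)` is undefined (the mass underflowed to zero), while `log(tail_mass)` is a perfectly finite number near -53, which is why log-space formulations avoid the reported infs.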
pyro-ppl__numpyro-984
Allow setting an additional max tree depth during the warmup phase. This could be useful (needs some experiments, though) in cases where NUTS trajectories are long and it is slow to collect samples to estimate the mass matrix during warmup. A side effect is that warmup samples may be more correlated, so the estimated mass matrix mi...
[ { "content": "# Copyright Contributors to the Pyro project.\n# SPDX-License-Identifier: Apache-2.0\n\nfrom collections import OrderedDict, namedtuple\n\nfrom jax import grad, jacfwd, random, value_and_grad, vmap\nfrom jax.flatten_util import ravel_pytree\nimport jax.numpy as jnp\nfrom jax.ops import index_updat...
[ { "content": "# Copyright Contributors to the Pyro project.\n# SPDX-License-Identifier: Apache-2.0\n\nfrom collections import OrderedDict, namedtuple\n\nfrom jax import grad, jacfwd, random, value_and_grad, vmap\nfrom jax.flatten_util import ravel_pytree\nimport jax.numpy as jnp\nfrom jax.ops import index_updat...
diff --git a/numpyro/infer/hmc.py b/numpyro/infer/hmc.py index 4f5ae2f65..a771d5059 100644 --- a/numpyro/infer/hmc.py +++ b/numpyro/infer/hmc.py @@ -247,7 +247,9 @@ def init_kernel( :param float trajectory_length: Length of a MCMC trajectory for HMC. Default value is :math:`2\\pi`. :param...
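One natural shape for this feature is letting `max_tree_depth` be either a single int (used for both phases) or a (warmup, post-warmup) pair, with a small normalization step at kernel init. A hedged sketch of that normalization only (the helper name is hypothetical; see the actual diff for how NumPyro wires it into `init_kernel`):

```python
def normalize_max_tree_depth(mtd):
    """Accept either a single int (applied to both phases) or a
    (warmup_depth, sampling_depth) pair; always return the pair."""
    if isinstance(mtd, int):
        return (mtd, mtd)
    warmup_depth, sampling_depth = mtd
    return (warmup_depth, sampling_depth)
```

Downstream code can then pick the first element while adapting the mass matrix and the second once sampling begins, without branching on the argument's type anywhere else.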
pyro-ppl__numpyro-1547
The initial value of the run function is invalid. In the MCMC class, the following initialization succeeds; it correctly starts from the initial value. ``` kernel = numpyro.infer.NUTS( model, max_tree_depth=max_tree_depth, init_strategy...
[ { "content": "# Copyright Contributors to the Pyro project.\n# SPDX-License-Identifier: Apache-2.0\n\nfrom collections import OrderedDict, namedtuple\nfrom functools import partial\nimport math\nimport os\n\nfrom jax import device_put, lax, random, vmap\nfrom jax.flatten_util import ravel_pytree\nimport jax.num...
[ { "content": "# Copyright Contributors to the Pyro project.\n# SPDX-License-Identifier: Apache-2.0\n\nfrom collections import OrderedDict, namedtuple\nfrom functools import partial\nimport math\nimport os\n\nfrom jax import device_put, lax, random, vmap\nfrom jax.flatten_util import ravel_pytree\nimport jax.num...
diff --git a/numpyro/infer/hmc.py b/numpyro/infer/hmc.py index 2b5cf3350..aa1aae392 100644 --- a/numpyro/infer/hmc.py +++ b/numpyro/infer/hmc.py @@ -649,7 +649,12 @@ def __init__( def _init_state(self, rng_key, model_args, model_kwargs, init_params): if self._model is not None: - init_params,...
pyro-ppl__numpyro-733
Add a better error message for how to debug "Cannot find valid initial parameters" issue Recently, several users got this issue but it is not easy to diagnose the issue just by looking at the `model`. We should add a better error message, e.g. to mention about `numpyro.validation_enabled()` utility, which tells us at w...
[ { "content": "# Copyright Contributors to the Pyro project.\n# SPDX-License-Identifier: Apache-2.0\n\n# The implementation follows the design in PyTorch: torch.distributions.distribution.py\n#\n# Copyright (c) 2016- Facebook, Inc (Adam Paszke)\n# Copyright (c) 2014- Facebook, Inc (...
[ { "content": "# Copyright Contributors to the Pyro project.\n# SPDX-License-Identifier: Apache-2.0\n\n# The implementation follows the design in PyTorch: torch.distributions.distribution.py\n#\n# Copyright (c) 2016- Facebook, Inc (Adam Paszke)\n# Copyright (c) 2014- Facebook, Inc (...
diff --git a/numpyro/distributions/distribution.py b/numpyro/distributions/distribution.py index c245790c2..82a6c2b41 100644 --- a/numpyro/distributions/distribution.py +++ b/numpyro/distributions/distribution.py @@ -141,7 +141,8 @@ def __init__(self, batch_shape=(), event_shape=(), validate_args=None): ...
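The requested improvement is mainly a message change: when the initial-parameter search fails, point the user at `numpyro.validation_enabled()`. A sketch of what such a raising site could look like (the surrounding `found_valid` logic is hypothetical; only the message content follows the issue's suggestion):

```python
def check_init_params(found_valid):
    """Raise a diagnostic-friendly error when no valid initial
    parameters were found for the model."""
    if not found_valid:
        raise RuntimeError(
            "Cannot find valid initial parameters. Please check your model "
            "again. Running the model under numpyro.validation_enabled() "
            "can pinpoint the site where an invalid value first appears."
        )
```

The point of the hint is that validation raises at the first distribution whose parameters or support are violated, which is far easier to act on than a failed global search.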