repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
pytorch/tutorials | 259 | Confirm whether the seq2seq tutorial uses batch training? | For the tutorial _intermediate_source/seq2seq_translation_tutorial.py_: https://github.com/pytorch/tutorials/blob/master/intermediate_source/seq2seq_translation_tutorial.py
According to lines 636-646, it seems to be training on one sentence at a time, instead of batch training. Am I understanding it right? | https://github.com/pytorch/tutorials/issues/259 | closed | [] | 2018-06-14T04:55:06Z | 2021-07-30T23:01:59Z | 3 | ecilay |
pytorch/examples | 373 | float16 mixed precision training on Titan V is slower than float32 | Since I cannot find a place to download the ImageNet dataset, I modified the MNIST example to support float16 training; please see the code in https://github.com/qigtang/examples.git, commit ed095d384529808f930161cbf005963ad482c22a
When running on my Titan V GPU:

```
======= float32 mode =======
$ time python main.py
real    0m31.326s
user    1m24.282s
sys     0m19.782s

======= float16 mode =======
$ time python main.py --fp16
real    0m34.736s
user    1m23.025s
sys     0m21.134s
```
The float16 code is actually slower. What a surprise.
The docker image I am using is
nvcr.io/nvidia/pytorch 18.05-py3
@csarofeen @nvidia
Questions:
1. Does PyTorch 0.4 compile half-precision math into Volta Tensor Core float16*float16 operations?
2. Why does the official NVIDIA mixed-precision training documentation not report any performance numbers?
| https://github.com/pytorch/examples/issues/373 | closed | [
"question"
] | 2018-06-13T18:22:29Z | 2022-03-10T04:11:59Z | 1 | qigtang |
pytorch/examples | 372 | SNLI Why config.d_out is 4 in snli/train.py? | In SNLI, the unknown label in answers is removed. It becomes a 3-way classification, i.e., entailment, neutral, and contradiction. But why the config.d_out is assigned to 4 in line 39 in [snli/train.py](https://github.com/pytorch/examples/blob/f83508117b1ba9b752b227de992799093af3b215/snli/train.py#L39)? | https://github.com/pytorch/examples/issues/372 | closed | [] | 2018-06-12T11:21:10Z | 2020-02-19T06:43:35Z | 0 | shaoxiongji |
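A likely explanation (an assumption here, based on how torchtext builds label vocabularies): the label field's vocab reserves index 0 for `'<unk>'`, so the three real labels occupy indices 1-3 and the output layer needs 4 units. A minimal stdlib sketch of that behaviour (hypothetical helper `build_label_vocab`, not torchtext itself):

```python
# Sketch of why a 3-class task can need 4 output units: torchtext-style
# label vocabularies prepend an '<unk>' entry, so real labels start at 1.
def build_label_vocab(labels):
    """Mimic a vocab that reserves index 0 for '<unk>'."""
    itos = ['<unk>'] + sorted(set(labels))
    stoi = {s: i for i, s in enumerate(itos)}
    return itos, stoi

itos, stoi = build_label_vocab(['entailment', 'neutral', 'contradiction'])
d_out = len(itos)  # 4, even though only 3 labels ever occur in the data
```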
pytorch/text | 335 | where is the documentation? | https://github.com/pytorch/text/issues/335 | closed | [] | 2018-06-05T08:24:53Z | 2018-06-06T11:05:01Z | null | udion | |
pytorch/examples | 368 | Is this a correct implementation of an RNN model? | I found an implementation of an RNN model, but its "forward" method is not in the usual format: the "forward" function takes three parameters. I wonder, is this a correct implementation of an RNN model?
The link: https://github.com/zhangxu0307/time_series_forecasting_pytorch/blob/master/code/model.py | https://github.com/pytorch/examples/issues/368 | closed | [] | 2018-06-04T10:15:28Z | 2018-06-04T16:20:42Z | 1 | lxj0276 |
pytorch/examples | 367 | TransformerNet no longer works in pytorch 0.4 | Is there anything that can be done to fix this?
When I call it I receive:
Traceback (most recent call last):
File "neural_style.py", line 651, in <module>
main()
File "neural_style.py", line 645, in main
stylize(args)
File "neural_style.py", line 437, in stylize
style_model.load_state_dict(torch.load(modX))
File "D:\Vitrual.C.Drive\Anaconda\envs\Pytorch\lib\site-packages\torch\nn\modules\module.py", line 721, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for TransformerNet:
Unexpected running stats buffer(s) "in1.running_mean" and "in1.running_var" for InstanceNorm2d with track_running_stats=False. If state_dict is a checkpoint saved before 0.4.0, this may be expected because InstanceNorm2d does not track running stats by default since 0.4.0. Please remove these keys from state_dict. If the running stats are actually needed, instead set track_running_stats=True in InstanceNorm2d to enable them. See the documentation of InstanceNorm2d for details.
Unexpected running stats buffer(s) "in2.running_mean" and "in2.running_var" for InstanceNorm2d with track_running_stats=False. If state_dict is a checkpoint saved before 0.4.0, this may be expected because InstanceNorm2d does not track running stats by default since 0.4.0. Please remove these keys from state_dict. If the running stats are actually needed, instead set track_running_stats=True in InstanceNorm2d to enable them. See the documentation of InstanceNorm2d for details.
Unexpected running stats buffer(s) "in3.running_mean" and "in3.running_var" for InstanceNorm2d with track_running_stats=False. If state_dict is a checkpoint saved before 0.4.0, this may be expected because InstanceNorm2d does not track running stats by default since 0.4.0. Please remove these keys from state_dict. If the running stats are actually needed, instead set track_running_stats=True in InstanceNorm2d to enable them. See the documentation of InstanceNorm2d for details.
Unexpected running stats buffer(s) "res1.in1.running_mean" and "res1.in1.running_var" for InstanceNorm2d with track_running_stats=False. If state_dict is a checkpoint saved before 0.4.0, this may be expected because InstanceNorm2d does not track running stats by default since 0.4.0. Please remove these keys from state_dict. If the running stats are actually needed, instead set track_running_stats=True in InstanceNorm2d to enable them. See the documentation of InstanceNorm2d for details.
Unexpected running stats buffer(s) "res1.in2.running_mean" and "res1.in2.running_var" for InstanceNorm2d with track_running_stats=False. If state_dict is a checkpoint saved before 0.4.0, this may be expected because InstanceNorm2d does not track running stats by default since 0.4.0. Please remove these keys from state_dict. If the running stats are actually needed, instead set track_running_stats=True in InstanceNorm2d to enable them. See the documentation of InstanceNorm2d for details.
Unexpected running stats buffer(s) "res2.in1.running_mean" and "res2.in1.running_var" for InstanceNorm2d with track_running_stats=False. If state_dict is a checkpoint saved before 0.4.0, this may be expected because InstanceNorm2d does not track running stats by default since 0.4.0. Please remove these keys from state_dict. If the running stats are actually needed, instead set track_running_stats=True in InstanceNorm2d to enable them. See the documentation of InstanceNorm2d for details.
Unexpected running stats buffer(s) "res2.in2.running_mean" and "res2.in2.running_var" for InstanceNorm2d with track_running_stats=False. If state_dict is a checkpoint saved before 0.4.0, this may be expected because InstanceNorm2d does not track running stats by default since 0.4.0. Please remove these keys from state_dict. If the running stats are actually needed, instead set track_running_stats=True in InstanceNorm2d to enable them. See the documentation of InstanceNorm2d for details.
Unexpected running stats buffer(s) "res3.in1.running_mean" and "res3.in1.running_var" for InstanceNorm2d with track_running_stats=False. If state_dict is a checkpoint saved before 0.4.0, this may be expected because InstanceNorm2d does not track running stats by default since 0.4.0. Please remove these keys from state_dict. If the running stats are actually needed, instead set track_running_stats=True in InstanceNorm2d to enable them. See the documentation of InstanceNorm2d for details.
Unexpected running stats buffer(s) "res3.in2.running_mean" and "res3.in2.running_var" for InstanceNorm2d with track_running_stats=False. If state_dict is a checkpoint saved before 0.4.0, this may be expected because InstanceNorm2d does not track running stats by default since 0.4.0. Please remove these keys from state_dict. If the running stats are actually needed, instead set track_running_stats=True in InstanceNorm2d to e | https://github.com/pytorch/examples/issues/367 | closed | [] | 2018-06-03T22:24:45Z | 2022-11-25T21:38:19Z | 2 | Zekodon |
pytorch/examples | 362 | InceptionV3 cannot work! | `python main.py -a inception_v3 ./imagenet/cat2dog --batch-size 16 --print-freq 1 --pretrained;`
=> using pre-trained model 'inception_v3'
```
Traceback (most recent call last):
  File "main.py", line 314, in <module>
    main()
  File "main.py", line 157, in main
    train(train_loader, model, criterion, optimizer, epoch)
  File "main.py", line 189, in train
    target = target.cuda(non_blocking=True)
TypeError: _cuda() got an unexpected keyword argument 'non_blocking'
```
| https://github.com/pytorch/examples/issues/362 | open | [
"help wanted",
"vision"
] | 2018-05-27T21:15:55Z | 2022-03-10T06:02:49Z | 8 | happsky |
pytorch/examples | 357 | language model generator question | In this file:
https://github.com/pytorch/examples/blob/master/word_language_model/generate.py
What does this input mean in the generation?
`input = torch.randint(ntokens, (1, 1), dtype=torch.long).to(device)`
As I understand it, in an RNN-based language model the last output of the RNN is fed back in as the current input and the sequence is unrolled. What is the meaning of this random input? Does it enforce that the last output is fed in as the current input during unrolling?
Thanks!
(I am building a sequence generator that needs to consume its previous output as its next input, and I am wondering how to do it. Are you suggesting that just feeding in a random input would also work? Any hints would be helpful!) | https://github.com/pytorch/examples/issues/357 | open | [
"triaged"
] | 2018-05-18T22:37:47Z | 2022-03-10T00:29:50Z | 2 | evanthebouncy |
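One reading (an assumption, not confirmed by the example's authors): the random token is only the seed input for generation; after that, each sampled output is fed back in as the next input. A stdlib sketch of that loop, with a toy next-token rule standing in for the RNN:

```python
# Sketch of a generate.py-style loop: start from one (random) token, then
# at each step sample a next token and feed it back in as the new input.
# toy_next_token is a stand-in for "run the RNN, sample from its output".
import random

def toy_next_token(token, ntokens, rng):
    return rng.randrange(ntokens)

def generate(ntokens, steps, seed=0):
    rng = random.Random(seed)
    token = rng.randrange(ntokens)     # the random initial input
    out = [token]
    for _ in range(steps):
        token = toy_next_token(token, ntokens, rng)  # output becomes next input
        out.append(token)
    return out

seq = generate(ntokens=10, steps=5)
assert len(seq) == 6  # seed token plus 5 generated tokens
```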
pytorch/examples | 355 | Imagenet training example - RandomResizedCrop | This is regarding
https://github.com/pytorch/examples/blob/master/imagenet/main.py#L122
The default scale argument for the transform RandomResizedCrop is defined as scale=(0.08, 1.0) - defined in pytorch/vision/transform
RandomResizedCrop crops first and then scales to the desired size. What could be the logic in setting the lower limit of the crop area to as low as 0.08? 0.08 corresponds to a very small portion of the image.
I have seen (in my limited experimentation) that this is the reason for very slow training on ImageNet classification.
If we just change it to scale=(0.5, 1.0), then it trains fine. 0.75 roughly corresponds to the commonly used area ratio of (224x224)/(256x256). Since this scale is a random range, and we want its middle to be around 0.75, scale=(0.5, 1.0) is a good choice.
The change can be done by passing scale argument to RandomResizedCrop transform.
```
transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.5, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    normalize,
]))
```
Does this make sense? I have to admit that I have done only limited experimentation with this.
| https://github.com/pytorch/examples/issues/355 | closed | [] | 2018-05-16T21:40:38Z | 2018-06-05T13:24:36Z | 1 | mathmanu |
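The area ratio the author refers to is easy to check; a two-line sketch of the arithmetic:

```python
# Area ratios behind the scale argument: a 224x224 crop out of a 256x256
# image keeps about 0.77 of the area, far above the default lower bound.
default_lo, default_hi = 0.08, 1.0
crop_area_ratio = (224 * 224) / (256 * 256)

print(round(crop_area_ratio, 4))  # 0.7656
# A range like (0.5, 1.0) is centred near that ratio; (0.08, 1.0) allows
# crops covering as little as 8% of the image.
```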
pytorch/examples | 347 | fast_neural_style using cuda | PyTorch 0.4.0
CUDA 9.0
cuDNN 7.1
Python 3.5
I am trying to train a new model using CUDA,
and I am getting a RuntimeError:
```
Traceback (most recent call last):
File "neural_style/neural_style.py", line 239, in <module>
main()
File "neural_style/neural_style.py", line 233, in main
train(args)
File "neural_style/neural_style.py", line 78, in train
features_x = vgg(x)
File "/home/dell/sbull/onnx/env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "/home/dell/sbull/onnx/examples/fast_neural_style/neural_style/vgg.py", line 28, in forward
h = self.slice1(X)
File "/home/dell/sbull/onnx/env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "/home/dell/sbull/onnx/env/lib/python3.5/site-packages/torch/nn/modules/container.py", line 91, in forward
input = module(input)
File "/home/dell/sbull/onnx/env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "/home/dell/sbull/onnx/env/lib/python3.5/site-packages/torch/nn/modules/conv.py", line 301, in forward
self.padding, self.dilation, self.groups)
RuntimeError: Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor for argument #2 'weight'
```
I have spent some time trying to figure out what to fix, but without luck.
It seems the VGG16 applied to x needs to be moved to CUDA, like y, but I cannot figure out how to get that to work.
| https://github.com/pytorch/examples/issues/347 | closed | [] | 2018-05-03T19:17:57Z | 2018-05-15T20:39:02Z | 2 | spencerbull |
pytorch/ELF | 6 | What is the winrate for the Leela Zero rematch and how is it coming along? | https://github.com/gcp/leela-zero/issues/1311#issuecomment-386156687 | https://github.com/pytorch/ELF/issues/6 | closed | [] | 2018-05-03T03:21:42Z | 2018-05-03T15:09:12Z | null | bochen2027 |
pytorch/vision | 484 | What is the relationship between the output label of pretrained models in the model zoo and WordNet synset IDs? | We can easily access PyTorch pre-trained models like VGG, AlexNet, and SqueezeNet via:
```
import torchvision
torchvision.models.vgg16(pretrained=True)
```
Can anyone point out the relationship between the output label (the index of the maximum output value) and the actual category?
I downloaded ILSVRC2012_devkit_t12 and got the ImageNet IDs and other meta-info provided by meta.mat; however, the pre-trained models seem to use different IDs, because when I evaluate the network on the ILSVRC2012 validation set it reports 100% error. | https://github.com/pytorch/vision/issues/484 | open | [
"enhancement"
] | 2018-05-02T07:23:46Z | 2019-06-10T10:06:57Z | null | imkzh |
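A likely cause (stated as an assumption, not verified against this issue): torchvision's pretrained models use ImageFolder-style class indices, i.e. the position of each WNID directory name in lexicographic order, whereas meta.mat uses the ILSVRC2012_ID ordering. A tiny illustration of the sorted-WNID indexing, with three real synset IDs:

```python
# ImageFolder-style index assignment: class index = position of the WNID
# directory name in sorted order, NOT the ILSVRC2012_ID from meta.mat.
wnids = ['n02119789', 'n01440764', 'n02100735']          # directory names
class_to_idx = {w: i for i, w in enumerate(sorted(wnids))}

print(class_to_idx)  # {'n01440764': 0, 'n02100735': 1, 'n02119789': 2}
```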
pytorch/tutorials | 226 | How to get tutorials for pytorch-0.3.1 | The site http://pytorch.org/tutorials/ now only covers pytorch-0.4.0.
How can I get earlier versions of the tutorials? | https://github.com/pytorch/tutorials/issues/226 | closed | [] | 2018-04-25T04:18:33Z | 2018-04-27T11:06:26Z | 1 | HarryRuiTse |
pytorch/pytorch | 6,486 | Where is the Caffe2 website? | The gh-pages branch doesn't exist. | https://github.com/pytorch/pytorch/issues/6486 | closed | [] | 2018-04-10T21:54:41Z | 2018-04-10T21:58:08Z | null | louisabraham |
pytorch/examples | 330 | Use pretrained word embeddings | I want to use my pretrained word embeddings to train this model. How do I go about implementing it?
Thanks! | https://github.com/pytorch/examples/issues/330 | closed | [
"question"
] | 2018-04-10T18:07:59Z | 2022-03-10T03:43:27Z | 3 | BordiaS |
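A common recipe (sketched under the assumption that the model uses nn.Embedding): build a [vocab_size, dim] matrix row-by-row from the pretrained vectors, falling back to small random values for out-of-vocabulary words, then copy it into the embedding's weight (e.g. `embedding.weight.data.copy_(...)`). The matrix-building step in plain Python (hypothetical helper `build_embedding_matrix`):

```python
# Building the weight matrix that would be copied into nn.Embedding:
# one row per vocab word, pretrained vector if available, else small random.
import random

def build_embedding_matrix(vocab, pretrained, dim, seed=0):
    rng = random.Random(seed)
    matrix = []
    for word in vocab:
        if word in pretrained:
            matrix.append(list(pretrained[word]))
        else:
            matrix.append([rng.uniform(-0.1, 0.1) for _ in range(dim)])
    return matrix

pretrained = {'cat': [0.1, 0.2], 'dog': [0.3, 0.4]}
vocab = ['<unk>', 'cat', 'dog']
weights = build_embedding_matrix(vocab, pretrained, dim=2)
assert weights[1] == [0.1, 0.2]  # pretrained row copied through
```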
pytorch/pytorch | 6,468 | BatchNorm2d when batch size 1 works, what is it doing? | `BatchNorm2d` works even when batch size is 1, which puzzles me. So what is it doing when batch size is 1? The only related thread I could find is https://github.com/pytorch/pytorch/issues/1381 without much explanation.
minimal example:
```
x = Variable(torch.randn(1,2,3,3))
m = nn.BatchNorm2d(2)
y = m(x)
``` | https://github.com/pytorch/pytorch/issues/6468 | closed | [] | 2018-04-10T15:09:39Z | 2018-04-10T16:04:25Z | null | chanshing |
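With batch size 1, BatchNorm2d still averages over all N*H*W positions per channel, so for a 1x2x3x3 input each channel's statistics come from 9 values. A plain-Python sketch of the per-channel computation (training-mode, biased variance, mirroring the usual definition rather than PyTorch's exact code):

```python
# What BatchNorm2d computes, sketched for one channel of a (1, C, H, W)
# input: normalise by the mean/variance over all N*H*W positions. With
# N == 1 there are still H*W values per channel, so it is well defined.
def batchnorm_channel(values, eps=1e-5):
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n  # biased, as in training mode
    return [(v - mean) / (var + eps) ** 0.5 for v in values]

channel = [0.5, -1.0, 2.0, 0.0, 1.0, -0.5, 0.25, 0.75, -0.25]  # one 3x3 channel
out = batchnorm_channel(channel)
mean_out = sum(out) / len(out)  # ~0 after normalisation
```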
pytorch/examples | 327 | Absence of seed for result reproduction | Hello,
When running ImageNet with different ResNet architectures (18, 152, ...), I'm not able to reproduce the results; there is a small variation in accuracy.
https://github.com/pytorch/examples/blob/master/imagenet/main.py
What is wrong?
Even after setting, in
```
main() :
seed=15
torch.manual_seed(seed)
np.random.seed(seed)
```
I don't get the same result.
Thank you for your consideration
| https://github.com/pytorch/examples/issues/327 | closed | [] | 2018-04-09T13:40:23Z | 2022-03-10T03:40:23Z | 1 | pinkfloyd06 |
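The reproducibility contract being asked for can be sketched with the stdlib RNG: identical seed, identical stream. In the actual script, full reproducibility would additionally need `torch.cuda.manual_seed_all`, `cudnn.deterministic = True`, `cudnn.benchmark = False`, and deterministic data loading; these extra steps are my assumption about the source of the variation, not a confirmed diagnosis.

```python
# Same seed => same sequence, shown with the stdlib RNG. Any unseeded or
# nondeterministic component (cudnn autotuning, loader workers) breaks this.
import random

def seeded_draws(seed, n=5):
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

run_a = seeded_draws(15)
run_b = seeded_draws(15)
assert run_a == run_b  # reproducible given a fixed seed
```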
pytorch/examples | 326 | [Super resolution] Image resizing & low PSNR result | https://github.com/pytorch/examples/blob/dcdabc22b305d2f2989c6f03570dfcd3919e8a5b/super_resolution/data.py#L41
I think LANCZOS interpolation for resizing is better than the default BILINEAR:
`Resize(crop_size // upscale_factor,interpolation=Image.LANCZOS)`
__How does downsampling work in a normal SR?__
And on the Set5 dataset, I found that the PSNR value is lower than with the bicubic method.
Why? | https://github.com/pytorch/examples/issues/326 | open | [
"vision"
] | 2018-04-08T13:17:50Z | 2022-03-10T03:44:41Z | 8 | ryujaehun |
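As a side note, the PSNR convention used for SR benchmarks can be checked independently of the framework: 10 * log10(MAX^2 / MSE). A small sketch (hypothetical helper, made-up pixel values), useful for ruling out the metric itself when comparing against bicubic numbers:

```python
# PSNR as usually reported for SR benchmarks: 10 * log10(MAX^2 / MSE),
# with MAX = 255 for 8-bit images.
import math

def psnr(reference, estimate, max_val=255.0):
    mse = sum((r - e) ** 2 for r, e in zip(reference, estimate)) / len(reference)
    if mse == 0:
        return float('inf')  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

ref = [52, 55, 61, 66]   # toy 8-bit pixel values
est = [50, 57, 60, 68]
value = psnr(ref, est)   # roughly 43 dB for these small errors
```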
pytorch/tutorials | 221 | epub format support | Is it possible to officially provide an epub format of the tutorials?
I have tried to build it with `make epub`,
but it took too much time and I never finished it. | https://github.com/pytorch/tutorials/issues/221 | closed | [] | 2018-04-05T13:11:36Z | 2018-04-27T11:08:18Z | 3 | zmlcc |
pytorch/tutorials | 218 | Char-RNN tutorial giving Error. | I was running the code for Char level RNN in the PyTorch docs, found here: http://pytorch.org/tutorials/intermediate/char_rnn_classification_tutorial.html .
I got the error:
```
Traceback (most recent call last):
File "names.py", line 86, in <module>
rnn = RNN(n_letters, n_hidden, n_categories)
File "names.py", line 72, in __init__
self.i2o = nn.Linear(input_size + hidden_size, output_size)
File "/home/ayush99/anaconda3/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 46, in __init__
self.reset_parameters()
File "/home/ayush99/anaconda3/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 49, in reset_parameters
stdv = 1. / math.sqrt(self.weight.size(1))
RuntimeError: invalid argument 2: dimension 1 out of range of 0D tensor at /opt/conda/conda-bld/pytorch-cpu_1518282373170/work/torch/lib/TH/generic/THTensor.c:24
```
System specs: I was running this on the CPU.
Why is this happening? The examples from the docs should work just fine. | https://github.com/pytorch/tutorials/issues/218 | closed | [] | 2018-03-25T15:37:52Z | 2021-06-16T21:33:27Z | 1 | ayush1999 |
pytorch/tutorials | 216 | The code snippets in "How to create custom C extension" have something wrong IMHO | On the official [how to create a custom C extension](http://pytorch.org/tutorials/advanced/c_extension.html) tutorial page, I think there are still minor problems. First, in the src/my_lib.c file, here is the code snippet:
```
int my_lib_add_backward(THFloatTensor *grad_output, THFloatTensor *grad_input)
{
THFloatTensor_resizeAs(grad_input, grad_output);
THFloatTensor_fill(grad_input, 1);
return 1;
}
```
The statement `THFloatTensor_fill(grad_input, 1)` in the function `my_lib_add_backward` isn't correct in my opinion, because in the backward function, given the gradient w.r.t. the output, you should return the gradient w.r.t. the input, so grad_input should be a copy of grad_output rather than being filled with 1s.
What's more, in step 2, the backward method of the MyAddFunction Function should return two input gradients, because there are two inputs in the corresponding forward method. Below is the related class definition.
```
class MyAddFunction(Function):
def forward(self, input1, input2):
output = torch.FloatTensor()
my_lib.my_lib_add_forward(input1, input2, output)
return output
def backward(self, grad_output):
grad_input = torch.FloatTensor()
my_lib.my_lib_add_backward(grad_output, grad_input)
return grad_input
```
Hoping for an explanation or a fix, so as not to confuse newcomers who read this page. | https://github.com/pytorch/tutorials/issues/216 | closed | [] | 2018-03-24T14:53:42Z | 2018-05-19T18:00:54Z | 1 | sonack |
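The reporter's point can be stated as a one-line derivation: for an elementwise add, every partial derivative is 1, so backward should copy grad_output into grad_input rather than fill it with 1s.

```latex
% Backward of y = x_1 + x_2 (elementwise): a pure pass-through.
\[
y_k = x_{1,k} + x_{2,k},
\qquad
\frac{\partial y_k}{\partial x_{1,k}} = \frac{\partial y_k}{\partial x_{2,k}} = 1
\quad\Longrightarrow\quad
\frac{\partial L}{\partial x_{1,k}} = \frac{\partial L}{\partial x_{2,k}}
= \frac{\partial L}{\partial y_k}.
\]
```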
pytorch/examples | 317 | How to understand this way of declaring a class? | `class Linear(Bottle, nn.Linear):
pass`
(in snli/model.py line 16)
I'm new user of torch. I get confused about this statement. Can someone help me?
| https://github.com/pytorch/examples/issues/317 | closed | [] | 2018-03-17T08:24:56Z | 2018-03-17T14:25:02Z | 1 | jueliangguke |
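What that declaration does can be reproduced with plain classes: Python's MRO puts Bottle before nn.Linear, so Bottle's forward wraps the linear layer's forward via super(). A toy sketch (Base stands in for nn.Linear; the reshape mimics what a Bottle-style mixin typically does, which is an assumption about this particular code):

```python
# The mixin pattern behind `class Linear(Bottle, nn.Linear): pass`,
# reproduced with plain classes. MRO is [Linear, Bottle, Base, object],
# so Bottle.forward runs first and delegates to Base.forward via super().
class Base:
    def forward(self, x):
        return [v * 2 for v in x]          # stand-in for nn.Linear on 1-D data

class Bottle:
    def forward(self, x):                  # pre/post-processing wrapper
        flat = [v for row in x for v in row]            # "view" 2-D as 1-D
        out = super().forward(flat)
        width = len(x[0])
        return [out[i:i + width] for i in range(0, len(out), width)]

class Linear(Bottle, Base):
    pass

print([c.__name__ for c in Linear.__mro__])  # ['Linear', 'Bottle', 'Base', 'object']
result = Linear().forward([[1, 2], [3, 4]])
print(result)  # [[2, 4], [6, 8]]
```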
pytorch/pytorch | 5,833 | [Doc Bug] where is classmethod torch.nn.Embedding.from_pretrained? | There is a method to initialize Embedding from pretrained data (torch.Tensor).
http://pytorch.org/docs/master/nn.html
However that method does not exist in pytorch 0.3.1 .
If it was deprecated, what should I do to load pretrained word vectors such as torchtext.vocab.GloVe?
```python
import torch as th
emb = th.nn.Embedding(10, 20)
emb.weight[1] = 0 # ERROR
emb.weight.requires_grad = False
emb.weight[1] = 0 # still ERROR
```
error message:
```
RuntimeError: in-place operations can be only used on variables that don't share storage with any other variables, but detected that there are 2 objects sharing it
``` | https://github.com/pytorch/pytorch/issues/5833 | closed | [] | 2018-03-16T13:03:21Z | 2018-03-16T13:19:43Z | null | cdluminate |
pytorch/examples | 316 | Imagenet datasets | How can I get the validation images of the ImageNet dataset? | https://github.com/pytorch/examples/issues/316 | closed | [] | 2018-03-16T08:31:52Z | 2018-11-07T17:33:11Z | 2 | 22wei22 |
pytorch/examples | 312 | Doc comment on `accuracy` method in imagenet example, incorrect? | I'm confused with the doc comment for the `accuracy` function in the imagenet example:
```python
def accuracy(output, target, topk=(1,)):
"""Computes the precision@k for the specified values of k"""
maxk = max(topk)
batch_size = target.size(0)
_, pred = output.topk(maxk, 1, True, True)
pred = pred.t()
correct = pred.eq(target.view(1, -1).expand_as(pred))
res = []
for k in topk:
correct_k = correct[:k].view(-1).float().sum(0, keepdim=True)
res.append(correct_k.mul_(100.0 / batch_size))
return res
```
https://github.com/pytorch/examples/blob/master/imagenet/main.py#L298-L299
This seems like it computes accuracy and not precision as false positives are not accounted for. Should the doc comment read "Computes the accuracy@k for the specified values of k" or is my understanding of precision for object detection incorrect?
Many thanks for pytorch, it's a great library! | https://github.com/pytorch/examples/issues/312 | open | [
"good first issue"
] | 2018-02-27T12:02:28Z | 2022-03-10T03:09:32Z | 1 | willprice |
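The commenter's reading looks right: the quantity measured is top-k accuracy, i.e. the fraction of samples whose true label appears among the k highest scores; precision would also need false positives, which never enter here. A plain-Python restatement for comparison (a sketch, not the example's code):

```python
# Top-k accuracy: percentage of samples whose target index is among the
# k highest-scoring class indices.
def topk_accuracy(outputs, targets, k=1):
    hits = 0
    for scores, target in zip(outputs, targets):
        topk = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:k]
        hits += target in topk
    return 100.0 * hits / len(targets)

outputs = [[0.1, 0.7, 0.2], [0.5, 0.2, 0.3], [0.05, 0.15, 0.8]]
targets = [1, 2, 2]
print(round(topk_accuracy(outputs, targets, k=1), 1))  # 66.7
print(topk_accuracy(outputs, targets, k=2))            # 100.0
```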
pytorch/examples | 308 | Clarification | https://github.com/pytorch/examples/blob/4ef2d4d0c8524372d0047e050065edcac665ce1a/vae/main.py#L61
Is there a particular reason why the method .exp_() is preferred to .exp()? | https://github.com/pytorch/examples/issues/308 | closed | [] | 2018-02-23T12:07:39Z | 2018-12-13T06:45:41Z | 1 | ggbioing |
pytorch/examples | 304 | Is it possible to run snli: train.py on CPU (without CUDA)? | ```
$ conda list pytorch
# packages in environment at /Users/davidlaxer/anaconda:
#
pytorch 0.2.0 py27_4cu75 soumith
$ export NO_CUDA=0; python train.py
Traceback (most recent call last):
File "train.py", line 17, in <module>
torch.cuda.set_device(args.gpu)
File "/Users/davidlaxer/anaconda/lib/python2.7/site-packages/torch/cuda/__init__.py", line 162, in set_device
torch._C._cuda_setDevice(device)
AttributeError: 'module' object has no attribute '_cuda_setDevice'
```
| https://github.com/pytorch/examples/issues/304 | closed | [] | 2018-02-10T19:38:49Z | 2022-04-07T18:19:14Z | 3 | dbl001 |
pytorch/examples | 298 | Reversed Sign? | https://github.com/pytorch/examples/blob/963f7d1777cd20af3be30df40633356ba82a6b0c/vae/main.py#L105
Aren't we trying to maximize that and hence there needs to be a negative sign here? | https://github.com/pytorch/examples/issues/298 | closed | [] | 2018-02-03T18:14:11Z | 2018-02-07T11:29:35Z | 2 | whamza15 |
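For reference, the closed-form Gaussian KL term used in this VAE (Kingma & Welling, Auto-Encoding Variational Bayes, Appendix B) is

```latex
% KL divergence between q(z|x) = N(mu, sigma^2) and the prior p(z) = N(0, I)
\[
D_{\mathrm{KL}}\!\left(q(z \mid x)\,\|\,p(z)\right)
= -\tfrac{1}{2} \sum_{j=1}^{J}\left(1 + \log \sigma_j^2 - \mu_j^2 - \sigma_j^2\right).
\]
```

The code minimises reconstruction loss plus this KL term, i.e. the negative ELBO, so the sign in the snippet is consistent as long as KLD is added (not subtracted) in the total loss.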
pytorch/examples | 286 | Batching in Word Level Language Model | Hi,
It is not clear how batching happens in the language model.
It is not clear whether the input to the model in every iteration of the loop is [seq_length, batch_size, embed_size] or [batch_size, seq_length, embed_size].
Also, why does the RNN model return output and hidden separately? They are the same, since for an RNN layer the hidden state itself is the output.
Thanks for the awesome library. | https://github.com/pytorch/examples/issues/286 | closed | [] | 2018-01-15T16:44:11Z | 2018-01-17T03:32:51Z | 7 | mourinhoxyz |
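In the word_language_model example the data is batchified column-wise, so each training input is [seq_length, batch_size] (time-major, the default for torch RNNs without batch_first=True). A pure-Python sketch of that batchify step (a toy re-implementation, not the example's exact code):

```python
# Column-major batching as in word_language_model: the token stream is cut
# into `bsz` equal columns, then read a few rows (time steps) at a time,
# giving inputs of shape [seq_len, batch_size].
def batchify(tokens, bsz):
    nbatch = len(tokens) // bsz
    tokens = tokens[:nbatch * bsz]          # trim the ragged tail
    # column i holds the i-th contiguous chunk of the stream
    return [[tokens[col * nbatch + row] for col in range(bsz)]
            for row in range(nbatch)]

stream = list(range(12))          # a toy token-id stream
data = batchify(stream, bsz=3)    # 4 rows (time steps) x 3 columns (sequences)
print(data[0])  # [0, 4, 8] -> first time step of each of the 3 sequences
```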
pytorch/examples | 280 | Needs updating for PyTorch HEAD (no_grad) | volatile is no more in PyTorch HEAD, which means that you have to use the `no_grad` context manager now. Any examples using volatile need to be ported accordingly. However, we shouldn't do this until the next release, because examples should work for the current release. (If someone wants to get the jump, maybe a dev branch is warranted.)
CC @colesbury
| https://github.com/pytorch/examples/issues/280 | open | [
"help wanted"
] | 2018-01-09T18:58:27Z | 2022-03-10T05:54:42Z | 2 | ezyang |
pytorch/examples | 278 | Is total variation loss necessary in fast_neural_style? | I notice that there is no total variation loss regularization implemented in the `fast_neural_style` example. But the paper describes it, and the authors' Torch version uses it. I'm wondering whether total variation loss is necessary in style transfer. | https://github.com/pytorch/examples/issues/278 | open | [
"question",
"good first issue"
] | 2018-01-03T08:26:09Z | 2022-03-10T05:55:02Z | 0 | ZhuFengdaaa |
pytorch/examples | 277 | ValueError: optimizer got an empty parameter list | Hi PyTorch Friends,
I'm trying to build customized layers by following the [Extending PyTorch tutorial](http://pytorch.org/docs/master/notes/extending.html) and to use them to replace the nn.Conv2d and nn.Linear layers in the official [mnist main.py](https://github.com/pytorch/examples/blob/master/mnist/main.py) example, lines 55-59.
However, after replacing them with my own customized layers, the testing step (forward) works without error, while training the new model gives "ValueError: optimizer got an empty parameter list". Also, new_model.parameters() does not yield any items.
The following is my modified Net (nn.Module):
```
class Decomp_Net(nn.Module):
    def __init__(self, path_pretrained_model="mymodel.pth"):
        super(Decomp_Net, self).__init__()
        # Load the pretrained model and the saved weights
        self.path_pretrained_model = path_pretrained_model
        try:
            params = torch.load(self.path_pretrained_model)
            print("Loaded pretrained model.")
        except:
            raise("No pretrained model saved.")
        # Conv Layer 1
        self.W_conv1 = params.items()[0]
        self.B_conv1 = params.items()[1][1]
        self.W_conv1 = self.W_conv1[1].view(10, 25)
        self.W_conv1 = self.W_conv1.t()
        self.D_conv1, self.X_a_conv1 = create_dic_fuc.create_dic(A=self.W_conv1, M=25, N=10, Lmax=9, Epsilon=0.7, mode=1)
        # Conv Layer 2
        self.W_conv2 = params.items()[2]
        self.B_conv2 = params.items()[3][1]
        self.W_conv2 = self.W_conv2[1].view(200, 25)
        self.W_conv2 = self.W_conv2.t()
        self.D_conv2, self.X_a_conv2 = create_dic_fuc.create_dic(A=self.W_conv2, M=25, N=200, Lmax=199, Epsilon=0.7, mode=1)
        # Layer FC1
        self.W_fc1 = params.items()[4]
        self.B_fc1 = params.items()[5][1]
        self.D_fc1, self.X_a_fc1 = create_dic_fuc.create_dic(A=self.W_fc1[1], M=50, N=320, Lmax=319, Epsilon=0.8, mode=1)
        # Layer FC2
        self.W_fc2 = params.items()[6]  # Fetching the last fully connected layer of the original model
        self.B_fc2 = params.items()[7][1]
        self.D_fc2, self.X_a_fc2 = create_dic_fuc.create_dic(A=self.W_fc2[1], M=10, N=50, Lmax=49, Epsilon=0.5, mode=1)
        self.conv1 = ConvDecomp2d(coefs=self.X_a_conv1, dictionary=self.D_conv1, bias_val=self.B_conv1, input_channels=1, output_channels=10, kernel_size=5, bias=True)
        self.conv2 = ConvDecomp2d(coefs=self.X_a_conv2, dictionary=self.D_conv2, bias_val=self.B_conv2, input_channels=10, output_channels=20, kernel_size=5, bias=True)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = FCDecomp(coefs=self.X_a_fc1, dictionary=self.D_fc1, bias_val=self.B_fc1, input_features=320, output_features=50)
        self.fc2 = FCDecomp(coefs=self.X_a_fc2, dictionary=self.D_fc2, bias_val=self.B_fc2, input_features=50, output_features=10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x)
```
I defined the customized function as follows:
```
class LinearDecomp(Function):
    # Note that both forward and backward are @staticmethods
    @staticmethod
    def forward(ctx, input, coefs, dictionary, bias=None):
        weight = torch.mm(dictionary, coefs).cuda()  # reconstruct the weight
        ctx.save_for_backward(input, weight, dictionary, coefs, bias)
        output = input.mm(weight.t())
        if bias is not None:
            output += bias.unsqueeze(0).expand_as(output)
        return output

    # This function has only a single output, so it gets only one gradient
    @staticmethod
    def backward(ctx, grad_output):
        input, weight, coefs, dictionary, bias = ctx.saved_variables
        grad_input = grad_coefs = grad_bias = None
        grad_weight = grad_output.t().mm(input)  # not returned
        if ctx.needs_input_grad[0]:
            grad_input = grad_output.mm(weight)
        # if ctx.needs_input_grad[1]:
        grad_weight = grad_output.t().mm(input)  # do not output grad_weight
        if ctx.needs_input_grad[2]:
            grad_coefs = dictionary.t().mm(grad_weight)
        if ctx.needs_input_grad[3]:
            grad_dictionary = grad_weight.t().mm(grad_coefs.t())
        if bias is not None and ctx.needs_input_grad[4]:
            grad_bias = grad_output.sum(0).squeeze(0)
        return grad_input, grad_coefs, grad_d
```
| https://github.com/pytorch/examples/issues/277 | closed | [] | 2018-01-03T04:35:54Z | 2018-03-05T10:06:12Z | 1 | OpenBanboo |
pytorch/examples | 271 | Transfer Learning on DC-GAN | Are the generator and discriminator models trained on the LSUN or ImageNet datasets made public? If so, where can I download them? | https://github.com/pytorch/examples/issues/271 | closed | [
"question"
] | 2017-12-19T06:26:09Z | 2022-03-10T02:41:26Z | 1 | brijml |
pytorch/tutorials | 189 | Tutorial about torch.distributions ? | https://github.com/pytorch/tutorials/issues/189 | closed | [] | 2017-12-18T15:57:51Z | 2021-06-16T21:41:33Z | 3 | zuoxingdong | |
huggingface/neuralcoref | 10 | What is the training data for this project? | Is it the same as in the Clark and Manning paper? | https://github.com/huggingface/neuralcoref/issues/10 | closed | [] | 2017-12-04T22:16:52Z | 2017-12-19T01:40:18Z | null | xinyadu |
pytorch/tutorials | 176 | [Request] Tutorial on testing and improving data loading | Hi, I think pytorch is a great framework and I'm using it consistently in my work. As someone self-taught in machine learning, I sometimes have difficulty understanding how to solve bottlenecks in training, for example slow I/O. I get the idea, but I lack a general view of the topic.
I think it would be nice to have something similar to [this](https://github.com/kzuiderveld/deeplearning1/blob/master/Improving%20training%20speeds%20using%20Keras%202.ipynb), expanded by explaining some common problems, how to catch them, and some ways to solve them. | https://github.com/pytorch/tutorials/issues/176 | closed | [] | 2017-11-14T12:39:26Z | 2018-01-22T05:34:20Z | 1 | iacolippo |
pytorch/examples | 253 | Error for imagenet/main.py training with DistributedDataParallel() | I got a DistributedDataParallel() error.
I just fixed the init_process_group() call to pass the rank, like below:
```
dist.init_process_group(backend=args.dist_backend, init_method=args.dist_url, rank=args.rank,
                        world_size=args.world_size)
```
```
$ CUDA_VISIBLE_DEVICES=0 python main.py /dataset/imagenet_classify/ --world-size 2 --dist-backend gloo --dist-url tcp://127.0.0.1:23456 --rank 0
$ CUDA_VISIBLE_DEVICES=1 python main.py /dataset/imagenet_classify/ --world-size 2 --dist-backend gloo --dist-url tcp://127.0.0.1:23456 --rank 1
```
Error message:
```
=> creating model 'resnet18'
Traceback (most recent call last):
  File "distributed_imagenet_main.py", line 319, in <module>
    main()
  File "distributed_imagenet_main.py", line 92, in main
    model = torch.nn.parallel.DistributedDataParallel(model)
  File "/home/andrew/ml/local/lib/python2.7/site-packages/torch/nn/parallel/distributed.py", line 124, in __init__
    for param_tuple in zip(*map(lambda m: m.parameters(), self._module_copies)):
  File "/home/andrew/ml/local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 262, in __getattr__
    type(self).__name__, name))
AttributeError: 'DistributedDataParallel' object has no attribute '_module_copies'
terminate called after throwing an instance of 'gloo::EnforceNotMet'
  what(): [enforce fail at /pytorch/torch/lib/gloo/gloo/cuda.cu:249] error == cudaSuccess. 29 vs 0. Error at: /pytorch/torch/lib/gloo/gloo/cuda.cu:249: driver shutting down
Aborted (core dumped)
```
What is the problem?
Is there any guide document for training ImageNet with distributed nodes? | https://github.com/pytorch/examples/issues/253 | closed | [] | 2017-11-10T06:50:22Z | 2018-12-11T07:49:24Z | 2 | andrew-yang0722 |
pytorch/examples | 252 | mnist dataset (jpg format) loads slowly | I put the different labels of the MNIST dataset in different folders, as shown in the attached figure.



I found dataset loading is very slow compared to the official example; my script is also attached!
[mnist-example.txt](https://github.com/pytorch/examples/files/1459883/mnist-example.txt)
My data-load times and the official example's load times are logged below.
(screenshots of the timing logs were attached in the original issue)
| https://github.com/pytorch/examples/issues/252 | closed | [
"question"
] | 2017-11-10T02:35:47Z | 2022-03-10T02:20:15Z | 3 | Darknesszlx |
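One common mitigation for slow per-file JPEG loading (a hedged sketch, not from this thread) is to decode each image once and serve later epochs from a cache; `expensive_decode` below is a stand-in for the PIL-open-plus-transform work:

```python
from functools import lru_cache

calls = {"n": 0}

def expensive_decode(path):
    calls["n"] += 1            # pretend: PIL.Image.open + transforms
    return path.upper()        # toy stand-in for the decoded tensor

@lru_cache(maxsize=None)
def load_decoded(path):
    return expensive_decode(path)

for _ in range(3):             # three "epochs" over the same file
    load_decoded("img_0.jpg")
print(calls["n"])  # → 1: decoded once, later epochs hit the cache
```

Real datasets may not fit in RAM, in which case pre-resizing images on disk or raising the loader's worker count are the usual alternatives.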
pytorch/examples | 248 | UserWarning: RNN module weights are not part... | Hello, on the word language model example I get this user warning.
I do not know what it means, so I am posting it here just to let you know.
`python3 main.py --cuda --epochs 6`
```
UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greately increasing memory usage. To compact weights again call flatten_parameters().
output, hidden = self.rnn(emb, hidden)
```
| https://github.com/pytorch/examples/issues/248 | closed | [] | 2017-10-29T09:32:06Z | 2018-12-03T13:59:27Z | 7 | Robomate |
pytorch/examples | 241 | Any example for Domain Adaptation? | Domain adaptation is an interesting area at present. If any example of domain adaptation in pytorch were available, it would be really helpful. For example, the Deep CORAL paper was implemented using Caffe. The code for Deep CORAL is:
https://github.com/VisionLearningGroup/CORAL
If this code were available in pytorch, it would be really great. | https://github.com/pytorch/examples/issues/241 | closed | [] | 2017-10-24T23:09:31Z | 2022-03-10T02:26:00Z | 1 | redhat12345 |
pytorch/examples | 240 | error in vae? | In vae/main.py, line 61, shouldn't `std = logvar.mul(0.5).exp_()` be `std = logvar.exp_().pow(0.5)`?
Sorry, I just realized...
| https://github.com/pytorch/examples/issues/240 | closed | [] | 2017-10-24T14:14:25Z | 2017-10-24T16:01:59Z | 0 | fedecarne |
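For the record, the two expressions agree algebraically: both compute std = exp(logvar / 2), so this is not a bug. The practical difference is the in-place op: `logvar.exp_()` would overwrite `logvar` itself, which matters if it is reused afterwards (e.g. in a KL term), while `logvar.mul(0.5)` allocates a new tensor first. A numeric check of the algebra:

```python
import math
import random

random.seed(0)
for _ in range(1000):
    logvar = random.uniform(-10.0, 10.0)
    a = math.exp(0.5 * logvar)   # logvar.mul(0.5).exp_(): mul allocates a new tensor
    b = math.exp(logvar) ** 0.5  # logvar.exp_().pow(0.5): exp_ would mutate logvar!
    assert math.isclose(a, b, rel_tol=1e-12)
print("std = exp(logvar / 2): both forms agree")
```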
pytorch/tutorials | 156 | Explain optimizer.zero_grad() | I think the call to [optimizer.zero_grad()](https://github.com/pytorch/tutorials/blob/master/beginner_source/examples_nn/two_layer_net_optim.py#L52) should be explained in the beginner tutorials. In particular:
* What is the point of this call?
* Why is not it made automatically?
Thanks! | https://github.com/pytorch/tutorials/issues/156 | closed | [] | 2017-10-11T12:32:06Z | 2018-01-22T08:22:20Z | 0 | Vayel |
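A toy sketch of the answer (plain Python, not real autograd): `.backward()` accumulates into `.grad` rather than overwriting it. That accumulation is deliberate, since some workflows (RNNs, gradient accumulation over micro-batches) sum several backward passes, so the framework leaves the zeroing decision to the user via `optimizer.zero_grad()`:

```python
class Param:
    def __init__(self):
        self.grad = 0.0      # mirrors p.grad on a real parameter

def backward(p, g):
    p.grad += g              # autograd ACCUMULATES gradients into .grad

def zero_grad(p):
    p.grad = 0.0             # what optimizer.zero_grad() does per parameter

p = Param()
backward(p, 1.5)
backward(p, 1.5)             # forgot to zero: the step now sees 3.0, not 1.5
print(p.grad)  # → 3.0

zero_grad(p)
backward(p, 1.5)             # zeroed first: gradient of this pass only
print(p.grad)  # → 1.5
```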
pytorch/examples | 231 | As for the pretrained model in torchvision, what's the image channel RGB or BGR? | https://github.com/pytorch/examples/issues/231 | closed | [] | 2017-10-10T13:48:00Z | 2017-10-12T00:53:17Z | 2 | AlexHex7 | |
pytorch/tutorials | 147 | No module named 'torch.onnx' when following super_resolution_with_caffe2.html | I am following tutorial http://pytorch.org/tutorials/advanced/super_resolution_with_caffe2.html (Transfering a model from PyTorch to Caffe2 and Mobile using ONNX). At the beginning I get:
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-2-cabf174890ab> in <module>()
5 from torch.autograd import Variable
6 import torch.utils.model_zoo as model_zoo
----> 7 import torch.onnx
ModuleNotFoundError: No module named 'torch.onnx'
My environment is: Ubuntu 16.04, anaconda 4.3.25. my environment has Python 3.6. PyTorch 0.20. onnx 0.1, torchvision 0.1.9.
Please let me know what is missing, Thanks,
| https://github.com/pytorch/tutorials/issues/147 | closed | [] | 2017-09-28T20:17:10Z | 2017-11-08T12:58:48Z | 4 | liqunfu |
pytorch/text | 125 | what is the purpose of this project? | pytorch has offered utils.data.dataset, and what is the purpose of torchtext?
what features do torchtext support? | https://github.com/pytorch/text/issues/125 | closed | [] | 2017-09-19T05:44:43Z | 2017-12-22T07:00:38Z | null | rabintang |
pytorch/pytorch | 2,557 | What is Torch7's nn.Add layer in PyTorch? | I found the torch.legacy.nn.Add layer, but it doesn't support autograd. Any other solutions? | https://github.com/pytorch/pytorch/issues/2557 | closed | [] | 2017-08-29T02:32:49Z | 2017-08-29T02:40:14Z | null | yytdfc |
pytorch/examples | 207 | how to finetune my own trained model on new datasets? | I have trained my own model. Now I want to use this trained model to initialize my new networks, or fine-tune it on new datasets. Does anyone know how to do it? | https://github.com/pytorch/examples/issues/207 | closed | [] | 2017-08-24T09:01:19Z | 2017-08-24T09:53:38Z | 0 | visonpon |
pytorch/tutorials | 123 | Neural style transfer question | Hi, not sure if this is the right place to ask questions, but I'm working through the neural style transfer tutorial and am confused about something.
What is the purpose of the `backward` method in `ContentLoss` and `StyleLoss`?
If we remove the `backward` method, won't this work as well for the `closure` function in `run_style_transfer`?
```python
def closure():
    # correct the values of updated input image
    input_param.data.clamp_(0, 1)

    optimizer.zero_grad()
    model(input_param)
    style_score = 0
    content_score = 0

    for sl in style_losses:
        style_score += sl.loss
    for cl in content_losses:
        content_score += cl.loss

    run[0] += 1
    if run[0] % 50 == 0:
        print("run {}:".format(run))
        print('Style Loss : {:4f} Content Loss: {:4f}'.format(
            style_score.data[0], content_score.data[0]))
        print()

    total_score = style_score + content_score
    total_score.backward()
    return total_score
```
On a related note, won't multiple `backward` calls in the original code accumulate the gradients for the image? Why is it okay to do this? Am I wrong in assuming that you should only call `backward` once? I'm new to Pytorch so I apologize if I'm missing anything fundamental. Thanks!
EDIT: Tagging the author @alexis-jacq if you don't mind :) | https://github.com/pytorch/tutorials/issues/123 | closed | [] | 2017-08-10T17:48:30Z | 2017-08-12T14:07:01Z | 2 | reiinakano |
pytorch/pytorch | 2,247 | what exactly is batch_size in pytorch? | Sorry, I'm new to this.
I am not sure if I understand right. In pytorch it says: batch_size (int, optional) – how many samples per batch to load (default: 1).
I know that batch size = the number of training examples in one forward/backward pass.
What does it mean when it says "how many **samples** per **batch** to load"? Can you define sample and batch here for me, please?
Also, what would be the maximum number for batch_size?
Thanks | https://github.com/pytorch/pytorch/issues/2247 | closed | [] | 2017-07-30T04:38:06Z | 2017-07-31T07:38:59Z | null | isalirezag |
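A hedged plain-Python illustration of the terms: a sample is one training example, and the loader groups `batch_size` consecutive samples into each batch it yields. The largest meaningful `batch_size` is the dataset size itself (full-batch training); in practice, memory is the binding limit:

```python
samples = list(range(10))   # each element is one "sample" (training example)
batch_size = 4              # samples per batch: examples per forward/backward pass
batches = [samples[i:i + batch_size] for i in range(0, len(samples), batch_size)]
print(batches)  # → [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

A DataLoader behaves the same way on the last batch: it comes out smaller unless `drop_last=True`.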
pytorch/pytorch | 2,227 | where is torch.nn.NLLLoss? | I want to find out how NLLLoss calculates the loss, but I can't find its code.

```python
# loss
def nll_loss(input, target, weight=None, size_average=True, ignore_index=-100):
    r"""The negative log likelihood loss.

    See :class:`~torch.nn.NLLLoss` for details.
    """
```

Where is `~torch.nn.NLLLoss`? | https://github.com/pytorch/pytorch/issues/2227 | closed | [] | 2017-07-28T08:35:34Z | 2022-07-26T18:28:32Z | null | susht3 |
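A hedged from-scratch sketch of what the wrapped C backend computes: `NLLLoss` on inputs that are already log-probabilities (e.g. from `LogSoftmax`) is just the negative picked log-probability, averaged when `size_average=True`. Per-class weights and `ignore_index` are omitted here:

```python
import math

def nll_loss(log_probs, targets):
    """Mean negative log likelihood over already-log-softmaxed rows."""
    losses = [-row[t] for row, t in zip(log_probs, targets)]
    return sum(losses) / len(losses)

def log_softmax(row):
    m = max(row)
    z = m + math.log(sum(math.exp(v - m) for v in row))
    return [v - z for v in row]

logits = [[1.0, 2.0, 0.5], [0.2, 0.1, 3.0]]   # two samples, three classes
lp = [log_softmax(r) for r in logits]
loss = nll_loss(lp, [1, 2])                    # true classes: 1 and 2
print(round(loss, 3))  # → 0.287
```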
pytorch/examples | 187 | fast-neural-style uses mscoco but normalizes for imagenet mean | Documentation for `fast-neural-style` uses mscoco training dataset, but subtracts imagenet mean from image input data.
The effects are probably very minor, but anybody have the mean stats for mscoco? | https://github.com/pytorch/examples/issues/187 | closed | [] | 2017-07-21T09:18:40Z | 2017-07-24T01:20:02Z | 1 | twairball |
pytorch/tutorials | 116 | How to save the model in the Classifying Names tutorial? | I ran the tutorial 100% successfully, then made some changes to the problem: I fixed the sequence length to 10 with just 3 features. It is almost the same as the tutorial. I have successfully saved the model, but I have a problem when loading it.
```
import torch
import torch.nn as nn
from torch.autograd import Variable

class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(RNN, self).__init__()

        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size

        self.i2h = nn.Linear(input_size + hidden_size, hidden_size)
        self.i2o = nn.Linear(input_size + hidden_size, output_size)
        self.softmax = nn.LogSoftmax()

    def forward(self, input, hidden):
        combined = torch.cat((input, hidden), 1)
        hidden = self.i2h(combined)
        output = self.i2o(combined)
        output = self.softmax(output)
        return output, hidden

    def init_hidden(self):
        return Variable(torch.zeros(1, self.hidden_size))
```
I saved the model using this code, which I put at the end of training.
`torch.save(rnn.state_dict(), './halo.pkl')`
The network is still same. Here is the code to load the model.
```
def restore_net(filename):
    n_hidden = 128
    n_letters = 3
    n_categories = 2
    rnn = RNN(n_letters, n_hidden, n_categories)
    rnn.load_state_dict(filename)
    return rnn
```
However I got this error.

Anyone can have a suggestion how should I save it?
-Thank you- | https://github.com/pytorch/tutorials/issues/116 | closed | [] | 2017-07-12T11:02:16Z | 2017-07-12T11:07:13Z | 1 | herleeyandi |
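Judging from the traceback, the likely culprit is `rnn.load_state_dict(filename)`: `load_state_dict` expects the deserialized dict, so the usual pattern is `rnn.load_state_dict(torch.load(filename))`. A dependency-free analogue of the round trip (`TinyModel` and `pickle` stand in for a module and `torch.save`/`torch.load`):

```python
import io
import pickle

class TinyModel:
    def __init__(self):
        self.state = {"w": 0.0}
    def state_dict(self):
        return dict(self.state)
    def load_state_dict(self, d):
        if not isinstance(d, dict):  # mirrors the confusing error on a raw path
            raise TypeError("expected a state dict, got %s" % type(d).__name__)
        self.state.update(d)

m = TinyModel()
m.state["w"] = 3.0
buf = io.BytesIO()
pickle.dump(m.state_dict(), buf)      # analogue of torch.save(rnn.state_dict(), path)

m2 = TinyModel()
buf.seek(0)
m2.load_state_dict(pickle.load(buf))  # analogue of rnn.load_state_dict(torch.load(path))
print(m2.state["w"])  # → 3.0
```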
pytorch/examples | 178 | ImageNet Error | Hi,
I am trying to train the models on ImageNet following [this](https://github.com/pytorch/examples/tree/master/imagenet#training). However, I have had no luck.
Does anyone know how to fix the following issue?
```shell
kwang@cdc-177:~/PyTorch/examples/imagenet$ CUDA_VISIBLE_DEVICES=1 python main.py -a resnet18 /imagenet_dir
=> creating model 'resnet18'
Traceback (most recent call last):
File "main.py", line 289, in <module>
main()
File "main.py", line 131, in main
train(train_loader, model, criterion, optimizer, epoch)
File "main.py", line 159, in train
for i, (input, target) in enumerate(train_loader):
File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 201, in __next__
return self._process_next_batch(batch)
File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 221, in _process_next_batch
raise batch.exc_type(batch.exc_msg)
AttributeError: Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 40, in _worker_loop
samples = collate_fn([dataset[i] for i in batch_indices])
File "build/bdist.linux-x86_64/egg/torchvision/datasets/folder.py", line 116, in __getitem__
img = self.loader(path)
File "build/bdist.linux-x86_64/egg/torchvision/datasets/folder.py", line 63, in default_loader
return pil_loader(path)
File "build/bdist.linux-x86_64/egg/torchvision/datasets/folder.py", line 45, in pil_loader
with Image.open(f) as img:
File "/usr/lib/python2.7/dist-packages/PIL/Image.py", line 528, in __getattr__
raise AttributeError(name)
AttributeError: __exit__
```
`PyTorch` is okay and I can run some other experiments with it.
Thanks! | https://github.com/pytorch/examples/issues/178 | closed | [] | 2017-07-07T07:03:45Z | 2017-07-08T02:50:51Z | 1 | wk910930 |
pytorch/examples | 173 | imagenet example did not transfer input to gpu? | In the imagenet training code, `input` is not explicitly converted to cuda in these [lines](https://github.com/pytorch/examples/blob/master/imagenet/main.py#L163-L165).
I've noticed that the training loader has `pin_memory` flag as True. In fact, even if a tensor has called `pin_memory()`, it is still a `FloatTensor` instead of `cuda.FloatTensor`. If I understand the [documentation](http://pytorch.org/docs/master/notes/cuda.html) correctly, the benefit of using `pin_memory()` is that you can use `async=True` in the `cuda()` method, which would be faster due to asynchronous.
If I did not miss anything, this is a bug in the code, right? | https://github.com/pytorch/examples/issues/173 | closed | [] | 2017-06-30T03:12:38Z | 2018-03-16T08:35:16Z | 2 | iammarvelous |
pytorch/tutorials | 101 | Regarding exercises in Character-Level RNN | I was wondering where I can find the dataset for the exercises given in Classifying Names with Character-Level RNN.
For example:
Any word -> language
First name -> gender
Character name -> writer
Page title -> blog or subreddit
To complete this task, do I have to create my own dataset or is there any repo where I can download those datasets? | https://github.com/pytorch/tutorials/issues/101 | closed | [] | 2017-06-26T21:36:40Z | 2018-01-22T04:55:21Z | 1 | oya163 |
pytorch/examples | 170 | Potential speedup for DCGAN | In the dcgan example, while training the discriminator, why is backward called twice? First it's called on the real images, then on the fake images.
Instead, shouldn't doing something like:

```python
totalError = real_loss + fake_loss
totalError.backward()
```

save one whole backprop?
Does doing it the way i suggested change anything qualitatively ? | https://github.com/pytorch/examples/issues/170 | closed | [] | 2017-06-16T05:47:31Z | 2017-10-04T15:02:47Z | 8 | harveyslash |
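Qualitatively nothing changes: differentiation is linear, so one backward on `real_loss + fake_loss` accumulates exactly the same gradients as two separate backward calls. (A hedged reading of why the example splits them: it logs errD_real and errD_fake separately and can free each graph as it goes.) A toy check with hand-written gradients:

```python
def real_loss(w): return (w - 1.0) ** 2   # toy stand-in for errD_real
def fake_loss(w): return (w + 2.0) ** 2   # toy stand-in for errD_fake
def grad_real(w): return 2 * (w - 1.0)    # d(real_loss)/dw by hand
def grad_fake(w): return 2 * (w + 2.0)    # d(fake_loss)/dw by hand

w = 0.5
# Two backward calls accumulate into the same .grad buffer...
g_two_passes = grad_real(w) + grad_fake(w)

# ...which matches the derivative of the summed loss (finite difference):
h = 1e-6
total = lambda v: real_loss(v) + fake_loss(v)
g_summed = (total(w + h) - total(w - h)) / (2 * h)

print(abs(g_two_passes - g_summed) < 1e-6)  # → True
```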
pytorch/tutorials | 98 | update beginner tutorial to most recent pytorch version? | This [beginner tutorial](http://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html#sphx-glr-beginner-blitz-autograd-tutorial-py) uses `y.grad_fn` where, from googling around it seems like it should now use `y.creator`. The image is updated, but the text/code isn't.
Regardless, the tutorial should probably say what version of PyTorch it's for and how to check, right?
I'm happy to make the modifications and do a pull request, but wasn't sure what kind of solution was desired. | https://github.com/pytorch/tutorials/issues/98 | closed | [] | 2017-06-15T02:50:43Z | 2017-06-15T19:48:14Z | 3 | erindb |
pytorch/examples | 168 | Regarding dimensions of mean and variance | Its a multivariate normal distribution in latent space and input space so mean(mu) and variance should be in multidimensional form(matrix) per distribution but your code is generating single value of mean and variance per distribution. So what is the math or implementation process behind it? | https://github.com/pytorch/examples/issues/168 | closed | [] | 2017-06-09T10:31:23Z | 2017-10-01T22:51:56Z | 1 | anindyasarkarIITH |
pytorch/examples | 166 | why is the input data not copied to CUDA memory during training (only the target)? | In the ImageNet example, why is only the target copied to CUDA memory via target.cuda(async=True), while input.cuda() is absent in the training phase? | https://github.com/pytorch/examples/issues/166 | closed | [] | 2017-06-07T08:48:38Z | 2017-06-07T09:52:22Z | 1 | chahrazaddo |
pytorch/tutorials | 94 | blog tutorial and slides | Couldn't find you on twitter so raising this here.
I wrote a beginner's first steps blog and a presentation for the pydata london monthly meetup:
- [https://goo.gl/EmSfNk](https://goo.gl/EmSfNk)
- [http://makeyourownneuralnetwork.blogspot.co.uk/2017/05/learning-mnist-with-gpu-acceleration.html](http://makeyourownneuralnetwork.blogspot.co.uk/2017/05/learning-mnist-with-gpu-acceleration.html)
Perhaps these could be the basis for a very beginner-friendly gentle introduction to PyTorch and it's concepts?
Myself I couldn't find beginner-friendly guides with a logical progression, The existing tutorials are not really for complete (but intelligent or interested) beginners.
How do I help? | https://github.com/pytorch/tutorials/issues/94 | closed | [] | 2017-06-01T12:54:46Z | 2017-07-05T17:28:08Z | 1 | makeyourownneuralnetwork |
pytorch/examples | 163 | super_resolution model building question | ```python
class Net(nn.Module):
    def __init__(self, upscale_factor):
        super(Net, self).__init__()

        self.relu = nn.ReLU()
        self.conv1 = nn.Conv2d(1, 64, 5, 1, 2)
        self.conv2 = nn.Conv2d(64, 64, 3, 1, 1)
        self.conv3 = nn.Conv2d(64, 32, 3, 1, 1)
        self.conv4 = nn.Conv2d(32, upscale_factor ** 2, 3, 1, 1)
        self.pixel_shuffle = nn.PixelShuffle(upscale_factor)
```

How did you get the following information from the paper?
- In self.conv1, there is padding = 2.
- The layer number l in the paper is 3, so why do you add self.conv2?
- self.conv4's output_channel is upscale_factor ** 2.
| https://github.com/pytorch/examples/issues/163 | closed | [
"question"
] | 2017-05-30T12:48:11Z | 2022-03-10T01:56:57Z | 1 | pageedward |
pytorch/examples | 162 | Request for examples on Recurrent Highway Networks (RHN) | Is it possible to use the existing torch.nn modules and implement RHNs? Would it make sense to have RHN as a separate module in torch.nn?
For reference, someone did raise this issue in pytorch/pytorch https://github.com/pytorch/pytorch/issues/516 | https://github.com/pytorch/examples/issues/162 | closed | [] | 2017-05-30T05:49:20Z | 2022-03-10T01:56:13Z | 2 | sanyam5 |
pytorch/tutorials | 89 | is the grad value wrong in beginner_source/blitz/autograd_tutorial.py line 92? | In line 92: `z_i = 3(x_i+2)^2` and `z_i\bigr\rvert_{x_i=1} = 27`.
I think the gradient should be `dz_i/dx_i\bigr\rvert_{x_i=1} = 6(x_i+2)\rvert_{x_i=1} = 6*(1+2) = 18`; please correct me if I am wrong, otherwise I will submit a pull request.
| https://github.com/pytorch/tutorials/issues/89 | closed | [] | 2017-05-25T23:59:47Z | 2017-05-29T17:02:53Z | 2 | ningzhou |
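For anyone landing here, both numbers are right, just for different quantities: 27 is the value of `z_i = 3(x_i+2)^2` at `x_i = 1`, while `6(x_i+2) = 18` is its derivative there. (If the tutorial then takes `out = z.mean()` over 4 elements, the reported gradient 18/4 = 4.5 follows.) A finite-difference sanity check:

```python
def z(x):                  # z_i = 3 * (x_i + 2) ** 2, as in the tutorial
    return 3.0 * (x + 2.0) ** 2

x, h = 1.0, 1e-6
fd = (z(x + h) - z(x - h)) / (2.0 * h)   # numerical dz/dx at x = 1
print(z(x))          # → 27.0 (the VALUE of z_i at x_i = 1)
print(round(fd, 6))  # → 18.0 (the DERIVATIVE dz_i/dx_i at x_i = 1)
```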
pytorch/examples | 158 | Shapes in SNLI | Looking over the SNLI example, something seems off to me. I hope I'm just missing something. First, a batch is embedded and, from the docs, I understand that Embedding layers output the shape `(N, W, D)` where N is the batch size and W is the sequence length. This is passed to the Encoder where it extracts the batch_size with `batch_size = inputs.size()[1]`. Wouldn't that give you the W and not N? Also, the inputs are passed as-is to the LSTM, which expects the shape `(W, N, D)`, but no reshaping is ever done. It seems like the Encoder is assuming `(W, N, D)` data from the start but there is never any `view` done on the embed to change the order of the dimensions, right? | https://github.com/pytorch/examples/issues/158 | closed | [
"question",
"nlp"
] | 2017-05-07T02:32:11Z | 2022-03-10T03:19:09Z | 2 | neverfox |
pytorch/examples | 157 | two lines of code in mnist/main.py | There are two arguments called batch_size and test_batch_size:
```python
parser.add_argument('--batch-size', type=int, default=64, metavar='N',
                    help='input batch size for training (default: 64)')
parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
                    help='input batch size for testing (default: 1000)')
```

but batch_size is used here:

```python
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=False, transform=transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))
    ])),
    batch_size=args.batch_size, shuffle=True, **kwargs)
```

Also, what does this line (line 105) do:

`test_loss = test_loss`
and it seems that `epoch` is not used in test(). | https://github.com/pytorch/examples/issues/157 | closed | [] | 2017-05-04T07:40:05Z | 2020-10-10T02:22:56Z | 0 | iamabug |
pytorch/tutorials | 77 | Slowdown in DQN RL Tutorial | After about 5 episodes on latest master build of Pytorch, the time to execute each step t in the main loop slows way down. I tried a pip install of Pytorch as well to test if it was just my version and same thing. I am on OSX with no cuda. Is slowdown normal? I don't see anything in the optimization step that should really slow this down over time. Didn't know if this could be gym related as well.
If this isn't normal I will try to dig in some more and see what is causing this for me.
Thanks | https://github.com/pytorch/tutorials/issues/77 | closed | [] | 2017-04-26T21:58:41Z | 2018-01-22T04:54:10Z | 1 | lbollar |
pytorch/pytorch | 1,344 | What is the function for the element-wise product (Hadamard product) in pytorch? | https://github.com/pytorch/pytorch/issues/1344 | closed | [] | 2017-04-24T10:59:08Z | 2017-04-24T13:07:50Z | null | stevenhanjun | |
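For same-shaped tensors, the element-wise (Hadamard) product in PyTorch is `a * b` or, equivalently, `torch.mul(a, b)`; matrix multiplication is the separate `torch.mm`/`torch.matmul`. A plain-Python sketch of the operation itself:

```python
a = [[1, 2], [3, 4]]
b = [[10, 20], [30, 40]]
# Hadamard product: multiply matching entries, shape preserved.
had = [[x * y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]
print(had)  # → [[10, 40], [90, 160]]
```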
pytorch/examples | 147 | imagenet example training gets slower over time. | It seems that as I do training, the per batch time gets slower and slower.
For example, when I run `CUDA_VISIBLE_DEVICES=0 python main.py -a alexnet --lr 0.01 --workers 22 /ssd/cv_datasets/ILSVRC2015/Data/CLS-LOC`.
Initially I get an average per batch time of about 0.25s
After several batches, I get 0.5s.
I run `top` and find that most of the memory (128 GB) is occupied.
How to fix this? | https://github.com/pytorch/examples/issues/147 | closed | [] | 2017-04-20T19:27:35Z | 2019-05-03T09:09:49Z | 10 | zym1010 |
pytorch/examples | 144 | why treating Alexnet/VGG differently in ImageNet example? | in <https://github.com/pytorch/examples/blob/master/imagenet/main.py#L68-L72>, it seems that special care has to be taken when wrapping the module with `DataParallel`. Why is this the case? Also, I don't understand why for AlexNet and VGG, `features` is wrapped, yet `classifier` is not. | https://github.com/pytorch/examples/issues/144 | closed | [] | 2017-04-16T04:26:33Z | 2020-01-08T00:27:23Z | 6 | zym1010 |
pytorch/examples | 142 | action.reinforce(reward) | What does "action.reinforce(reward)" mean? Does it mean gradient descent?

| https://github.com/pytorch/examples/issues/142 | closed | [] | 2017-04-14T07:35:47Z | 2017-04-14T11:54:32Z | 1 | susht3 |
pytorch/examples | 137 | How To Correctly Kill MultiProcesses During Multi-GPU Training | During training with examples/imagenet/main.py, I used the following command:

```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 nohup python main.py [options] path/to/imagenetdir 1>a.log 2>a.err &
```

It then starts 5 processes in the system; 1 main process appears in nvidia-smi.
Most of the time (90% of the time), after I first kill the main process, GPU usage drops to 0%, so I can kill the other 4 to release GPU memory and start a new training task. Sometimes (10% of the time), after I killed these 5 processes, the main process remained as "python [defunct]", which cannot be killed even by sudo kill -s 9. The GPU usage AND the GPU memory are not released.
Multi-GPU training happens where I use the following line in my code:

```python
model = torch.nn.DataParallel(model).cuda()
```
Thanks. | https://github.com/pytorch/examples/issues/137 | closed | [] | 2017-04-10T07:36:38Z | 2022-03-09T21:27:41Z | 1 | catalystfrank |
pytorch/examples | 126 | ImageNet example is falling apart in multiple ways | I am experimenting with Soumith's ImageNet example, but it is crashing or deadlocking in three different ways. I have added a bunch of "print" statements to it to figure out where it is crashing, and here is the GIST of full script: (as you can see, there are almost no significant modifications to the original code.) All code is running on 2x NVidia Titan X 12 GB cards with 96 GB RAM.
https://gist.github.com/FuriouslyCurious/81742b8126f07f919522a588147e6086
## Issue 1: transforms.Scale(512) fails in THCTensorMathBlas.cu:241
How to reproduce:
1. Images are being fed with transforms.Scale(512) or transforms.Scale(1024)
2. Source images are 2048x2048.
3. Workers >= 1
4. Batchsize >= 2
5. Script will crash on its own in few minutes
Output
```
python train.py -a resnet18 -j 1 -b 2 /home/FC/data/P/
=> Parsing complete...
=> creating model 'resnet18'
=> Using CUDA DataParallel
=> Starting training images loading...
=> Starting validation images loading...
=> Loss criterion and optimizer setup
=> Starting training...
=> Training Epoch 0
Traceback (most recent call last):
File "train.py", line 299, in <module>
main()
File "train.py", line 140, in main
train(train_loader, model, criterion, optimizer, epoch)
File "train.py", line 177, in train
output = model(input_var)
File "/conda3/envs/idp/lib/python3.5/site-packages/torch/nn/modules/module.py", line 202, in __call__
result = self.forward(*input, **kwargs)
File "/conda3/envs/idp/lib/python3.5/site-packages/torch/nn/parallel/data_parallel.py", line 92, in forward
outputs = self.parallel_apply(replicas, scattered, gpu_dicts)
File "/conda3/envs/idp/lib/python3.5/site-packages/torch/nn/parallel/data_parallel.py", line 102, in parallel_apply
return parallel_apply(replicas, inputs, kwargs)
File "/conda3/envs/idp/lib/python3.5/site-packages/torch/nn/parallel/parallel_apply.py", line 50, in parallel_apply
raise output
File "/conda3/envs/idp/lib/python3.5/site-packages/torch/nn/parallel/parallel_apply.py", line 30, in _worker
output = module(*input, **kwargs)
File "/conda3/envs/idp/lib/python3.5/site-packages/torch/nn/modules/module.py", line 202, in __call__
result = self.forward(*input, **kwargs)
File "/conda3/envs/idp/lib/python3.5/site-packages/torchvision-0.1.6-py3.5.egg/torchvision/models/resnet.py", line 150, in forward
File "/conda3/envs/idp/lib/python3.5/site-packages/torch/nn/modules/module.py", line 202, in __call__
result = self.forward(*input, **kwargs)
File "/conda3/envs/idp/lib/python3.5/site-packages/torch/nn/modules/linear.py", line 54, in forward
return self._backend.Linear()(input, self.weight, self.bias)
File "/conda3/envs/idp/lib/python3.5/site-packages/torch/nn/_functions/linear.py", line 10, in forward
output.addmm_(0, 1, input, weight.t())
RuntimeError: size mismatch at /data/users/soumith/miniconda2/conda-bld/pytorch-cuda80-0.1.10_1488757768560/work/torch/lib/THC/generic/THCTensorMathBlas.cu:241
```
## Issue 2: Multiple worker threads deadlock in index_queue.get() and waiter.acquire()
How to reproduce:
1. Images are being fed with default crop: transforms.RandomSizedCrop(224)
2. Source images are 2048x2048.
3. Workers > 2
4. Batchsize > 40
5. When you see GPU clock speed fall to resting MHz on NVidia-smi, script has deadlocked in waiter.acquire() and index_queue.get(). Abort the script manually.
```
python train.py -a resnet18 /home/FC/data/P
=> Parsing complete...
=> creating model 'resnet18'
=> Using CUDA DataParallel
=> Starting training images loading...
=> Starting validation images loading...
=> Loss criterion and optimizer setup
=> Starting training...
=> Training Epoch 0
^CProcess Process-4:
Process Process-3:
Traceback (most recent call last):
Traceback (most recent call last):
File "train.py", line 299, in <module>
main()
File "train.py", line 140, in main
train(train_loader, model, criterion, optimizer, epoch)
File "train.py", line 168, in train
for i, (input, target) in enumerate(train_loader):
File "/conda3/envs/idp/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 168, in __next__
idx, batch = self.data_queue.get()
File "/conda3/envs/idp/lib/python3.5/queue.py", line 164, in get
self.not_empty.wait()
File "/conda3/envs/idp/lib/python3.5/threading.py", line 293, in wait
waiter.acquire()
Traceback (most recent call last):
File "/conda3/envs/idp/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap
self.run()
File "/conda3/envs/idp/lib/python3.5/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/conda3/envs/idp/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 26, in _worker_loop
r = index_queue.get()
File "/conda3/envs/idp/lib/python3.5/multiprocessing/queues.py", line 342, in get
with self._rlock:
```
| https://github.com/pytorch/examples/issues/126 | closed | [] | 2017-03-28T01:07:36Z | 2017-03-28T01:08:39Z | 1 | FuriouslyCurious |
pytorch/examples | 116 | why is detach necessary | Hi, I am wondering why is detach necessary in this line:
https://github.com/pytorch/examples/blob/a60bd4e261afc091004ea3cf582d0ad3b2e01259/dcgan/main.py#L230
I understand that we want to update the gradients of netD without changing the ones of netG. But if the optimizer is only using the parameters of netD, then only its weights will be updated. Am I missing something here?
Thanks in advance!
| https://github.com/pytorch/examples/issues/116 | closed | [] | 2017-03-20T22:12:36Z | 2022-04-16T07:20:21Z | 17 | rogertrullo |
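You are right that `optimizerD` holding only netD's parameters already keeps netG's weights fixed. What `.detach()` buys is that D's backward never traverses netG's part of the graph, so gradients for netG are not computed and then thrown away. A minimal toy graph (illustrative, not real autograd) showing the cut:

```python
class Node:
    """Minimal stand-in for an autograd graph node."""
    def __init__(self, parents=(), name=""):
        self.parents, self.name = parents, name
    def detach(self):
        # A detached node keeps its value but drops its history.
        return Node(parents=(), name=self.name + ".detached")
    def visited(self):
        # Names every node a backward pass starting here would touch.
        seen, stack = [], [self]
        while stack:
            n = stack.pop()
            seen.append(n.name)
            stack.extend(n.parents)
        return seen

g_params = Node(name="G.weights")
fake = Node(parents=(g_params,), name="fake_images")

d_out_full = Node(parents=(fake,), name="D(fake)")
d_out_detached = Node(parents=(fake.detach(),), name="D(fake.detach())")

print("G.weights" in d_out_full.visited())      # → True: backward walks into netG
print("G.weights" in d_out_detached.visited())  # → False: graph cut at detach
```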
pytorch/tutorials | 47 | Web page for Tutorials | Hi,
I've been working on beautifying/integrating all the tutorials on pytorch into one. see https://github.com/pytorch/pytorch/pull/778. These tutorials are based on [sphinx-gallery](http://sphinx-gallery.readthedocs.io) and tutorials are executed during build time.
I've created a [separate repo](https://github.com/chsasank/pytorch-tutorials) for the tutorials and used gh-pages to host them: http://chsasank.github.io/pytorch-tutorials. I also added my own [transfer learning tutorial](https://chsasank.github.io/pytorch-tutorials/tutorials/transfer_learning_tutorial.html)
After a discussion with @soumith, he suggested we should host these tutorials at tutorials.pytorch.org. He also requested a change:
- [x] Categorize tutorials by level instead of source
If indeed this is to be front face of tutorials, we'll need to figure out
- [x] how to modify the repo
- [x] how to build the sources and host
For hosting, we shouldn't probably use github pages as it will mess up the git history with all the html files.
Since these tutorials are executed at build, we might need a decently powered build environment. Most tutorials take may be 5 min on my macbook air. Except [seq2seq](https://chsasank.github.io/pytorch-tutorials/practical-pytorch/seq2seq-translation-tutorial.html) tutorial, which took 40 min on CPU/25 min on GPU. Note that a tutorial is re-excuted only if changes are made to the tutorial file.
Thanks,
Sasank. | https://github.com/pytorch/tutorials/issues/47 | closed | [] | 2017-03-14T12:37:35Z | 2017-04-14T18:46:27Z | 13 | chsasank |
pytorch/tutorials | 44 | Where is Variable? | In `Reinforcement (Q-)Learning with PyTorch2`, the section `Training hyperparameters and utilities` claim the cell providing `Variable` which is "a simple wrapper around torch.autograd". But I can't found it in the cell. Then I encounter `NameError: name 'Variable' is not defined`, anyway I import Variable from `torch.autograd` instead. So where is Variable? Or how can I implement it by scratch? | https://github.com/pytorch/tutorials/issues/44 | closed | [] | 2017-03-06T04:57:01Z | 2019-12-02T12:42:11Z | null | yiyuezhuo |
pytorch/tutorials | 41 | Numerically unstable initialized values for uninitialized tensors? | I was trying to follow the tutorial when I noticed that if I just create an "uninitialized matrix", its values are not numerically stable. I guess since we will have to initialize the matrix later, it doesn't really matter, but I'm just wondering if this is intentional.
I'm running pyTorch with anaconda python 3.6, CUDA v8 on Linux.
```python
from __future__ import print_function
import torch
```
```python
x = torch.Tensor(5, 3)
```
```python
x = torch.rand(5, 3)
```
```python
x
```
-1.6775e+31 4.5895e-41 4.0929e-37
0.0000e+00 0.0000e+00 0.0000e+00
0.0000e+00 0.0000e+00 0.0000e+00
0.0000e+00 0.0000e+00 0.0000e+00
0.0000e+00 0.0000e+00 0.0000e+00
[torch.FloatTensor of size 5x3]
```python
y = torch.Tensor(5, 3); y
```
-1.6775e+31 4.5895e-41 4.2770e-37
0.0000e+00 0.0000e+00 0.0000e+00
0.0000e+00 0.0000e+00 0.0000e+00
0.0000e+00 0.0000e+00 0.0000e+00
0.0000e+00 0.0000e+00 0.0000e+00
[torch.FloatTensor of size 5x3]
```python
x * y
```
inf 0 0
0 0 0
0 0 0
0 0 0
0 0 0
[torch.FloatTensor of size 5x3]
| https://github.com/pytorch/tutorials/issues/41 | closed | [] | 2017-02-27T03:17:09Z | 2017-02-27T03:38:54Z | 1 | r-luo |
pytorch/tutorials | 26 | Training on GPU in deep learning notebook - inputs/labels need cuda() | In working through the deep learning notebook, it's not obvious at first how to get the learning working once you put the net on the GPU.
After some trial and error, this worked:

```python
inputs, labels = Variable(inputs).cuda(), Variable(labels).cuda()
```
I could make a PR with this addition if desired | https://github.com/pytorch/tutorials/issues/26 | closed | [] | 2017-02-04T21:38:56Z | 2017-05-23T16:37:32Z | 3 | gojira |
pytorch/tutorials | 15 | There is any fine tune tutorials? | Fine tune is very easy in Torch and Caffe, but I can't find how do fine tune in pytorch. Is there any fine tune examples or tutorials? | https://github.com/pytorch/tutorials/issues/15 | closed | [] | 2017-01-22T09:06:53Z | 2017-10-31T07:24:56Z | 9 | Teaonly |
pytorch/tutorials | 14 | Potential improvement to 60 minute blitz for pasteability? | Hello! I'm very much a newbie to this:
https://github.com/pytorch/tutorials/blob/master/Deep%20Learning%20with%20PyTorch.ipynb
I followed this guide with Anaconda 3.5 and got to this point: `out = net(input)`
I got a NotImplementedError from the original nn module that the class was supposed to override.
Turns out I skipped the error messages I got in interactive python where the indentation was wrong (so forward function wasn't implemented in my `Net` class).
If we removed the spaces between the functions or used comments we could avoid the issue:
```
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)  # 1 input image channel, 6 output channels, 5x5 square convolution kernel
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16*5*5, 120)  # an affine operation: y = Wx + b
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)
    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))  # Max pooling over a (2, 2) window
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)  # If the size is a square you can only specify a single number
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features
net = Net()
net
``` | https://github.com/pytorch/tutorials/issues/14 | closed | [] | 2017-01-22T02:29:52Z | 2017-01-22T03:05:36Z | 1 | youanden |
pytorch/tutorials | 7 | Feature Request: tutorial on loading datasets | A tutorial outlining how to make use of the `torch.utils.data.Dataset` and `torch.utils.data.DataLoader` on your own data (not just the `torchvision.datasets`) would be good. The documentation page is quite obscure, and it is not entirely clear how these can be used with your own data.
Also outlining what would be good practices for when your data is:
- A numpy array
- A folder full of image files
Also, whether PyTorch has built-in functions for creating queues of data, for when the data is too big to all fit in memory in one go (e.g. in the case of a folder full of image files).
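As a sketch of what such a tutorial might cover, here is a minimal custom `Dataset` wrapping a numpy array (the class name and shapes are illustrative assumptions, not an official API):

```python
# Minimal sketch: wrap numpy arrays in a Dataset so DataLoader can batch them.
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class ArrayDataset(Dataset):
    """Pairs a numpy array of samples with a numpy array of labels."""
    def __init__(self, data, labels):
        self.data = torch.from_numpy(data).float()
        self.labels = torch.from_numpy(labels).long()

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx], self.labels[idx]

data = np.random.rand(100, 3)
labels = np.random.randint(0, 2, size=100)
loader = DataLoader(ArrayDataset(data, labels), batch_size=32, shuffle=True)
for batch_data, batch_labels in loader:
    print(batch_data.shape)  # torch.Size([32, 3]) for full batches
    break
```

For a folder of images, the same `Dataset` pattern applies with `__getitem__` loading one file at a time, and `DataLoader(num_workers=...)` handles the background queueing.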
| https://github.com/pytorch/tutorials/issues/7 | closed | [
"enhancement"
] | 2017-01-19T11:08:21Z | 2023-05-26T20:43:34Z | 8 | ronrest |
pytorch/tutorials | 5 | Initialize with t7 files? | If I trained a model with Torch and stored the weights in t7 format, is it possible to use those weights as initialization in PyTorch? Thank you. | https://github.com/pytorch/tutorials/issues/5 | closed | [] | 2017-01-18T18:50:08Z | 2017-01-18T19:20:07Z | 2 | Yuliang-Zou |
vllm-project/vllm | 31,787 | [Usage]: How to set different attention backend for prefill and decode phases? | ### Your current environment
```text
Collecting environment information...
==============================
System Info
==============================
OS : Alibaba Cloud Linux 3 (Soaring Falcon) (x86_64)
GCC version : (GCC) 10.2.1 20200825 (Alibaba 10.2.1-3.8 2.32)
Clang version : Could not collect
CMake version : version 3.31.2
Libc version : glibc-2.32
==============================
PyTorch Info
==============================
PyTorch version : 2.8.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform : Linux-5.10.134-16.3.al8.x86_64-x86_64-with-glibc2.32
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.8.61
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration :
GPU 0: NVIDIA H20
GPU 1: NVIDIA H20
GPU 2: NVIDIA H20
GPU 3: NVIDIA H20
GPU 4: NVIDIA H20
GPU 5: NVIDIA H20
GPU 6: NVIDIA H20
GPU 7: NVIDIA H20
Nvidia driver version : 535.183.06
cuDNN version : Probably one of the following:
/usr/local/cuda/targets/x86_64-linux/lib/libcudnn.so.9.7.1
/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_adv.so.9.7.1
/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_cnn.so.9.7.1
/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_engines_precompiled.so.9.7.1
/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_engines_runtime_compiled.so.9.7.1
/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_graph.so.9.7.1
/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_heuristic.so.9.7.1
/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_ops.so.9.7.1
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 143
Model name: Intel(R) Xeon(R) Platinum 8469C
Stepping: 8
CPU MHz: 3100.000
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
Virtualization: VT-x
L1d cache: 48K
L1i cache: 32K
L2 cache: 2048K
L3 cache: 99840K
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm uintr md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
==============================
Versions of relevant libraries
==============================
[pip3] flashinfer-python==0.4.1
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.8.4.1
[pip3] nvidia-cuda-cupti-cu12==12.8.90
[pip3] nvidia-cuda-nvrtc-cu12==12.8.93
[pip3] nvidia-cuda-runtime-cu12==12.8.90
[pip3] nvidia-cudnn-cu12==9.10.2.21
[pip3] nvidia-cudnn-frontend==1.15.0
[pip3] nvidia-cufft-cu12==11.3.3.83
[pip3] nvidia-cufile-cu12==1.13.1.3
[pip3] nvidia-curand-cu12==10.3.9.90
[pip3] nvidia-cusolver-cu12==11.7.3.90
[pip3] nvidia-cusparse-cu12==12.5.8.93
[pip3] nvidia-cusparselt-cu12==0.7.1
[pip3] nvidia-cutlass-dsl==4.2.1
[pip3] nvidia-ml-py==13.580.82
[pip3] nvidia-nccl-cu12==2.27.3
[pip3] nvidia-nvjitlink-cu12==12.8.93
[pip3] nvidia-nvtx-cu12==12.8.90
[pip3] pyzmq==27.1.0
[pip3] torch==2.8.0
[pip3] torch_memory_saver==0.0.9
[pip3] torchao==0.9.0
[pip3] torchaudio==2.8.0
[pip3] torchvision==0.23.0
[pip3] transformers==4.57.1
[pip3] triton==3.4.0
[conda] flashinfer-python 0. | https://github.com/vllm-project/vllm/issues/31787 | open | [
"usage"
] | 2026-01-06T07:33:18Z | 2026-01-06T07:33:18Z | 0 | stormchasingg |
sgl-project/sglang | 16,546 | [RFC] SGLang-Omni Design | API Design: @shuaills
Proposal Draft: @FrankLeeeee @sleepcoo
## Motivation
Recent models, whether open-source or proprietary, are becoming more multi-modal than ever before. That is, models can process data in more than two modalities. For example, Gemini can take text, image, video and audio as inputs and can output text, image and audio as well. In the open-source domain, Qwen-Omni can do something similar. In several openly held talks, researchers from tech giants have expressed their expectation of omni-style models in the coming year 2026. Therefore, the SGLang team thinks it will be important to introduce new modules to accommodate these coming models.
## Background
An omni model is typically characterized by multi-modal inputs and multi-modal outputs. An example of Qwen/Qwen2.5-Omni-7B is given below. The model can take text, audio and video as inputs and output text and audio.
<img width="1280" height="1195" alt="Image" src="https://github.com/user-attachments/assets/1ab6f1f5-4282-4944-a502-dd252459dc8b" />
## Design Considerations
### Stage Placement
Compared to LLMs, one significant characteristic of omni-style models is that they have many more component models. For example, Qwen2.5-Omni has 6 components (2 encoders, thinker, talker, codec decoder). Thus, one particular challenge for omni models is how to place these components. Some questions arise when placing these models:
1. In what cases do we put all components in one process?
2. In what cases do we disaggregate the components?
3. How do we support flexible placements?
4. How do we support replicated placement? For example, if we want to host N instances of the talker and M instances of the thinker for a single deployment, how should we do it?
### Data Flow Control
Omni models have more data-flow paths than LLMs or diffusion models. For example, Qwen2.5-Omni supports 8 ways of using the model. This drastically increases the complexity of the system design for this kind of model, especially for scheduling.
Inputs | Outputs
-- | --
Text | Text
Text + Vision | Text
Text + Audio | Text
Text + Vision + Audio | Text
Text | Text + Audio
Text + Vision | Text + Audio
Text + Audio | Text + Audio
Text + Vision + Audio | Text + Audio
## Design Details
<img width="4428" height="4134" alt="Image" src="https://github.com/user-attachments/assets/7aea26b8-4bcc-45ef-a70a-2f1ac3e042f4" />
### Intra and Inter Disaggregation
When it comes to more than one component model, an intuitive thought is to place each stage on a distinct process which exclusively owns one or more independent GPUs. However, disaggregation can also occur within a stage; for example, we might place different encoders on different processes in the encoding stage, and another example is PD disaggregation in LLMs. Thus, we can simplify the design with inter- and intra-disaggregation and re-use the existing implementations of PD disaggregation in SGLang.
- Inter-Disaggregation: We split the entire model into multiple stages and each stage runs its own scheduling and execution logic. The tensors are communicated between stages via Mooncake or shared memory.
- Intra-Disaggregation: The model(s) in the same stage are split into multiple processes, e.g. PD Disaggregation. The implementation is not controlled by SGLang-Omni directly; the stage is only required to place its outputs into the message queue for the next stage to retrieve. In this way, developers can customize their own intra-stage disaggregation and re-use some of the existing schemes.
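The inter-disaggregation flow above can be sketched as a toy queue-based pipeline (hypothetical stand-in code: threads and `queue.Queue` replace separate processes and Mooncake/shared-memory transport):

```python
# Toy sketch of inter-stage disaggregation: each stage runs its own loop
# and hands results to the next stage via a queue.
import queue
import threading

def encoder_stage(inbox, outbox):
    while True:
        req = inbox.get()
        if req is None:  # sentinel: shut the stage down
            break
        req["embedding"] = f"enc({req['text']})"  # stand-in for real encoding
        outbox.put(req)

def thinker_stage(inbox, outbox):
    while True:
        req = inbox.get()
        if req is None:
            break
        req["tokens"] = f"think({req['embedding']})"
        outbox.put(req)

q_enc, q_think, q_out = queue.Queue(), queue.Queue(), queue.Queue()
threads = [
    threading.Thread(target=encoder_stage, args=(q_enc, q_think)),
    threading.Thread(target=thinker_stage, args=(q_think, q_out)),
]
for t in threads:
    t.start()
q_enc.put({"text": "hello"})
result = q_out.get()
q_enc.put(None)
q_think.put(None)
for t in threads:
    t.join()
print(result["tokens"])  # think(enc(hello))
```

Each stage is free to batch and schedule however it likes internally; only the queue contract between stages is fixed.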
### Multi-Scheduling
Each stage can have its own scheduling strategies, e.g. Continuous batching, static grouping, etc.
### Multi-Path
As omni models have various data flows, we need to group them by type first:
Type | Description | Example | How to handle it?
-- | -- | -- | --
Early End | The execution stops at an intermediate stage | when the qwen-omni model only outputs text, it does not need to go through the audio module. | We need to create a P2P connection from the all potential endings stages to the main process so that we can pass the data directly without going through unrequired stages.
Cyclic Flow | The data might be transfered to the previous stage | VibeVoice implements a cyclic dataflow where the diffusion head's output is fed back to the LLM for the next generation step, creating a continuous loop during inference. | We can specify the destination to the previous stage in object message queue
Multiple Receivers | A stage's output needs to be sent to multiple receiving stages. | Fun-Audio-Chat: During generation, the hidden states from the shared LLM layer are passed in parallel to a Text Head for text token prediction and a Speech Refined Head (SRH) to generate high-quality speech tokens at 25Hz resolution. | We can specify multiple destinations in object message queue
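The routing rules in the table reduce to letting each message carry one or more destination stages; a toy sketch (hypothetical names, not SGLang's actual message-queue API):

```python
# Toy sketch of multi-destination routing in an object message queue.
from collections import defaultdict

queues = defaultdict(list)  # stage name -> pending messages

def dispatch(msg, destinations):
    """Fan a stage's output out to one or more receiving stages."""
    for dest in destinations:
        queues[dest].append(msg)

# "Multiple receivers": shared-LLM hidden states go to both heads.
dispatch({"hidden": [0.1, 0.2]}, ["text_head", "speech_head"])
# "Cyclic flow": a diffusion head feeds back to the previous stage.
dispatch({"frame": 0}, ["llm"])

print(sorted(queues))  # ['llm', 'speech_head', 'text_head']
```

"Early end" fits the same scheme: the finishing stage simply lists the main process as its destination, bypassing the remaining stages.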
### Multi-instance
Due to the presence of multiple component models, it can be observed that eac | https://github.com/sgl-project/sglang/issues/16546 | open | [] | 2026-01-06T06:23:37Z | 2026-01-06T07:14:36Z | 0 | FrankLeeeee |
vllm-project/vllm | 31,766 | [Docs] Feedback for `/en/latest/contributing/profiling/` | ### 📚 The doc issue
When I follow this doc and run [OpenAI Server](https://docs.vllm.ai/en/latest/contributing/profiling/#openai-server), I found
> usage: vllm [-h] [-v] {chat,complete,serve,bench,collect-env,run-batch} ...
> vllm: error: unrecognized arguments: --profiler-config {"profiler": "torch", "torch_profiler_dir": "/workspace/vllm_profile"}
I want to know whether this was changed in a newer version.
### Suggest a potential alternative/fix
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/31766 | open | [
"documentation"
] | 2026-01-06T03:15:37Z | 2026-01-06T03:15:37Z | 0 | cyk2018 |
huggingface/tokenizers | 1,926 | [bug] Why is Apple's development for computers with Intel chips not supported in versions above 0.30.0 | Why is Apple's development for computers with Intel chips not supported in versions above 0.30.0? | https://github.com/huggingface/tokenizers/issues/1926 | open | [] | 2026-01-06T03:11:35Z | 2026-01-06T03:18:03Z | 1 | sustly |
sgl-project/sglang | 16,530 | [Bug] DecodingStage VRAM usage surges dramatically | ### Checklist
- [ ] I searched related issues but found no solution.
- [ ] The bug persists in the latest version.
- [ ] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.
- [ ] If this is not a bug report but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.
- [ ] Please use English. Otherwise, it will be closed.
### Describe the bug
Peak GPU memory: 21.18 GB, Remaining GPU memory at peak: 18.82 GB. Components that can stay resident: ['text_encoder', 'vae', 'transformer']
[01-06 02:01:47] Failed to generate output for prompt 1: CUDA out of memory. Tried to allocate 1.22 GiB. GPU 0 has a total capacity of 39.49 GiB of which 371.00 MiB is free. Including non-PyTorch memory, this process has 2.92 GiB memory in use. Process 35135 has 36.14 GiB memory in use. Of the allocated memory 2.44 GiB is allocated by PyTorch, and 0 bytes is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
Traceback (most recent call last):
File "/sgl-workspace/sglang/python/sglang/multimodal_gen/runtime/utils/logging_utils.py", line 466, in log_generation_timer
yield timer
File "/sgl-workspace/sglang/python/sglang/multimodal_gen/runtime/entrypoints/diffusion_generator.py", line 231, in generate
frames = post_process_sample(
^^^^^^^^^^^^^^^^^^^^
File "/sgl-workspace/sglang/python/sglang/multimodal_gen/runtime/entrypoints/utils.py", line 73, in post_process_sample
sample = (sample * 255).clamp(0, 255).to(torch.uint8)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.22 GiB. GPU 0 has a total capacity of 39.49 GiB of which 371.00 MiB is free. Including non-PyTorch memory, this process has 2.92 GiB memory in use. Process 35135 has 36.14 GiB memory in use. Of the allocated memory 2.44 GiB is allocated by PyTorch, and 0 bytes is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[01-06 02:01:47] Completed batch processing. Generated 0 outputs in 375.74 seconds.
[01-06 02:01:47] Generator was garbage collected without being shut down. Attempting to shut down the local server and client.
/usr/lib/python3.12/multiprocessing/resource_tracker.py:254: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
### Reproduction
sglang generate --model-path /data/models/Wan2.2-TI2V-5B-Diffusers --text-encoder-precisions bf16 --dit-precision bf16 --vae-precision fp32 --dit-cpu-offload --vae-cpu-offload --text-encoder-cpu-offload --image-encoder-cpu-offload --pin-cpu-memory --num-gpus 1 --prompt "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage." --num-frames 121 --fps 24 --num-inference-steps 50 --save-output --output-path output --output-file-name wan_ti2v.mp4 --dit-layerwise-offload
### Environment
Python: 3.12.3 (main, Nov 6 2025, 13:44:16) [GCC 13.3.0]
CUDA available: True
GPU 0,1,2,3: NVIDIA A100-PCIE-40GB
GPU 0,1,2,3 Compute Capability: 8.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.9, V12.9.86
CUDA Driver Version: 590.44.01
PyTorch: 2.9.1+cu129
sglang: 0.5.7
sgl_kernel: 0.3.20
flashinfer_python: 0.5.3
flashinfer_cubin: 0.5.3
flashinfer_jit_cache: 0.5.3+cu129
triton: 3.5.1
transformers: 4.57.1
torchao: 0.9.0
numpy: 2.4.0
aiohttp: 3.13.2
fastapi: 0.128.0
hf_transfer: 0.1.9
huggingface_hub: 0.36.0
interegular: 0.3.3
modelscope: 1.33.0
orjson: 3.11.5
outlines: 0.1.11
packaging: 25.0
psutil: 7.2.1
pydantic: 2.12.5
python-multipart: 0.0.21
pyzmq: 27.1.0
uvicorn: 0.40.0
uvloop: 0.22.1
vllm: Module Not Found
xgrammar: 0.1.27
openai: 2.6.1
tiktoken: 0.12.0
anthropic: 0.75.0
litellm: Module Not Found
decord2: 3.0.0
NVIDIA Topology:
GPU0 GPU1 GPU2 GPU3 NIC0 NIC1 NIC2 NIC3 NIC4 NIC5 NIC6 NIC7 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X PIX SYS SYS NODE NODE PIX PIX SYS SYS SYS SYS 0-27,56-83 0 N/A
GPU1 PIX X SYS SYS NODE NODE PIX PIX SYS SYS SYS SYS 0-27,56-83 0 N/A
GPU2 SYS SYS X PIX SYS SYS SYS SYS PIX PIX NODE NODE 28-55,84-111 1 N/A
GPU3 SYS SYS PIX X SYS SYS SYS SYS PIX PIX NODE NODE 28-55,84-111 1 N/A
NIC0 NODE NODE SYS SYS X PIX NODE NODE SYS SYS SYS SYS
NIC1 NODE NODE SYS SYS PIX X NODE NODE SYS SYS SYS SYS
NIC2 PIX PIX SYS SYS NODE NODE X PIX SYS SYS SYS SYS
NIC3 PIX PIX SYS SYS NODE NODE PIX X SYS SYS SYS SYS
NIC4 SYS SYS PIX PIX SYS SYS SYS SYS X PIX NODE NODE
NIC5 SYS SYS PIX PIX SYS SYS SYS SYS | https://github.com/sgl-project/sglang/issues/16530 | open | [] | 2026-01-06T02:15:16Z | 2026-01-06T02:15:16Z | 0 | carloszhang999 |
huggingface/lerobot | 2,753 | Debugging poor eval with SmolVLA and two cameras. | ### Ticket Type
❓ Technical Question
### Environment & System Info
```Shell
- Lerobot running on a Jetson Orin nano Super
- Model trained on a 4090
- SO-ARM-101 model.
- two cameras setup (wrist and top view)
```
### Description
I just trained a 30K-step SmolVLA model on a 73-episode dataset (which is 2 datasets I had, merged together). These two datasets were recorded with the same SO-ARM-101 using two sets of cameras (wrist and top).
I downloaded the model from HF and, after a couple of hiccups because of the missing third camera, got it running on my Jetson Orin Nano Super (the machine I'm using for the robot; training is on my 4090).
But the arm just moved a centimeter and then stayed idle.
I'm trying to debug what could have caused this:
Is it because I'm running on my Jetson and SmolVLA is too much for this little board? (I don't think so, but maybe?)
Maybe merging the datasets created more noise than it helped? (The datasets were recorded at different times of the day.)
Could the fact that I only have two cameras, and had to remap them and create a dummy third camera for the third-camera parameter, have confused the model?
Does anyone have any insight to share? Thanks in advance!
### Context & Reproduction
collected datasets (two datasets)
merged datasets into one and uploaded to HF
trained a model based on smovla-base (had to create a dummy camera for the third camera)
run on the jetson orin the trained model.
### Relevant logs or stack trace
```Shell
```
### Checklist
- [x] I have searched existing tickets to ensure this isn't a duplicate.
- [x] I am using the latest version of the `main` branch.
- [x] I have verified this is not an environment-specific problem.
### Additional Info / Workarounds
_No response_ | https://github.com/huggingface/lerobot/issues/2753 | open | [
"question",
"policies",
"dataset",
"sensors",
"training",
"evaluation"
] | 2026-01-05T18:25:13Z | 2026-01-05T18:25:27Z | null | vettorazi |
vllm-project/vllm | 31,726 | [Usage]: Why does `vllm serve` keep filling up my system disk when loading a model from a network mount? |
### Your current environment
```
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version : Could not collect
CMake version : version 3.22.1
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.9.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.11.10 (main, Oct 3 2024, 07:29:13) [GCC 11.2.0] (64-bit runtime)
Python platform : Linux-5.10.134-18.0.5.lifsea8.x86_64-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.4.131
CUDA_MODULE_LOADING set to :
GPU models and configuration :
GPU 0: NVIDIA H20
GPU 1: NVIDIA H20
GPU 2: NVIDIA H20
GPU 3: NVIDIA H20
GPU 4: NVIDIA H20
GPU 5: NVIDIA H20
GPU 6: NVIDIA H20
GPU 7: NVIDIA H20
Nvidia driver version : 560.35.03
cuDNN version : Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8469C
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 8
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd ida arat hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm uintr md_clear serialize tsxldtrk amx_bf16 avx512_fp16 amx_tile amx_int8 arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 4.5 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 192 MiB (96 instances)
L3 cache: 195 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-95
NUMA node1 CPU(s): 96-191
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
==============================
Versions of relevant libraries
==============================
[pip3] botorch==0.8.5
[pip3] flashinfer-py | https://github.com/vllm-project/vllm/issues/31726 | open | [
"usage"
] | 2026-01-05T14:50:19Z | 2026-01-05T15:30:39Z | 5 | tingjun-cs |
huggingface/diffusers | 12,913 | Is Lumina2Pipeline's mu calculation correct? | ### Describe the bug
Description
While reviewing the current main-branch implementation of pipeline_lumina2, I noticed a potential bug in the calculation of mu within the pipeline's __call__.
In the following section of the code:
https://github.com/huggingface/diffusers/blob/5ffb65803d0ddc5e3298c35df638ceed5e580922/src/diffusers/pipelines/lumina2/pipeline_lumina2.py#L484-L503
The latent tensor appears to have the shape:
(batch_size, num_channels_latents, height, width)
However, later in the same file:
https://github.com/huggingface/diffusers/blob/5ffb65803d0ddc5e3298c35df638ceed5e580922/src/diffusers/pipelines/lumina2/pipeline_lumina2.py#L699-L706
the value latent.shape[1] (i.e., num_channels_latents) is passed as the argument for image_seq_len when computing mu.
This seems incorrect, since image_seq_len should represent the number of image tokens or sequence length, not the number of latent channels.
Expected Behavior
image_seq_len should likely correspond to the number of spatial tokens derived from (height, width) (or another tokenization step), rather than the number of latent channels.
Actual Behavior
The current implementation uses latent.shape[1] as image_seq_len, which likely leads to unintended behavior in the computation of mu and subsequent sampling steps.
Suggested Fix
Review the logic where image_seq_len is passed, and ensure it reflects the correct sequence length dimension (possibly derived from spatial resolution or token count, rather than channel count).
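For illustration, a toy sketch of the distinction being reported (the shapes and patch factor are assumptions for illustration, not the actual diffusers code):

```python
# Toy illustration: channel count vs. spatial token count for a latent
# of shape (batch, channels, height, width).
batch, channels, height, width = 1, 16, 64, 64
patch_size = 2  # hypothetical patchification factor

wrong_seq_len = channels  # what the current code effectively passes
expected_seq_len = (height // patch_size) * (width // patch_size)  # spatial tokens

print(wrong_seq_len, expected_seq_len)  # 16 1024
```

Since `mu` is a function of the sequence length, passing the channel count instead of the token count would shift it for essentially every resolution.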
### Reproduction
At the moment, I don’t have a copy/paste runnable MRE because this was identified via manual logic review rather than reproducing the behavior in a runtime environment.
### Logs
```shell
```
### System Info
Diffusers==0.36.0
Python==3.13
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/12913 | open | [
"bug"
] | 2026-01-05T14:30:01Z | 2026-01-05T18:07:36Z | 1 | hwangdonghyun |
vllm-project/vllm | 31,689 | [Feature][Quantization][Help Wanted]: Clean up GPTQ + AWQ Quantization | ### 🚀 The feature, motivation and pitch
We are in process of cleaning up the quantization integrations in vllm (see the FusedMoE refactor PRs I am working on)
In general, this means we are trying to separate concerns of the quantization INTEGRATION (on disk format --- responsible for weight loading) from the quantization KERNEL (runtime format --- responsible for executing at runtime).
For GPTQ/AWQ, we have tech debt in that we have different quantization integrations (`gptq.py`, `gptq_marlin.py`, `awq.py`, `awq_marlin.py`, `wna16.py`, `cpuwna16.py`) and we use the `override_quantization_method` to select between them during initialization. This is generally hard to follow and does not adhere to the abstractions we have in vllm.
Currently, some (but not all) quantization schemes follow the proper abstractions, where we have a full separation of concerns. Examples are:
- [Fp8Moe](https://github.com/vllm-project/vllm/blob/b53b89fdb3f4a857eabee5091187cfa937502711/vllm/model_executor/layers/quantization/fp8.py#L722) which follows the proper structure to run a variety of different kernels hooked up to fp8 models
- [CompressedTensorsWNA16](https://github.com/vllm-project/vllm/blob/b53b89fdb3f4a857eabee5091187cfa937502711/vllm/model_executor/layers/quantization/compressed_tensors/schemes/compressed_tensors_wNa16.py) which follows the proper structure to run a variety of different kernels hooked up to wna16 models
We need to apply this to gptq and awq.
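To illustrate the separation of concerns being proposed, a toy sketch (class names and signatures are hypothetical, not vllm's actual API):

```python
# Hypothetical sketch: the integration owns the on-disk format (weight
# loading), the kernel owns the runtime format (execution), and they can
# be mixed and matched independently.
from abc import ABC, abstractmethod

class QuantIntegration(ABC):
    """On-disk format: responsible for weight loading."""
    @abstractmethod
    def load_weights(self, checkpoint: dict) -> dict: ...

class QuantKernel(ABC):
    """Runtime format: responsible for execution."""
    @abstractmethod
    def apply(self, weights: dict, x: list) -> list: ...

class GPTQIntegration(QuantIntegration):
    def load_weights(self, checkpoint):
        # Dequantize-on-load stand-in: scale int weights back to floats.
        return {"w": [q * checkpoint["scale"] for q in checkpoint["qweight"]]}

class NaiveKernel(QuantKernel):
    def apply(self, weights, x):
        return [wi * xi for wi, xi in zip(weights["w"], x)]

ckpt = {"qweight": [2, 4], "scale": 0.5}
layer_weights = GPTQIntegration().load_weights(ckpt)
print(NaiveKernel().apply(layer_weights, [1.0, 1.0]))  # [1.0, 2.0]
```

With this split, kernel selection becomes a runtime decision instead of something baked into `override_quantization_method` at initialization.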
> WARNING: this is a significant undertaking and will be scrutinized heavily for code quality. The PR author should reach out to @robertgshaw2-redhat in slack to discuss design and on-going progress during the PR creation.
Thanks in advance for any help!!!
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/31689 | open | [
"help wanted",
"feature request"
] | 2026-01-04T20:56:04Z | 2026-01-06T04:42:19Z | 7 | robertgshaw2-redhat |
vllm-project/vllm | 31,683 | [Feature]: Error Logging Redesign | ### 🚀 The feature, motivation and pitch
vLLM has a multiprocess architecture with:
- API Server --> EngineCore --> [N] Workers
As a result, clean error-message logging is challenging, since the error that surfaces in the API server will often not be the root-cause error. An example of this is at startup time:
```
(vllm) [robertgshaw2-redhat@nm-automation-h100-standalone-1-preserve vllm]$ just launch_cutlass_tensor
VLLM_USE_DEEP_GEMM=0 VLLM_USE_FLASHINFER_MOE_FP8=1 VLLM_FLASHINFER_MOE_BACKEND=throughput chg run --gpus 2 -- vllm serve amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV -tp 2 --port 8002 --max-model-len 8192
Reserved 2 GPU(s): [1 3] for command execution
(APIServer pid=116718) INFO 01-04 14:48:03 [api_server.py:1277] vLLM API server version 0.13.0rc2.dev185+g00a8d7628
(APIServer pid=116718) INFO 01-04 14:48:03 [utils.py:253] non-default args: {'model_tag': 'amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV', 'port': 8002, 'model': 'amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV', 'max_model_len': 8192, 'tensor_parallel_size': 2}
(APIServer pid=116718) INFO 01-04 14:48:04 [model.py:522] Resolved architecture: MixtralForCausalLM
(APIServer pid=116718) INFO 01-04 14:48:04 [model.py:1510] Using max model len 8192
(APIServer pid=116718) WARNING 01-04 14:48:04 [vllm.py:1453] Current vLLM config is not set.
(APIServer pid=116718) INFO 01-04 14:48:04 [scheduler.py:231] Chunked prefill is enabled with max_num_batched_tokens=2048.
(APIServer pid=116718) INFO 01-04 14:48:04 [vllm.py:635] Disabling NCCL for DP synchronization when using async scheduling.
(APIServer pid=116718) INFO 01-04 14:48:04 [vllm.py:640] Asynchronous scheduling is enabled.
(APIServer pid=116718) INFO 01-04 14:48:05 [scheduler.py:231] Chunked prefill is enabled with max_num_batched_tokens=8192.
(EngineCore_DP0 pid=116936) INFO 01-04 14:48:12 [core.py:96] Initializing a V1 LLM engine (v0.13.0rc2.dev185+g00a8d7628) with config: model='amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV', speculative_config=None, tokenizer='amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=8192, download_dir=None, load_format=auto, tensor_parallel_size=2, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=fp8, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, structured_outputs_config=StructuredOutputsConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_parser='', reasoning_parser_plugin='', enable_in_reasoning=False), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, kv_cache_metrics=False, kv_cache_metrics_sample=0.01, cudagraph_metrics=False, enable_layerwise_nvtx_tracing=False, enable_mfu_metrics=False, enable_mm_processor_stats=False), seed=0, served_model_name=amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV, enable_prefix_caching=True, enable_chunked_prefill=True, pooler_config=None, compilation_config={'level': None, 'mode': <CompilationMode.VLLM_COMPILE: 3>, 'debug_dump_path': None, 'cache_dir': '', 'compile_cache_save_format': 'binary', 'backend': 'inductor', 'custom_ops': ['none'], 'splitting_ops': ['vllm::unified_attention', 'vllm::unified_attention_with_output', 'vllm::unified_mla_attention', 'vllm::unified_mla_attention_with_output', 'vllm::mamba_mixer2', 'vllm::mamba_mixer', 'vllm::short_conv', 'vllm::linear_attention', 'vllm::plamo2_mamba_mixer', 'vllm::gdn_attention_core', 'vllm::kda_attention', 'vllm::sparse_attn_indexer'], 'compile_mm_encoder': False, 'compile_sizes': [], 'compile_ranges_split_points': [8192], 'inductor_compile_config': 
{'enable_auto_functionalized_v2': False, 'combo_kernels': True, 'benchmark_combo_kernel': True}, 'inductor_passes': {}, 'cudagraph_mode': <CUDAGraphMode.FULL_AND_PIECEWISE: (2, 1)>, 'cudagraph_num_of_warmups': 1, 'cudagraph_capture_sizes': [1, 2, 4, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104, 112, 120, 128, 136, 144, 152, 160, 168, 176, 184, 192, 200, 208, 216, 224, 232, 240, 248, 256, 272, 288, 304, 320, 336, 352, 368, 384, 400, 416, 432, 448, 464, 480, 496, 512], 'cudagraph_copy_inputs': False, 'cudagraph_specialize_lora': True, 'use_inductor_graph_partition': False, 'pass_config': {'fuse_norm_quant': False, 'fuse_act_quant': False, 'fuse_attn_quant': False, 'eliminate_noops': True, 'enable_sp': False, 'fuse_gemm_comms': False, 'fuse_allreduce_rms': False}, 'max_cudagraph_capture_size': 512, 'dynamic_shapes_config': {'type': <DynamicShapesType.BACKED: 'backed'>, 'evaluate_guards': False}, 'local_cache_dir': None}
(EngineCore_DP0 pid=116936) WARNING 01-04 14:48:12 [multiproc_executor.py:882] Reducing Torch parallelism from 80 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
INFO 01-04 14:48:20 [parallel_state.py:1214] world_size=2 | https://github.com/vllm-project/vllm/issues/31683 | open | [
"help wanted",
"feature request"
] | 2026-01-04T14:53:38Z | 2026-01-04T14:53:43Z | 0 | robertgshaw2-redhat |
sgl-project/sglang | 16,362 | [Bug] DeepSeek-V3.2 detects EOS while reasoning | ### Checklist
- [x] I searched related issues but found no solution.
- [x] The bug persists in the latest version.
- [x] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.
- [x] If this is not a bug report but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.
- [ ] Please use English. Otherwise, it will be closed.
### Describe the bug
When making reasoning requests to the DeepSeek-V3.2 model, I found that, randomly, only the reasoning content appears while both the final content and the function call are empty. This happens in roughly 1 in 5 requests, even though my request expects a function call to be returned.
During debugging, I discovered that an EOS token is detected during the reasoning phase. Is there a convenient way to replace the EOS with `</think>`?
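As a client-side workaround sketch (not an SGLang API; the function name, the tag handling, and the default EOS string are assumptions), a premature EOS inside an open reasoning block could be rewritten before parsing:

```python
def close_reasoning_on_eos(text: str, eos: str = "<|end▁of▁sentence|>") -> str:
    """Hypothetical post-processing: if generation stopped while still inside
    an open <think> block, strip any trailing EOS token and close the block
    with </think> so the reasoning segment can still be parsed downstream."""
    if "<think>" in text and "</think>" not in text:
        return text.replace(eos, "").rstrip() + "</think>"
    return text
```

This only repairs the parse on the client side; it does not recover the missing final content or function call, so it is a stopgap rather than a fix for the early-EOS behavior itself.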
### Reproduction
/
### Environment
/ | https://github.com/sgl-project/sglang/issues/16362 | open | [] | 2026-01-04T02:43:14Z | 2026-01-04T02:43:14Z | 0 | duzeyan |
vllm-project/vllm | 31,646 | [Usage]: How can I use GPU12 as standalone KV LMCache? | ### Your current environment
```text
Collecting environment information...
uv is set
==============================
System Info
==============================
OS : Ubuntu 24.04.3 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : Could not collect
CMake version : Could not collect
Libc version : glibc-2.39
==============================
PyTorch Info
==============================
PyTorch version : 2.9.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.3 (main, Nov 6 2025, 13:44:16) [GCC 13.3.0] (64-bit runtime)
Python platform : Linux-6.8.12-13-pve-x86_64-with-glibc2.39
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.8.61
CUDA_MODULE_LOADING set to :
GPU models and configuration :
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
GPU 4: NVIDIA GeForce RTX 3090
GPU 5: NVIDIA GeForce RTX 3090
GPU 6: NVIDIA GeForce RTX 3090
GPU 7: NVIDIA GeForce RTX 3090
GPU 8: NVIDIA GeForce RTX 3090
GPU 9: NVIDIA GeForce RTX 3090
GPU 10: NVIDIA GeForce RTX 3090
GPU 11: NVIDIA GeForce RTX 3090
GPU 12: NVIDIA GeForce RTX 3090
Nvidia driver version : 570.172.08
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-9,11,13-24,26-50,52-63
Off-line CPU(s) list: 10,12,25,51
Vendor ID: AuthenticAMD
BIOS Vendor ID: Advanced Micro Devices, Inc.
Model name: AMD EPYC 7532 32-Core Processor
BIOS Model name: AMD EPYC 7532 32-Core Processor Unknown CPU @ 2.4GHz
BIOS CPU family: 107
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 120%
CPU max MHz: 2400.0000
CPU min MHz: 1500.0000
BogoMIPS: 4799.61
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 256 MiB (16 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: | https://github.com/vllm-project/vllm/issues/31646 | open | [
"usage"
] | 2026-01-03T13:25:41Z | 2026-01-03T13:25:41Z | 0 | joshuakoh1 |
vllm-project/vllm | 31,624 | [Bug]: ModelOpt Llama-4 Checkpoints Take 5+ minutes to load | ### 🚀 The feature, motivation and pitch
In working on some MoE refactors, I discovered that Llama-4 with ModelOpt takes 5+ minutes to load weights, even from the CPU page cache.
- https://huggingface.co/nvidia/Llama-4-Scout-17B-16E-Instruct-FP8
The root cause is basically this hack logic to load the state dict that ModelOpt uses
- https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/llama4.py#L439-L523 [modelopt is the fused case]
What happens is that the CPU tensor (loaded weight) that we are going to load into the GPU tensor (param) becomes non-contiguous due to this logic. As a result, when we eventually call `copy_()` from CPU to GPU, we are calling it on a non-contiguous CPU tensor, which takes 3-4 s per weight.
To hack around this for local R&D, I simply move the loaded_weight to the GPU immediately. This makes the gather happen on the GPU, which accelerates things a lot, but it isn't reasonable as an actual solution.
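A minimal sketch of the fast-copy idea (the function name and shapes are illustrative, not the actual vLLM weight-loader code):

```python
import torch

def load_fused_expert_weight(param: torch.Tensor, loaded_weight: torch.Tensor) -> None:
    # Hypothetical illustration of the problem described above: if the
    # gather logic produced a non-contiguous CPU view, copy_() falls back
    # to a slow element-wise host->device path. Making the source
    # contiguous first (or staging it on the GPU, e.g. loaded_weight.cuda())
    # restores a fast bulk transfer.
    if not loaded_weight.is_contiguous():
        loaded_weight = loaded_weight.contiguous()
    param.copy_(loaded_weight)
```

The contiguous-first variant keeps the extra copy on the CPU, while the GPU-staging variant trades host memory traffic for a device-side gather; neither changes the loaded values.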
We should investigate where the logic in the weight loader can avoid creating non-contiguous CPU tensors
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/31624 | open | [
"bug",
"help wanted",
"good first issue",
"feature request"
] | 2026-01-02T15:18:14Z | 2026-01-06T02:42:32Z | 6 | robertgshaw2-redhat |
huggingface/lerobot | 2,741 | XVLA: Clarification on provided lerobot/xvla-base model checkpoint and documentation | ### Ticket Type
❓ Technical Question
### Environment & System Info
```Shell
```
### Description
Dear lerobot team,
I hope you had a good start to 2026, and thanks for the great work on making X-VLA natively available via lerobot.
I have a few questions regarding the _lerobot/xvla-base_ checkpoint and the information provided in the [documentation](https://huggingface.co/docs/lerobot/en/xvla#-base-model) about it:
1. You write in the documentation that the checkpoint has been trained with a two-stage approach:
> A 0.9B parameter instantiation of X-VLA, trained with a carefully designed data processing and learning recipe. The training pipeline consists of two phases:
Phase I: Pretraining - Pretrained on 290K episodes from Droid, Robomind, and Agibot, spanning seven platforms across five types of robotic arms (single-arm to bi-manual setups). By leveraging soft prompts to absorb embodiment-specific variations, the model learns an embodiment-agnostic generalist policy.
Phase II: Domain Adaptation - Adapted to deployable policies for target domains. A new set of soft prompts is introduced and optimized to encode the hardware configuration of the novel domain, while the pretrained backbone remains frozen.
I was now wondering whether _lerobot/xvla-base_ has really been trained with domain adaptation already, or whether it has only been pre-trained as described in the X-VLA paper, i.e. with 290k trajectories from DROID, RoboMind, etc. If the latter, it might be clearer to update the documentation to remove Phase II to avoid confusion. If _lerobot/xvla-base_ has really been trained with domain adaptation, could you please explain why this was done for a base checkpoint and which datasets/training hyperparameters were chosen for this (this is not detailed in the paper).
2. You mention [here](https://huggingface.co/docs/lerobot/en/xvla#2-domain-ids) that _lerobot/xvla-base_ has been trained on the following domain_ids:
> | Dataset Name | Domain ID |
> | --- | --- |
> | Bridge | 0 |
> | RT1 | 1 |
> | Calvin | 2 |
> | libero | 3 |
> | widowx-air | 4 |
> | AIR-AGILEX-HQ | 5 |
> | robotwin2_abs_ee | 6 |
> | robotwin2_clean | 6 |
> | robocasa-human | 7 |
> | VLABench | 8 |
> | AGIBOT-challenge | 9 |
> | AIR-AGILEX | 10 |
> | AIRBOT | 18 |
I was wondering whether this is correct because I expected _lerobot/xvla-base_ (as described in 1.) to have been pre-trained on DROID, RoboMind and Agibot. Based on the [original code base](https://github.com/2toinf/X-VLA/blob/main/datasets/domain_config.py), I would have expected that it was pretrained on the following domain_ids:
```
# pretraining
"robomind-franka": 11,
"robomind-ur": 12,
"Droid-Left": 13,
"Droid-Right": 14,
"AGIBOT": 15,
"robomind-agilex": 16,
"robomind-franka-dual": 17
```
Is it possible that in the documentation the pretraining and finetuning datasets/domain ids got mixed up? Or is my understanding simply incorrect? If the pretraining and finetuning domain ids really got mixed up, would it make more sense to choose one of the pretraining domain ids (e.g. 13) when fine-tuning _lerobot/xvla-base_ with tasks collected on a setup very similar to DROID?
Thank you very much for your response!
### Context & Reproduction
_No response_
### Relevant logs or stack trace
```Shell
```
### Checklist
- [ ] I have searched existing tickets to ensure this isn't a duplicate.
- [ ] I am using the latest version of the `main` branch.
- [ ] I have verified this is not an environment-specific problem.
### Additional Info / Workarounds
_No response_ | https://github.com/huggingface/lerobot/issues/2741 | open | [
"documentation",
"question",
"policies",
"dataset",
"training"
] | 2026-01-02T08:38:03Z | 2026-01-04T15:54:55Z | null | gianlucageraci |
huggingface/datasets | 7,927 | Using Stateful Dataloader with Split Dataset By Node and DCP for DDP | ### Describe the bug
I am trying to determine how to save and load the StatefulDataLoader state with DCP and `split_dataset_by_node` for DDP.
Currently, I am running into the issue where I am receiving a slow resume.
```
Neither dataset nor iter(dataset) defines state_dict/load_state_dict so we are naively fast-forwarding your dataset by 5000 steps. For more efficient resumes, please implement `state_dict` and `load_state_dict` in your IterableDataset and/or iterator.
```
### Steps to reproduce the bug
Say we have a streaming dataset:
```python
class StreamingDataset(IterableDataset):
def __init__(
self,
path: str,
tokenizer: AutoTokenizer,
name: Optional[str] = None,
split: str = "train",
max_length: int = 2048,
ddp_rank: int = 0,
ddp_world_size: int = 1,
):
dataset = load_dataset(path, name, split=split, streaming=True)
self.train_dataset = split_dataset_by_node(
dataset=dataset, rank=ddp_rank, world_size=ddp_world_size
)
self.tokenizer = tokenizer
self.max_length = max_length
def __iter__(self):
for sample in iter(self.train_dataset):
tokenized = self.tokenizer(
sample["text"],
padding="max_length",
truncation=True,
max_length=self.max_length,
return_special_tokens_mask=True,
)
yield tokenized
```
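One way to avoid the naive fast-forward from the warning above is to expose the stream's own checkpoint state through the wrapper. The sketch below assumes a `datasets` version whose streaming `IterableDataset` provides `state_dict`/`load_state_dict` (available in recent releases); the class name is illustrative:

```python
from torch.utils.data import IterableDataset

class ResumableWrapper(IterableDataset):
    """Hypothetical sketch (not the actual issue code): forward
    state_dict/load_state_dict from the wrapper to the underlying
    Hugging Face streaming dataset so StatefulDataLoader can checkpoint
    the stream position instead of naively replaying it step by step."""

    def __init__(self, inner):
        self.inner = inner  # e.g. the result of split_dataset_by_node(...)

    def __iter__(self):
        yield from self.inner

    def state_dict(self):
        return self.inner.state_dict()

    def load_state_dict(self, state):
        self.inner.load_state_dict(state)
```

StatefulDataLoader probes the dataset (or its iterator) for these two methods, so forwarding them is enough to replace the slow fast-forward with an O(1) resume.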
We load that dataset into the Stateful Dataloader:
```python
trainloader = StatefulDataLoader(
dataset=train_dataset,
batch_size=args.batch_size,
collate_fn=data_collator,
)
```
We then have code for checkpointing and resuming the state using DCP:
```python
```python
import os
from typing import Optional
import torch
import torch.distributed as dist
import torch.distributed.checkpoint as dcp
from torch.distributed.checkpoint.format_utils import dcp_to_torch_save
from torch.distributed.checkpoint.state_dict import get_state_dict, set_state_dict
from blitzbert.utils import print_rank_0
class Checkpoint:
def __init__(
self,
model: torch.nn.Module,
optimizer: torch.optim.Optimizer,
trainloader,
step: Optional[int] = None,
epoch: Optional[int] = None,
):
self.model = model
self.optimizer = optimizer
self.trainloader = trainloader
self.step = step
self.epoch = epoch
def get_state_dict(self) -> dict:
model_state_dict, optimizer_state_dict = get_state_dict(
self.model, self.optimizer
)
return {
"model": model_state_dict,
"optim": optimizer_state_dict,
"trainloader": self.trainloader.state_dict(),
"step": self.step,
"epoch": self.epoch,
}
def save_checkpoint(
args,
model,
optimizer,
trainloader,
step: Optional[int] = None,
epoch: Optional[int] = None,
final_checkpoint: bool = False,
):
checkpointer = Checkpoint(
model=model,
optimizer=optimizer,
trainloader=trainloader,
step=step,
epoch=epoch,
)
state_dict = checkpointer.get_state_dict()
if final_checkpoint:
print_rank_0("Saving final model")
save_path = os.path.join(args.checkpoint_dir, "final_model")
dcp.save(state_dict, checkpoint_id=save_path)
dist.barrier()
single_file_path = os.path.join(args.checkpoint_dir, "final_checkpoint.pth")
dcp_to_torch_save(save_path, single_file_path)
else:
if step % args.checkpointing_steps == 0 and step != 0:
print_rank_0(f"Saving model at step: {step}")
save_path = os.path.join(args.checkpoint_dir, f"epoch_{epoch}_step_{step}")
dcp.save(state_dict, checkpoint_id=save_path)
dist.barrier()
def load_checkpoint(args, model, optimizer, trainloader):
if not args.resume_from_checkpoint:
return 0, 0
checkpoint_path = args.resume_from_checkpoint
print_rank_0(f"Resumed from checkpoint: {checkpoint_path}")
checkpointer = Checkpoint(
model=model,
optimizer=optimizer,
trainloader=trainloader,
)
state_dict = checkpointer.get_state_dict()
dcp.load(
state_dict=state_dict,
checkpoint_id=checkpoint_path,
)
set_state_dict(
model,
optimizer,
model_state_dict=state_dict["model"],
optim_state_dict=state_dict["optim"],
)
trainloader.load_state_dict(state_dict["trainloader"])
step = state_dict["step"]
epoch = state_dict["epoch"]
return step, epoch
```
and then loading the checkpoint:
```python
completed_steps, current_epoch = load_checkpoint(
args=args, model=model, optimizer=optimizer, trainloader=trainloader
)
```
### Expected behavior
If I implement what the warning says:
```python
| https://github.com/huggingface/datasets/issues/7927 | open | [] | 2026-01-01T22:27:07Z | 2026-01-02T02:48:21Z | 2 | conceptofmind |
vllm-project/vllm | 31,609 | [Bug][ModelOpt]: FlashInfer CUTLASS MoE Accuracy Degraded (Llama4) | ### Your current environment
H100, B200 ---> vllm 0.13.0
### 🐛 Describe the bug
- running the following:
```bash
# modelopt
MODEL_TENSOR := "nvidia/Llama-4-Scout-17B-16E-Instruct-FP8"
GPUS := "2"
PORT := "8001"
# sm90 / sm100
launch_cutlass_tensor:
VLLM_USE_DEEP_GEMM=0 VLLM_USE_FLASHINFER_MOE_FP8=1 VLLM_FLASHINFER_MOE_BACKEND=throughput vllm serve {{MODEL_TENSOR}} -tp {{GPUS}} --port {{PORT}} --max-model-len 8192
# sm100
launch_trtllm_tensor:
VLLM_USE_DEEP_GEMM=0 VLLM_USE_FLASHINFER_MOE_FP8=1 VLLM_FLASHINFER_MOE_BACKEND=latency chg run --gpus {{GPUS}} -- vllm serve {{MODEL_TENSOR}} -tp {{GPUS}} --max-model-len 8192
eval_block:
lm_eval \
--model local-completions \
--tasks gsm8k \
--model_args "model={{MODEL_BLOCK}},base_url=http://localhost:{{PORT}}/v1/completions,num_concurrent=1000,tokenized_requests=False"
eval_tensor:
lm_eval \
--model local-completions \
--tasks gsm8k \
--model_args "model={{MODEL_TENSOR}},base_url=http://localhost:{{PORT}}/v1/completions,num_concurrent=1000,tokenized_requests=False"
```
with cutlass:
```bash
local-completions (model=nvidia/Llama-4-Scout-17B-16E-Instruct-FP8,base_url=http://localhost:8001/v1/completions,num_concurrent=1000,tokenized_requests=False), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 1
|Tasks|Version| Filter |n-shot| Metric | |Value | |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.7491|± |0.0119|
| | |strict-match | 5|exact_match|↑ |0.7672|± |0.0116|
```
with trtllm:
```bash
local-completions (model=nvidia/Llama-4-Scout-17B-16E-Instruct-FP8,base_url=http://localhost:8000/v1/completions,num_concurrent=1000,tokenized_requests=False), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 1
|Tasks|Version| Filter |n-shot| Metric | |Value | |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.9242|± |0.0073|
| | |strict-match | 5|exact_match|↑ |0.9075|± |0.0080|
```
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/31609 | closed | [
"bug",
"help wanted"
] | 2026-01-01T21:45:48Z | 2026-01-03T20:26:38Z | 2 | robertgshaw2-redhat |
huggingface/trl | 4,766 | Asynchronous generation and training for GRPO? | ### Feature request
GRPOTrainer should send generation requests for the next batch to the vLLM server while it is computing backpropagation, in order to reduce idle time on both the server's and the trainer's GPUs.
### Motivation
Under the current GRPO trainer, generation and backpropagation are sequential, so a lot of runtime is wasted. Since they use different GPUs in the server setting, it would be beneficial to run generation concurrently with backpropagation. This requires the trainer to send the vLLM server the requests for the next batch while processing the current one, and to provide guidance on the ratio of trainer to server GPU counts.
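The proposed overlap can be sketched with simple double-buffering; `generate` and `backprop` stand in for the vLLM request and the training step and are assumptions, not TRL APIs:

```python
import concurrent.futures as cf

def train_with_prefetch(batches, generate, backprop):
    """Hypothetical sketch of the proposal: while the trainer runs
    backprop on batch i, a worker thread has already sent the generation
    request for batch i+1 to the (assumed) vLLM server."""
    with cf.ThreadPoolExecutor(max_workers=1) as pool:
        it = iter(batches)
        try:
            pending = pool.submit(generate, next(it))  # prefetch first batch
        except StopIteration:
            return
        for batch in it:
            completions = pending.result()        # wait for batch i generations
            pending = pool.submit(generate, batch)  # request batch i+1 in background
            backprop(completions)                   # overlaps with generation
        backprop(pending.result())                  # drain the last batch
```

With one in-flight request, the steady-state step time becomes max(generation, backprop) instead of their sum, which is where the trainer/server GPU ratio suggestion comes in.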
### Your contribution
Submit PR in the future. | https://github.com/huggingface/trl/issues/4766 | open | [] | 2026-01-01T08:42:12Z | 2026-01-01T08:42:12Z | 0 | sxndqc |
vllm-project/vllm | 31,574 | [Usage]: Does vLLM support loading a LoRA adapter and DeepSeek-V3.1-Terminus at the same time? | ### Your current environment
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version : Could not collect
CMake version : version 3.22.1
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.9.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.12 (main, Oct 10 2025, 08:52:57) [GCC 11.4.0] (64-bit runtime)
Python platform : Linux-5.10.134-16.3.al8.x86_64-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.9.86
CUDA_MODULE_LOADING set to :
GPU models and configuration :
GPU 0: NVIDIA H20-3e
GPU 1: NVIDIA H20-3e
GPU 2: NVIDIA H20-3e
GPU 3: NVIDIA H20-3e
GPU 4: NVIDIA H20-3e
GPU 5: NVIDIA H20-3e
GPU 6: NVIDIA H20-3e
GPU 7: NVIDIA H20-3e
Nvidia driver version : 570.133.20
cuDNN version : Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.17.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.17.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.17.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.17.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.17.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.17.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.17.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.17.0
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
BIOS Vendor ID: Intel(R) Corporation
Model name: INTEL(R) XEON(R) PLATINUM 8575C
BIOS Model name: INTEL(R) XEON(R) PLATINUM 8575C
CPU family: 6
Model: 207
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 2
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm uintr md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 4.5 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 192 MiB (96 instances)
L3 cache: 640 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user point | https://github.com/vllm-project/vllm/issues/31574 | open | [
"usage"
] | 2025-12-31T10:33:52Z | 2026-01-01T07:09:51Z | 1 | AIR-hl |
sgl-project/sglang | 16,220 | GLM PD disaggregation with MTP | Does GLM support PD disaggregation together with MTP? I tried to test this, but the accept length in the logs is always 1 (the draft tokens are rejected every time) and performance is bad. I use the launch commands below; is something wrong?
Args for the prefill node:
SGLANG_ENABLE_SPEC_V2=1 SGLANG_DISAGGREGATION_QUEUE_SIZE=1 SGLANG_DISAGGREGATION_THREAD_POOL_SIZE=1 MC_TE_METRIC=1 SGLANG_SET_CPU_AFFINITY=true python -m sglang.launch_server --model /models/GLM-4.6-FP8/ --trust-remote-code --watchdog-timeout "1000000" --mem-fraction-static 0.8 --max-running-requests 40 --disaggregation-mode prefill --tp-size 8 --kv-cache-dtype fp8_e4m3 --host 0.0.0.0 --chunked-prefill-size 16384 --attention-backend fa3 --enable-metrics --disaggregation-ib-device mlx5_0 --page-size 64 --speculative-algorithm NEXTN --speculative-num-steps 3 --speculative-eagle-topk 1 --speculative-num-draft-tokens 4
Args for the decode node:
SGLANG_ENABLE_SPEC_V2=1 SGLANG_CLIP_MAX_NEW_TOKENS_ESTIMATION=512 SGLANG_SET_CPU_AFFINITY=true python -m sglang.launch_server --model /models/GLM-4.6-FP8/ --trust-remote-code --watchdog-timeout "1000000" --mem-fraction-static 0.9 --tp-size 8 --kv-cache-dtype fp8_e4m3 --disaggregation-mode decode --prefill-round-robin-balance --host 0.0.0.0 --chunked-prefill-size 16384 --attention-backend fa3 --max-running-requests 80 --enable-metrics --disaggregation-ib-device mlx5_0 --page-size 64 --speculative-algorithm NEXTN --speculative-num-steps 3 --speculative-eagle-topk 1 --speculative-num-draft-tokens 4 | https://github.com/sgl-project/sglang/issues/16220 | open | [] | 2025-12-31T10:19:04Z | 2026-01-04T01:52:56Z | 1 | dongliangwu |