Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 855 | labels stringlengths 4 721 | body stringlengths 1 261k | index stringclasses 13 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 240k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
667,020 | 22,406,524,506 | IssuesEvent | 2022-06-18 03:34:37 | containerd/nerdctl | https://api.github.com/repos/containerd/nerdctl | closed | Running `nerdctl compose up` immediately exists with "no such file or directory" on -json.log file | bug priority/high area/logging | ### Description
Running `nerdctl compose up` (without the `-d` detach flag) immediately exits with an error that the container *-json.log* file does not exist.
Running `nerdctl compose up -d; sleep 2; nerdctl compose logs -f` works; the containers are created and the log files exist and are tailed to stdout.
This behaviour occurs with both v0.20.0 (as installed by Lima) and current master (ddad5009be1618b213d0da43b03afd3df1e41852) compiled from source.
### Steps to reproduce the issue
1. Create `docker-compose.yml` file as below
2. Run `nerdctl compose up`
```
version: '3.1'
services:
db:
image: postgres:latest
environment:
POSTGRES_PASSWORD: postgres
ports:
- 5432:5432
```
### Describe the results you received and expected
Expected results:
Container starts and log file is tailed to stdout.
Received results:
nerdctl immediately exists, with the following output (with `-debug` flag):
```
dave@lima-default psql-example % nerdctl compose up --debug
DEBU[0000] stateDir: /run/user/501/containerd-rootless
DEBU[0000] rootless parent main: executing "/usr/bin/nsenter" with [-r/ -w/home/dave.linux/dev/containers/psql-example --preserve-credentials -m -n -U -t 1610 -F nerdctl compose up --debug]
INFO[0000] Ensuring image postgres:latest
DEBU[0000] filters: [labels."com.docker.compose.project"==psql-example,labels."com.docker.compose.service"==db]
INFO[0000] Creating container psql-example_db_1
DEBU[0000] failed to run [aa-exec -p nerdctl-default -- true]: "aa-exec: ERROR: profile 'nerdctl-default' does not exist\n\n" error="exit status 1"
DEBU[0000] verification process skipped
DEBU[0000] creating anonymous volume "43cd122bfc77156a7e186835af920a0ece71334be29f77c931860b073a872ca7", for "VOLUME /var/lib/postgresql/data"
DEBU[0000] final cOpts is [0x6e5a40 0xaafa50 0x6e5e60 0x6e5bc0 0x6e58b0 0xab0e00 0xab1fa0 0x6e62f0]
INFO[0000] Attaching to logs
DEBU[0000] filters: [labels."com.docker.compose.project"==psql-example,labels."com.docker.compose.service"==db]
db_1 |time="2022-05-20T20:39:51Z" level=fatal msg="failed to open \"/home/dave.linux/.local/share/nerdctl/1935db59/containers/default/d3c03ca1e0c65d0bba6e43d24ab8bd5fc5820d276cf9ebda97894d07b2991d57/d3c03ca1e0c65d0bba6e43d24ab8bd5fc5820d276cf9ebda97894d07b2991d57-json.log\", container is not created with `nerdctl run -d`?: stat /home/dave.linux/.local/share/nerdctl/1935db59/containers/default/d3c03ca1e0c65d0bba6e43d24ab8bd5fc5820d276cf9ebda97894d07b2991d57/d3c03ca1e0c65d0bba6e43d24ab8bd5fc5820d276cf9ebda97894d07b2991d57-json.log: no such file or directory"
INFO[0000] Container "psql-example_db_1" exited
INFO[0000] All the containers have exited
INFO[0000] Stopping containers (forcibly)
INFO[0000] Stopping container psql-example_db_1
```
### What version of nerdctl are you using?
Client:
Version: v0.20.0
OS/Arch: linux/arm64
Git commit: e77e05b5fd252274e3727e0439e9a2d45622ccb9
Server:
containerd:
Version: v1.6.4
GitCommit: 212e8b6fa2f44b9c21b2798135fc6fb7c53efc16
### Are you using a variant of nerdctl? (e.g., Rancher Desktop)
Lima
### Host information
Client:
Namespace: default
Debug Mode: false
Server:
Server Version: v1.6.4
Storage Driver: fuse-overlayfs
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 2
Plugins:
Log: json-file
Storage: native overlayfs stargz fuse-overlayfs
Security Options:
apparmor
seccomp
Profile: default
cgroupns
rootless
Kernel Version: 5.10.0-13-arm64
Operating System: Debian GNU/Linux 11 (bullseye)
OSType: linux
Architecture: aarch64
CPUs: 4
Total Memory: 3.831GiB
Name: lima-default
ID: a2e0f2fe-8b65-4b55-b746-945a35e37c9f
WARNING: AppArmor profile "nerdctl-default" is not loaded.
Use 'sudo nerdctl apparmor load' if you prefer to use AppArmor with rootless mode.
This warning is negligible if you do not intend to use AppArmor.
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled | 1.0 | Running `nerdctl compose up` immediately exists with "no such file or directory" on -json.log file - ### Description
Running `nerdctl compose up` (without the `-d` detach flag) immediately exits with an error that the container *-json.log* file does not exist.
Running `nerdctl compose up -d; sleep 2; nerdctl compose logs -f` works; the containers are created and the log files exist and are tailed to stdout.
This behaviour occurs with both v0.20.0 (as installed by Lima) and current master (ddad5009be1618b213d0da43b03afd3df1e41852) compiled from source.
### Steps to reproduce the issue
1. Create `docker-compose.yml` file as below
2. Run `nerdctl compose up`
```
version: '3.1'
services:
db:
image: postgres:latest
environment:
POSTGRES_PASSWORD: postgres
ports:
- 5432:5432
```
### Describe the results you received and expected
Expected results:
Container starts and log file is tailed to stdout.
Received results:
nerdctl immediately exists, with the following output (with `-debug` flag):
```
dave@lima-default psql-example % nerdctl compose up --debug
DEBU[0000] stateDir: /run/user/501/containerd-rootless
DEBU[0000] rootless parent main: executing "/usr/bin/nsenter" with [-r/ -w/home/dave.linux/dev/containers/psql-example --preserve-credentials -m -n -U -t 1610 -F nerdctl compose up --debug]
INFO[0000] Ensuring image postgres:latest
DEBU[0000] filters: [labels."com.docker.compose.project"==psql-example,labels."com.docker.compose.service"==db]
INFO[0000] Creating container psql-example_db_1
DEBU[0000] failed to run [aa-exec -p nerdctl-default -- true]: "aa-exec: ERROR: profile 'nerdctl-default' does not exist\n\n" error="exit status 1"
DEBU[0000] verification process skipped
DEBU[0000] creating anonymous volume "43cd122bfc77156a7e186835af920a0ece71334be29f77c931860b073a872ca7", for "VOLUME /var/lib/postgresql/data"
DEBU[0000] final cOpts is [0x6e5a40 0xaafa50 0x6e5e60 0x6e5bc0 0x6e58b0 0xab0e00 0xab1fa0 0x6e62f0]
INFO[0000] Attaching to logs
DEBU[0000] filters: [labels."com.docker.compose.project"==psql-example,labels."com.docker.compose.service"==db]
db_1 |time="2022-05-20T20:39:51Z" level=fatal msg="failed to open \"/home/dave.linux/.local/share/nerdctl/1935db59/containers/default/d3c03ca1e0c65d0bba6e43d24ab8bd5fc5820d276cf9ebda97894d07b2991d57/d3c03ca1e0c65d0bba6e43d24ab8bd5fc5820d276cf9ebda97894d07b2991d57-json.log\", container is not created with `nerdctl run -d`?: stat /home/dave.linux/.local/share/nerdctl/1935db59/containers/default/d3c03ca1e0c65d0bba6e43d24ab8bd5fc5820d276cf9ebda97894d07b2991d57/d3c03ca1e0c65d0bba6e43d24ab8bd5fc5820d276cf9ebda97894d07b2991d57-json.log: no such file or directory"
INFO[0000] Container "psql-example_db_1" exited
INFO[0000] All the containers have exited
INFO[0000] Stopping containers (forcibly)
INFO[0000] Stopping container psql-example_db_1
```
### What version of nerdctl are you using?
Client:
Version: v0.20.0
OS/Arch: linux/arm64
Git commit: e77e05b5fd252274e3727e0439e9a2d45622ccb9
Server:
containerd:
Version: v1.6.4
GitCommit: 212e8b6fa2f44b9c21b2798135fc6fb7c53efc16
### Are you using a variant of nerdctl? (e.g., Rancher Desktop)
Lima
### Host information
Client:
Namespace: default
Debug Mode: false
Server:
Server Version: v1.6.4
Storage Driver: fuse-overlayfs
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 2
Plugins:
Log: json-file
Storage: native overlayfs stargz fuse-overlayfs
Security Options:
apparmor
seccomp
Profile: default
cgroupns
rootless
Kernel Version: 5.10.0-13-arm64
Operating System: Debian GNU/Linux 11 (bullseye)
OSType: linux
Architecture: aarch64
CPUs: 4
Total Memory: 3.831GiB
Name: lima-default
ID: a2e0f2fe-8b65-4b55-b746-945a35e37c9f
WARNING: AppArmor profile "nerdctl-default" is not loaded.
Use 'sudo nerdctl apparmor load' if you prefer to use AppArmor with rootless mode.
This warning is negligible if you do not intend to use AppArmor.
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled | priority | running nerdctl compose up immediately exists with no such file or directory on json log file description running nerdctl compose up without the d detach flag immediately exits with an error that the container json log file does not exist running nerdctl compose up d sleep nerdctl compose logs f works the containers are created and the log files exist and are tailed to stdout this behaviour occurs with both as installed by lima and current master compiled from source steps to reproduce the issue create docker compose yml file as below run nerdctl compose up version services db image postgres latest environment postgres password postgres ports describe the results you received and expected expected results container starts and log file is tailed to stdout received results nerdctl immediately exists with the following output with debug flag dave lima default psql example nerdctl compose up debug debu statedir run user containerd rootless debu rootless parent main executing usr bin nsenter with info ensuring image postgres latest debu filters info creating container psql example db debu failed to run aa exec error profile nerdctl default does not exist n n error exit status debu verification process skipped debu creating anonymous volume for volume var lib postgresql data debu final copts is info attaching to logs debu filters db time level fatal msg failed to open home dave linux local share nerdctl containers default json log container is not created with nerdctl run d stat home dave linux local share nerdctl containers default json log no such file or directory info container psql example db exited info all the containers have exited info stopping containers forcibly info stopping container psql example db what version of nerdctl are you using client version os arch linux git commit server containerd version gitcommit are you using a variant of nerdctl e g rancher desktop lima host information client namespace default debug mode false server server version storage driver fuse overlayfs logging driver json file cgroup driver systemd cgroup version plugins log json file storage native overlayfs stargz fuse overlayfs security options apparmor seccomp profile default cgroupns rootless kernel version operating system debian gnu linux bullseye ostype linux architecture cpus total memory name lima default id warning apparmor profile nerdctl default is not loaded use sudo nerdctl apparmor load if you prefer to use apparmor with rootless mode this warning is negligible if you do not intend to use apparmor warning bridge nf call iptables is disabled warning bridge nf call is disabled | 1 |
80,798 | 3,574,630,821 | IssuesEvent | 2016-01-27 12:48:09 | leeensminger/OED_Wetlands | https://api.github.com/repos/leeensminger/OED_Wetlands | closed | Program Mitigation Summary report - Status per Watershed does not display values | bug - high priority | Mitigation Status per Watershed does not display values. The report should be displaying each Mitigation Status per watershed.
There are 7 pages to the report, which include all the 8 digit watersheds. I've reviewed each page looking for data, but here's a snippet:

| 1.0 | Program Mitigation Summary report - Status per Watershed does not display values - Mitigation Status per Watershed does not display values. The report should be displaying each Mitigation Status per watershed.
There are 7 pages to the report, which include all the 8 digit watersheds. I've reviewed each page looking for data, but here's a snippet:

| priority | program mitigation summary report status per watershed does not display values mitigation status per watershed does not display values the report should be displaying each mitigation status per watershed there are pages to the report which include all the digit watersheds i ve reviewed each page looking for data but here s a snippet | 1 |
670,203 | 22,679,846,502 | IssuesEvent | 2022-07-04 08:55:02 | opensrp/opensrp-client-anc | https://api.github.com/repos/opensrp/opensrp-client-anc | closed | [Ona Support Request]: Adding new questions to forms | 🚧 WIP high priority Tech Partner (SID Team) | ### Affected App or Server Version
local Indonesian apk
### What kind of support do you need?
ONA team has shared related documentation but the SID team is facing problems in implementing it. The SID team will share the latest code used for it
### What is the acceptance criteria for your support request?
The SID team can add, remove, and modify questions on the client app
### Relevant Information
_No response_ | 1.0 | [Ona Support Request]: Adding new questions to forms - ### Affected App or Server Version
local Indonesian apk
### What kind of support do you need?
ONA team has shared related documentation but the SID team is facing problems in implementing it. The SID team will share the latest code used for it
### What is the acceptance criteria for your support request?
The SID team can add, remove, and modify questions on the client app
### Relevant Information
_No response_ | priority | adding new questions to forms affected app or server version local indonesian apk what kind of support do you need ona team has shared related documentation but the sid team is facing problems in implementing it the sid team will share the latest code used for it what is the acceptance criteria for your support request the sid team can add remove and modify questions on the client app relevant information no response | 1 |
516,870 | 14,989,766,386 | IssuesEvent | 2021-01-29 04:41:22 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | closed | torch.cuda.memory_allocated() doesn't correctly work until the context is initialized | high priority module: cuda module: docs module: memory usage triaged | ## 🐛 Bug
The manpage https://pytorch.org/docs/stable/cuda.html#torch.cuda.memory_allocated advertises that it's possible to pass an `int` argument but it doesn't work.
And even if I create a device argument it doesn't work correctly in multi-gpu settings - giving me stats for the first device for all other devices.
## To Reproduce
Steps to reproduce the behavior:
```
python -c 'import torch; print(torch.cuda.memory_allocated(0))'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/cuda/memory.py", line 306, in memory_allocated
return memory_stats(device=device)["allocated_bytes.all.current"]
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/cuda/memory.py", line 187, in memory_stats
stats = memory_stats_as_nested_dict(device=device)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/cuda/memory.py", line 197, in memory_stats_as_nested_dict
return torch._C._cuda_memoryStats(device)
RuntimeError: Invalid device argument.
```
If I make use of the context manager device as a device, then it works:
```
python -c 'import torch; print(torch.cuda.memory_allocated(torch.cuda.device("cuda:0")))'
0
```
It also starts working if I first call `torch.cuda.get_device_name(0))`:
```
python -c 'import torch; print(torch.cuda.get_device_name(0)); print(torch.cuda.memory_allocated(0))'
```
It also seems to just give device 0 stats regardless of what I pass to it, e.g. with 2 gpus:
```
python -c 'import torch; x=torch.ones(10<<20).to(0); y=torch.ones(10).to(1);print([torch.cuda.memory_allocated(torch.cuda.device(f"cuda:{id}")) for id in range(torch.cuda.device_count())])'
[41943040, 41943040]
```
`torch.cuda.memory_allocated` for device 1 returns the same stats as device 0.
If I replace `torch.cuda.device` with `torch.device` it works correctly. But then fails again if I don't do the `to()` calls:
```
python -c 'import torch; x=torch.ones(10<<10); y=torch.ones(10);print([torch.cuda.memory_allocated(torch.device(f"cuda:{id}")) for id in range(torch.cuda.device_count())]);'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "<string>", line 1, in <listcomp>
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/cuda/memory.py", line 306, in memory_allocated
return memory_stats(device=device)["allocated_bytes.all.current"]
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/cuda/memory.py", line 187, in memory_stats
stats = memory_stats_as_nested_dict(device=device)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/cuda/memory.py", line 197, in memory_stats_as_nested_dict
return torch._C._cuda_memoryStats(device)
RuntimeError: Invalid device argument.
```
A pattern appears that this function only works once some operation on the desired device has been performed, and then it reports correctly only for that device and not others.
I'm seeing other ops having the same problem: e.g. [this](https://github.com/NVIDIA/apex/issues/1022) and [this](https://github.com/pytorch/pytorch/issues/21819) and the fix is to run some other safe op like `torch.cuda.set_device` which then fixes these unsafe ops.
It appears that the only way I can take a snapshot of memory allocations per device is with context manager:
```
per_device_memory = []
for id in range(torch.cuda.device_count()):
with torch.cuda.device(id):
per_device_memory.append(torch.cuda.memory_allocated())
```
and this is not possible:
```
per_device_memory = [torch.cuda.memory_allocated(id) for id in range(torch.cuda.device_count())]
```
which is very odd.
## Expected behavior
1. Either the documentation is wrong, or there is a bug and it should return the memory usage for an `int` device number.
2. The behavior seems to be inconsistent depending on what cuda functions are called prior to this function
3. `torch.device` and `torch.cuda.device` seem to be interchangeable in some situations but not others
4. it must not report memory usage for 0th device when it's being asked to report memory usage for other devices
## Environment
```
PyTorch version: 1.8.0.dev20201219+cu110
Is debug build: False
CUDA used to build PyTorch: 11.0
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.1 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.18.2
Python version: 3.8 (64-bit runtime)
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: GeForce GTX 1070 Ti
GPU 1: GeForce RTX 3090
Nvidia driver version: 455.45.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.19.4
[pip3] pytorch-lightning==1.1.1rc0
[pip3] pytorch-memlab==0.2.2
[pip3] torch==1.8.0.dev20201219+cu110
[pip3] torchtext==0.6.0
[pip3] torchvision==0.9.0a0+f80b83e
[conda] blas 1.0 mkl
[conda] magma-cuda111 2.5.2 1 pytorch
[conda] mkl 2020.2 256
[conda] mkl-include 2020.2 256
[conda] mkl-service 2.3.0 py38he904b0f_0
[conda] mkl_fft 1.2.0 py38h23d657b_0
[conda] mkl_random 1.1.1 py38h0573a6f_0
[conda] numpy 1.19.4 pypi_0 pypi
[conda] pytorch-lightning 1.1.1rc0 dev_0 <develop>
[conda] pytorch-memlab 0.2.2 pypi_0 pypi
[conda] torch 1.8.0.dev20201219+cu110 pypi_0 pypi
[conda] torchtext 0.6.0 pypi_0 pypi
[conda] torchvision 0.9.0a0+f80b83e dev_0 <develop>
```
cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @ngimel @jlin27 @mruberry | 1.0 | torch.cuda.memory_allocated() doesn't correctly work until the context is initialized - ## 🐛 Bug
The manpage https://pytorch.org/docs/stable/cuda.html#torch.cuda.memory_allocated advertises that it's possible to pass an `int` argument but it doesn't work.
And even if I create a device argument it doesn't work correctly in multi-gpu settings - giving me stats for the first device for all other devices.
## To Reproduce
Steps to reproduce the behavior:
```
python -c 'import torch; print(torch.cuda.memory_allocated(0))'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/cuda/memory.py", line 306, in memory_allocated
return memory_stats(device=device)["allocated_bytes.all.current"]
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/cuda/memory.py", line 187, in memory_stats
stats = memory_stats_as_nested_dict(device=device)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/cuda/memory.py", line 197, in memory_stats_as_nested_dict
return torch._C._cuda_memoryStats(device)
RuntimeError: Invalid device argument.
```
If I make use of the context manager device as a device, then it works:
```
python -c 'import torch; print(torch.cuda.memory_allocated(torch.cuda.device("cuda:0")))'
0
```
It also starts working if I first call `torch.cuda.get_device_name(0))`:
```
python -c 'import torch; print(torch.cuda.get_device_name(0)); print(torch.cuda.memory_allocated(0))'
```
It also seems to just give device 0 stats regardless of what I pass to it, e.g. with 2 gpus:
```
python -c 'import torch; x=torch.ones(10<<20).to(0); y=torch.ones(10).to(1);print([torch.cuda.memory_allocated(torch.cuda.device(f"cuda:{id}")) for id in range(torch.cuda.device_count())])'
[41943040, 41943040]
```
`torch.cuda.memory_allocated` for device 1 returns the same stats as device 0.
If I replace `torch.cuda.device` with `torch.device` it works correctly. But then fails again if I don't do the `to()` calls:
```
python -c 'import torch; x=torch.ones(10<<10); y=torch.ones(10);print([torch.cuda.memory_allocated(torch.device(f"cuda:{id}")) for id in range(torch.cuda.device_count())]);'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "<string>", line 1, in <listcomp>
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/cuda/memory.py", line 306, in memory_allocated
return memory_stats(device=device)["allocated_bytes.all.current"]
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/cuda/memory.py", line 187, in memory_stats
stats = memory_stats_as_nested_dict(device=device)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/cuda/memory.py", line 197, in memory_stats_as_nested_dict
return torch._C._cuda_memoryStats(device)
RuntimeError: Invalid device argument.
```
A pattern appears that this function only works once some operation on the desired device has been performed, and then it reports correctly only for that device and not others.
I'm seeing other ops having the same problem: e.g. [this](https://github.com/NVIDIA/apex/issues/1022) and [this](https://github.com/pytorch/pytorch/issues/21819) and the fix is to run some other safe op like `torch.cuda.set_device` which then fixes these unsafe ops.
It appears that the only way I can take a snapshot of memory allocations per device is with context manager:
```
per_device_memory = []
for id in range(torch.cuda.device_count()):
with torch.cuda.device(id):
per_device_memory.append(torch.cuda.memory_allocated())
```
and this is not possible:
```
per_device_memory = [torch.cuda.memory_allocated(id) for id in range(torch.cuda.device_count())]
```
which is very odd.
## Expected behavior
1. Either the documentation is wrong, or there is a bug and it should return the memory usage for an `int` device number.
2. The behavior seems to be inconsistent depending on what cuda functions are called prior to this function
3. `torch.device` and `torch.cuda.device` seem to be interchangeable in some situations but not others
4. it must not report memory usage for 0th device when it's being asked to report memory usage for other devices
## Environment
```
PyTorch version: 1.8.0.dev20201219+cu110
Is debug build: False
CUDA used to build PyTorch: 11.0
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.1 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.18.2
Python version: 3.8 (64-bit runtime)
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: GeForce GTX 1070 Ti
GPU 1: GeForce RTX 3090
Nvidia driver version: 455.45.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.19.4
[pip3] pytorch-lightning==1.1.1rc0
[pip3] pytorch-memlab==0.2.2
[pip3] torch==1.8.0.dev20201219+cu110
[pip3] torchtext==0.6.0
[pip3] torchvision==0.9.0a0+f80b83e
[conda] blas 1.0 mkl
[conda] magma-cuda111 2.5.2 1 pytorch
[conda] mkl 2020.2 256
[conda] mkl-include 2020.2 256
[conda] mkl-service 2.3.0 py38he904b0f_0
[conda] mkl_fft 1.2.0 py38h23d657b_0
[conda] mkl_random 1.1.1 py38h0573a6f_0
[conda] numpy 1.19.4 pypi_0 pypi
[conda] pytorch-lightning 1.1.1rc0 dev_0 <develop>
[conda] pytorch-memlab 0.2.2 pypi_0 pypi
[conda] torch 1.8.0.dev20201219+cu110 pypi_0 pypi
[conda] torchtext 0.6.0 pypi_0 pypi
[conda] torchvision 0.9.0a0+f80b83e dev_0 <develop>
```
cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @ngimel @jlin27 @mruberry | priority | torch cuda memory allocated doesn t correctly work until the context is initialized 🐛 bug the manpage advertises that it s possible to pass an int argument but it doesn t work and even if i create a device argument it doesn t work correctly in multi gpu settings giving me stats for the first device for all other devices to reproduce steps to reproduce the behavior python c import torch print torch cuda memory allocated traceback most recent call last file line in file home stas envs main lib site packages torch cuda memory py line in memory allocated return memory stats device device file home stas envs main lib site packages torch cuda memory py line in memory stats stats memory stats as nested dict device device file home stas envs main lib site packages torch cuda memory py line in memory stats as nested dict return torch c cuda memorystats device runtimeerror invalid device argument if i make use of the context manager device as a device then it works python c import torch print torch cuda memory allocated torch cuda device cuda it also starts working if i first call torch cuda get device name python c import torch print torch cuda get device name print torch cuda memory allocated it also seems to just give device stats regardless of what i pass to it e g with gpus python c import torch x torch ones to y torch ones to print torch cuda memory allocated for device returns the same stats as device if i replace torch cuda device with torch device it works correctly but then fails again if i don t do the to calls python c import torch x torch ones y torch ones print traceback most recent call last file line in file line in file home stas envs main lib site packages torch cuda memory py line in memory allocated return memory stats device device file home stas envs main lib site packages torch cuda memory py line in memory stats stats memory stats as nested dict device device file home stas envs main lib site packages torch cuda memory py line in memory stats as nested dict return torch c cuda memorystats device runtimeerror invalid device argument a pattern appears that this function only works once some operation on the desired device has been performed and then it reports correctly only for that device and not others i m seeing other ops having the same problem e g and and the fix is to run some other safe op like torch cuda set device which then fixes these unsafe ops it appears that the only way i can take a snapshot of memory allocations per device is with context manager per device memory for id in range torch cuda device count with torch cuda device id per device memory append torch cuda memory allocated and this is not possible per device memory which is very odd expected behavior either the documentation is wrong or there is a bug and it should return the memory usage for an int device number the behavior seems to be inconsistent depending on what cuda functions are called prior to this function torch device and torch cuda device seem to be interchangeable in some situations but not others it must not report memory usage for device when it s being asked to report memory usage for other devices environment pytorch version is debug build false cuda used to build pytorch rocm used to build pytorch n a os ubuntu lts gcc version ubuntu clang version cmake version version python version bit runtime is cuda available true cuda runtime version could not collect gpu models and configuration gpu geforce gtx ti gpu geforce rtx nvidia driver version cudnn version probably one of the following usr lib linux gnu libcudnn so usr lib linux gnu libcudnn so usr lib linux gnu libcudnn adv infer so usr lib linux gnu libcudnn adv train so usr lib linux gnu libcudnn cnn infer so usr lib linux gnu libcudnn cnn train so usr lib linux gnu libcudnn ops infer so usr lib linux gnu libcudnn ops train so hip runtime version n a miopen runtime version n a versions of relevant libraries numpy pytorch lightning pytorch memlab torch torchtext torchvision blas mkl magma pytorch mkl mkl include mkl service mkl fft mkl random numpy pypi pypi pytorch lightning dev pytorch memlab pypi pypi torch pypi pypi torchtext pypi pypi torchvision dev cc ezyang gchanan bdhirsh jbschlosser ngimel mruberry | 1 |
608,942 | 18,851,666,071 | IssuesEvent | 2021-11-11 21:47:38 | nothingworksright/api.recipe.report | https://api.github.com/repos/nothingworksright/api.recipe.report | closed | Better status code responses | 🚀 Enhancement 🧑 Complexity 3 - Some ⚡ Priority 3 - High 🤫 Severity 1 - Mild | For now most things are either 200 or 500. There should be better specific codes and 400 codes depending on the reason for the error. | 1.0 | Better status code responses - For now most things are either 200 or 500. There should be better specific codes and 400 codes depending on the reason for the error. | priority | better status code responses for now most things are either or there should be better specific codes and codes depending on the reason for the error | 1 |
524,742 | 15,222,655,247 | IssuesEvent | 2021-02-18 00:47:36 | canonical-web-and-design/maas-ui | https://api.github.com/repos/canonical-web-and-design/maas-ui | closed | Searching(GUI) for machines with the name starting with number does not work in version 2.8.2 | Priority: High | Bug originally filed by pedrovlf at https://bugs.launchpad.net/bugs/1912529
In version 2.8.2(snap) of MAAS when trying to use the GUI to search for a machine that starts with a name, it doesn't bring any results, it doesn't work(search-maas1.png).
e.g. Machine named 87ontario2-compute016 does not work
When trying to search using hostname:=87ontario2-compute016 this works. | 1.0 | Searching(GUI) for machines with the name starting with number does not work in version 2.8.2 - Bug originally filed by pedrovlf at https://bugs.launchpad.net/bugs/1912529
In version 2.8.2(snap) of MAAS when trying to use the GUI to search for a machine that starts with a name, it doesn't bring any results, it doesn't work(search-maas1.png).
e.g. Machine named 87ontario2-compute016 does not work
When trying to search using hostname:=87ontario2-compute016 this works. | priority | searching gui for machines with the name starting with number does not work in version bug originally filed by pedrovlf at in version snap of maas when trying to use the gui to search for a machine that starts with a name it doesn t bring any results it doesn t work search png e g machine named does not work when trying to search using hostname this works | 1 |
76,572 | 3,489,313,363 | IssuesEvent | 2016-01-03 19:42:45 | horaklukas/posterator | https://api.github.com/repos/horaklukas/posterator | opened | Unsaved titles changes | enhancement priority-high | Add popup window warning about unsaved changes in poster when any action that could cause data lost happen. For better protection against data lost, unsaved changes could be saved in some cache (eg. cookies) and when app start restoring of them would be offered.
| 1.0 | Unsaved titles changes - Add popup window warning about unsaved changes in poster when any action that could cause data lost happen. For better protection against data lost, unsaved changes could be saved in some cache (eg. cookies) and when app start restoring of them would be offered.
| priority | unsaved titles changes add popup window warning about unsaved changes in poster when any action that could cause data lost happen for better protection against data lost unsaved changes could be saved in some cache eg cookies and when app start restoring of them would be offered | 1 |
239,296 | 7,788,422,276 | IssuesEvent | 2018-06-07 04:34:54 | hackoregon/civic-devops | https://api.github.com/repos/hackoregon/civic-devops | closed | Implement smoke test for containers that currently deploy with only dummy tests | Priority: high bug | Some API projects have been implemented with dummy tests that just call `pass`. This was done to get around a Travis issue where it would fail every job when it detected 0 tests in the project.
Since then we've had many containers that deploy without being able to pass the basic "health check" from ALB (which is implemented as a GET request for the base route for each API project, looking for a 200-299 HTTP response). e.g. for the Local Elections API, ALB requests `/local-elections/` and is currently receiving a 404 Not Found response (see [bug 3](https://github.com/hackoregon/elections-2018-backend/issues/3) for details). (Also see #140, and ignore the fact I closed that issue without addressing it.)
So to prevent these jank containers from deploying, and causing a lot of ECS thrash (cf. #149), we need at least a bare minimum test that validates the following:
- gunicorn is running
- Django code isn’t crashing, and
- there’s a 200 responding on at least the tested endpoint
@BrianHGrant is the Django man - as always, he volunteered up the perfect snippet:
```
from django.test import TestCase
from rest_framework.test import APIClient, RequestsClient
class RootTest(TestCase):
""" Test for Root Endpoint"""
def setUp(self):
pass
class RootEndpointsTestCase(TestCase):
def setUp(self):
self.client = APIClient()
def test_list_200_response(self):
response = self.client.get('/transportation-systems/')
assert response.status_code == 200
```
I realize it’s not actually inspecting the response to see that it’s Django, nor whether the response is a valid output e.g. good HTML, valid JSON, anything like that. However, what with the `pass` “tests” we’ve implemented in most projects to get past the test phase of Travis, this has to be far better, and should catch some of the Django startup issues that we’ve been screwed by, before they get deployed to ECS.
Let's get this into all the API projects:
- [x] [disaster-resilience-backend](https://github.com/hackoregon/disaster-resilience-backend)
- [x] [elections-2018-backend](https://github.com/hackoregon/elections-2018-backend)
- [x] [housing-2018](https://github.com/hackoregon/housing-2018)
- [x] [neighborhoods-2018](https://github.com/hackoregon/neighborhoods-2018)
- [x] [transportation-systems-backend-2018](https://github.com/hackoregon/transportation-systems-backend-2018)
| 1.0 | Implement smoke test for containers that currently deploy with only dummy tests - Some API projects have been implemented with dummy tests that just call `pass`. This was done to get around a Travis issue where it would fail every job when it detected 0 tests in the project.
Since then we've had many containers that deploy without being able to pass the basic "health check" from ALB (which is implemented as a GET request for the base route for each API project, looking for a 200-299 HTTP response). e.g. for the Local Elections API, ALB requests `/local-elections/` and is currently receiving a 404 Not Found response (see [bug 3](https://github.com/hackoregon/elections-2018-backend/issues/3) for details). (Also see #140, and ignore the fact I closed that issue without addressing it.)
So to prevent these jank containers from deploying, and causing a lot of ECS thrash (cf. #149), we need at least a bare minimum test that validates the following:
- gunicorn is running
- Django code isn’t crashing, and
- there’s a 200 responding on at least the tested endpoint
@BrianHGrant is the Django man - as always, he volunteered up the perfect snippet:
```
from django.test import TestCase
from rest_framework.test import APIClient, RequestsClient
class RootTest(TestCase):
""" Test for Root Endpoint"""
def setUp(self):
pass
class RootEndpointsTestCase(TestCase):
def setUp(self):
self.client = APIClient()
def test_list_200_response(self):
response = self.client.get('/transportation-systems/')
assert response.status_code == 200
```
I realize it’s not actually inspecting the response to see that it’s Django, nor whether the response is a valid output e.g. good HTML, valid JSON, anything like that. However, what with the `pass` “tests” we’ve implemented in most projects to get past the test phase of Travis, this has to be far better, and should catch some of the Django startup issues that we’ve been screwed by, before they get deployed to ECS.
Let's get this into all the API projects:
- [x] [disaster-resilience-backend](https://github.com/hackoregon/disaster-resilience-backend)
- [x] [elections-2018-backend](https://github.com/hackoregon/elections-2018-backend)
- [x] [housing-2018](https://github.com/hackoregon/housing-2018)
- [x] [neighborhoods-2018](https://github.com/hackoregon/neighborhoods-2018)
- [x] [transportation-systems-backend-2018](https://github.com/hackoregon/transportation-systems-backend-2018)
| priority | implement smoke test for containers that currently deploy with only dummy tests some api projects have been implemented with dummy tests that just call pass this was done to get around a travis issue where it would fail every job when it detected tests in the project since then we ve had many containers that deploy without being able to pass the basic health check from alb which is implemented as a get request for the base route for each api project looking for a http response e g for the local elections api alb requests local elections and is currently receiving a not found response see for details also see and ignore the fact i closed that issue without addressing it so to prevent these jank containers from deploying and causing a lot of ecs thrash cf we need at least a bare minimum test that validates the following gunicorn is running django code isn’t crashing and there’s a responding on at least the tested endpoint brianhgrant is the django man as always he volunteered up the perfect snippet from django test import testcase from rest framework test import apiclient requestsclient class roottest testcase test for root endpoint def setup self pass class rootendpointstestcase testcase def setup self self client apiclient def test list response self response self client get transportation systems assert response status code i realize it’s not actually inspecting the response to see that it’s django nor whether the response is a valid output e g good html valid json anything like that however what with the pass “tests” we’ve implemented in most projects to get past the test phase of travis this has to be far better and should catch some of the django startup issues that we’ve been screwed by before they get deployed to ecs let s get this into all the api projects | 1 |
779,889 | 27,370,313,420 | IssuesEvent | 2023-02-27 22:49:39 | Automattic/woocommerce-payments | https://api.github.com/repos/Automattic/woocommerce-payments | opened | Fatal errors with WC Subscriptions installed | type: bug priority: high component: wc subscriptions integration | ### Describe the bug
If WC Subscriptions detects an old version of WooCommerce's database it deactivates itself. If WCPay
is also in use this can cause Fatal Errors.
```
Fatal error: Uncaught Error: Class "WC_Subscriptions_Cart" not found
in woocommerce-payments/includes/compat/subscriptions/trait-wc-payments-subscriptions-utilities.php on line 87
```
```
Fatal error: Uncaught Error: Class 'WC_Subscriptions_Product' not found in woocommerce-
payments/includes/subscriptions/class-wc-payments-product-service.php:192
```
### To Reproduce
Fatal errors can be triggered by manually updating $wc_minimum_required_version in woocommerce-subscriptions/woocommerce-subscriptions.php to something above the current WC version eg 7.5
### Additional context
<!-- Any additional context or details you think might be helpful. -->
<!-- Ticket numbers/links, plugin versions, system statuses etc. -->
5924491-zd-woothemes
5921622-zd-woothemes
https://wordpress.org/support/topic/fatal-error-4375/ | 1.0 | Fatal errors with WC Subscriptions installed - ### Describe the bug
If WC Subscriptions detects an old version of WooCommerce's database it deactivates itself. If WCPay
is also in use this can cause Fatal Errors.
```
Fatal error: Uncaught Error: Class "WC_Subscriptions_Cart" not found
in woocommerce-payments/includes/compat/subscriptions/trait-wc-payments-subscriptions-utilities.php on line 87
```
```
Fatal error: Uncaught Error: Class 'WC_Subscriptions_Product' not found in woocommerce-
payments/includes/subscriptions/class-wc-payments-product-service.php:192
```
### To Reproduce
Fatal errors can be triggered by manually updating $wc_minimum_required_version in woocommerce-subscriptions/woocommerce-subscriptions.php to something above the current WC version eg 7.5
### Additional context
<!-- Any additional context or details you think might be helpful. -->
<!-- Ticket numbers/links, plugin versions, system statuses etc. -->
5924491-zd-woothemes
5921622-zd-woothemes
https://wordpress.org/support/topic/fatal-error-4375/ | priority | fatal errors with wc subscriptions installed describe the bug if wc subscriptions detects an old version of woocommerce s database it deactivates itself if wcpay is also in use this can cause fatal errors fatal error uncaught error class wc subscriptions cart not found in woocommerce payments includes compat subscriptions trait wc payments subscriptions utilities php on line fatal error uncaught error class wc subscriptions product not found in woocommerce payments includes subscriptions class wc payments product service php to reproduce fatal errors can be triggered by manually updating wc minimum required version in woocommerce subscriptions woocommerce subscriptions php to something above the current wc version eg additional context zd woothemes zd woothemes | 1 |
274,385 | 8,560,378,001 | IssuesEvent | 2018-11-09 00:56:26 | OpenSRP/opensrp-server-web | https://api.github.com/repos/OpenSRP/opensrp-server-web | opened | Create a postgres data warehouse database on Amazon RDS | Data Warehouse Priority: High | We need a data storage mechanism for the data warehouse. The demo, field testing and pilot system will need to be run by Ona. This ticket focuses on setting up an Amazon RDS instance for Reveal's data warehouse.
- [ ] Create an RDS instance on AWS and add it to the appropriate setup scripts. | 1.0 | Create a postgres data warehouse database on Amazon RDS - We need a data storage mechanism for the data warehouse. The demo, field testing and pilot system will need to be run by Ona. This ticket focuses on setting up an Amazon RDS instance for Reveal's data warehouse.
- [ ] Create an RDS instance on AWS and add it to the appropriate setup scripts. | priority | create a postgres data warehouse database on amazon rds we need a data storage mechanism for the data warehouse the demo field testing and pilot system will need to be run by ona this ticket focuses on setting up an amazon rds instance for reveal s data warehouse create an rds instance on aws and add it to the appropriate setup scripts | 1 |
592,764 | 17,929,692,004 | IssuesEvent | 2021-09-10 07:32:38 | dodona-edu/dodona | https://api.github.com/repos/dodona-edu/dodona | closed | Use stepper component in evaluation wizard | feature high priority | 
This is the first screen a user gets when starting an evaluation and this might be optimized a bit. A user has to set a deadline here and indicate if he wants to enable grades. The rest is just extra information (eg, the series overview) but this isn't super clear.
- include something about grades in the info box
- make it clearer that the user has to pick a deadline and can enable grades. Some ideas which may or may not work:
- slightly change the wording, eg Deadline -> Pick a deadline
- add a "Settings" header to separate the top part of the page from the bottom part
- Add some text to the top of the series overview that makes it clear that the listing of exercises is just informational
- Maybe also tweak the card title to make it more clear that we started some sort of wizard, eg "Configuring a series evaluation"
_Originally posted by @bmesuere in https://github.com/dodona-edu/dodona/issues/2462#issuecomment-785808920_
_The score-related options have been removed from this page, but we can still check if we can apply some of the other suggestions_. | 1.0 | Use stepper component in evaluation wizard - 
This is the first screen a user gets when starting an evaluation and this might be optimized a bit. A user has to set a deadline here and indicate if he wants to enable grades. The rest is just extra information (eg, the series overview) but this isn't super clear.
- include something about grades in the info box
- make it clearer that the user has to pick a deadline and can enable grades. Some ideas which may or may not work:
- slightly change the wording, eg Deadline -> Pick a deadline
- add a "Settings" header to separate the top part of the page from the bottom part
- Add some text to the top of the series overview that makes it clear that the listing of exercises is just informational
- Maybe also tweak the card title to make it more clear that we started some sort of wizard, eg "Configuring a series evaluation"
_Originally posted by @bmesuere in https://github.com/dodona-edu/dodona/issues/2462#issuecomment-785808920_
_The score-related options have been removed from this page, but we can still check if we can apply some of the other suggestions_. | priority | use stepper component in evaluation wizard this is the first screen a user gets when starting an evaluation and this might be optimized a bit a user has to set a deadline here and indicate if he wants to enable grades the rest is just extra information eg the series overview but this isn t super clear include something about grades in the info box make it clearer that the user has to pick a deadline and can enable grades some ideas which may or may not work slightly change the wording eg deadline pick a deadline add a settings header to separate the top part of the page from the bottom part add some text to the top of the series overview that makes it clear that the listing of exercises is just informational maybe also tweak the card title to make it more clear that we started some sort of wizard eg configuring a series evaluation originally posted by bmesuere in the score related options have been removed from this page but we can still check if we can apply some of the other suggestions | 1 |
100,056 | 4,075,913,597 | IssuesEvent | 2016-05-29 14:57:40 | moria0525/MadeinJLM-students | https://api.github.com/repos/moria0525/MadeinJLM-students | closed | Profile development | 1 - Ready Dor-H moria0525 Points: 8 Priority: Very High sh00ki |
## User Story Template
- As a user
- I want a profile with all my information
- So that companies will know more about me
## Bug Template
#### Expected behavior
#### Actual behavior
#### Steps to reproduce the behavior
<!---
@huboard:{"order":27.0,"milestone_order":45,"custom_state":""}
-->
| 1.0 | Profile development -
## User Story Template
- As a user
- I want a profile with all my information
- So that companies will know more about me
## Bug Template
#### Expected behavior
#### Actual behavior
#### Steps to reproduce the behavior
<!---
@huboard:{"order":27.0,"milestone_order":45,"custom_state":""}
-->
| priority | profile development user story template as a user i want a profile with all my information so that companies will know more about me bug template expected behavior actual behavior steps to reproduce the behavior huboard order milestone order custom state | 1 |
779,381 | 27,351,061,717 | IssuesEvent | 2023-02-27 09:35:34 | orden-gg/fireball | https://api.github.com/repos/orden-gg/fireball | opened | Move ClientContext to store | enhancement priority: high | Right now `ClientContext` has a lot of complexity and unclearness, would be good to move it into a store to separate business logic and reduce complexity. | 1.0 | Move ClientContext to store - Right now `ClientContext` has a lot of complexity and unclearness, would be good to move it into a store to separate business logic and reduce complexity. | priority | move clientcontext to store right now clientcontext has a lot of complexity and unclearness would be good to move it into a store to separate business logic and reduce complexity | 1 |
403,333 | 11,839,349,720 | IssuesEvent | 2020-03-23 17:01:08 | ansible/galaxy_ng | https://api.github.com/repos/ansible/galaxy_ng | opened | Importer: build image using pulp_container | area/importer priority/high status/new type/enhancement | Subtask of #6
When running the import process outside of c.rh.c, use `pulp_container` to integrate with Buildah and create the needed container image. | 1.0 | Importer: build image using pulp_container - Subtask of #6
When running the import process outside of c.rh.c, use `pulp_container` to integrate with Buildah and create the needed container image. | priority | importer build image using pulp container subtask of when running the import process outside of c rh c use pulp container to integrate with buildah and create the needed container image | 1 |
405,582 | 11,879,175,695 | IssuesEvent | 2020-03-27 08:11:43 | Baschdl/match4healthcare | https://api.github.com/repos/Baschdl/match4healthcare | opened | Log-In Form: Add note concerning username and e-mail delivery time | dev-ready high-priority | Bitte Hinweis am Nutzerformular einfügen:
> Deine e-Mail Adresse ist gleichzeitig dein Benutzername. In Einzelfällen kann es etwas dauern, bis die Bestätigungsmail dich erreicht, da einige e-Mail Anbieter diese nicht sofort durchlassen
Reason:
https://match4healthcare.slack.com/archives/C010K4T5YGZ/p1585245475001000

(Plus weitere Berichte) | 1.0 | Log-In Form: Add note concerning username and e-mail delivery time - Bitte Hinweis am Nutzerformular einfügen:
> Deine e-Mail Adresse ist gleichzeitig dein Benutzername. In Einzelfällen kann es etwas dauern, bis die Bestätigungsmail dich erreicht, da einige e-Mail Anbieter diese nicht sofort durchlassen
Reason:
https://match4healthcare.slack.com/archives/C010K4T5YGZ/p1585245475001000

(Plus weitere Berichte) | priority | log in form add note concerning username and e mail delivery time bitte hinweis am nutzerformular einfügen deine e mail adresse ist gleichzeitig dein benutzername in einzelfällen kann es etwas dauern bis die bestätigungsmail dich erreicht da einige e mail anbieter diese nicht sofort durchlassen reason plus weitere berichte | 1 |
494,378 | 14,256,423,249 | IssuesEvent | 2020-11-20 00:58:38 | phetsims/chipper | https://api.github.com/repos/phetsims/chipper | reopened | [mac specific] No Xcode or CLT version detected | meeting:developer priority:2-high type:bug | Related to https://github.com/phetsims/chipper/issues/990
After updating to Node 14, then cleaning node_modules and running `npm install`, I see these errors:
```
~/apache-document-root/main/energy-skate-park$ cd ../chipper/
~/apache-document-root/main/chipper$ rm -rf node_modules/
~/apache-document-root/main/chipper$ npm install
> fsevents@1.2.13 install /Users/samreid/apache-document-root/main/chipper/node_modules/watchpack-chokidar2/node_modules/fsevents
> node install.js
No receipt for 'com.apple.pkg.CLTools_Executables' found at '/'.
No receipt for 'com.apple.pkg.DeveloperToolsCLILeo' found at '/'.
No receipt for 'com.apple.pkg.DeveloperToolsCLI' found at '/'.
gyp: No Xcode or CLT version detected!
gyp ERR! configure error
gyp ERR! stack Error: `gyp` failed with exit code: 1
gyp ERR! stack at ChildProcess.onCpExit (/Users/samreid/.npm-global/lib/node_modules/npm/node_modules/node-gyp/lib/configure.js:351:16)
gyp ERR! stack at ChildProcess.emit (events.js:315:20)
gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:277:12)
gyp ERR! System Darwin 19.6.0
gyp ERR! command "/usr/local/bin/node" "/Users/samreid/.npm-global/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /Users/samreid/apache-document-root/main/chipper/node_modules/watchpack-chokidar2/node_modules/fsevents
gyp ERR! node -v v14.15.0
gyp ERR! node-gyp -v v5.1.0
gyp ERR! not ok
> fsevents@1.2.13 install /Users/samreid/apache-document-root/main/chipper/node_modules/webpack-dev-server/node_modules/fsevents
> node install.js
No receipt for 'com.apple.pkg.CLTools_Executables' found at '/'.
No receipt for 'com.apple.pkg.DeveloperToolsCLILeo' found at '/'.
No receipt for 'com.apple.pkg.DeveloperToolsCLI' found at '/'.
gyp: No Xcode or CLT version detected!
gyp ERR! configure error
gyp ERR! stack Error: `gyp` failed with exit code: 1
gyp ERR! stack at ChildProcess.onCpExit (/Users/samreid/.npm-global/lib/node_modules/npm/node_modules/node-gyp/lib/configure.js:351:16)
gyp ERR! stack at ChildProcess.emit (events.js:315:20)
gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:277:12)
gyp ERR! System Darwin 19.6.0
gyp ERR! command "/usr/local/bin/node" "/Users/samreid/.npm-global/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /Users/samreid/apache-document-root/main/chipper/node_modules/webpack-dev-server/node_modules/fsevents
gyp ERR! node -v v14.15.0
gyp ERR! node-gyp -v v5.1.0
gyp ERR! not ok
> puppeteer@2.1.1 install /Users/samreid/apache-document-root/main/chipper/node_modules/puppeteer
> node install.js
Downloading Chromium r722234 - 116.4 Mb [====================] 100% 0.0s
Chromium downloaded to /Users/samreid/apache-document-root/main/chipper/node_modules/puppeteer/.local-chromium/mac-722234
added 1053 packages from 1388 contributors and audited 1055 packages in 23.514s
44 packages are looking for funding
run `npm fund` for details
found 4 vulnerabilities (1 low, 1 moderate, 2 high)
run `npm audit fix` to fix them, or `npm audit` for details
~/apache-document-root/main/chipper$ cd ../energy-skate-park
~/apache-document-root/main/energy-skate-park$ grunt
(node:215) Warning: Accessing non-existent property 'padLevels' of module exports inside circular dependency
(Use `node --trace-warnings ...` to show where the warning was created)
Running "lint-all" task
Running "report-media" task
Running "clean" task
Running "build" task
Building runnable repository (energy-skate-park, brands: phet, phet-io)
Building brand: phet
>> Webpack build complete: 2778ms
>> Unused images module: energy-skate-park/images/skater-icon_png.js
>> Production minification complete: 19184ms (2172328 bytes)
>> Debug minification complete: 0ms (6656430 bytes)
Building brand: phet-io
>> Webpack build complete: 2079ms
>> Unused images module: energy-skate-park/images/skater-icon_png.js
>> Production minification complete: 15328ms (2186122 bytes)
>> Debug minification complete: 16000ms (2453198 bytes)
>> No client guides found at ../phet-io-client-guides/energy-skate-park/, no guides being built.
Done.
```
Assigning to @ariel-phet to prioritize and delegate. | 1.0 | [mac specific] No Xcode or CLT version detected - Related to https://github.com/phetsims/chipper/issues/990
After updating to Node 14, then cleaning node_modules and running `npm install`, I see these errors:
```
~/apache-document-root/main/energy-skate-park$ cd ../chipper/
~/apache-document-root/main/chipper$ rm -rf node_modules/
~/apache-document-root/main/chipper$ npm install
> fsevents@1.2.13 install /Users/samreid/apache-document-root/main/chipper/node_modules/watchpack-chokidar2/node_modules/fsevents
> node install.js
No receipt for 'com.apple.pkg.CLTools_Executables' found at '/'.
No receipt for 'com.apple.pkg.DeveloperToolsCLILeo' found at '/'.
No receipt for 'com.apple.pkg.DeveloperToolsCLI' found at '/'.
gyp: No Xcode or CLT version detected!
gyp ERR! configure error
gyp ERR! stack Error: `gyp` failed with exit code: 1
gyp ERR! stack at ChildProcess.onCpExit (/Users/samreid/.npm-global/lib/node_modules/npm/node_modules/node-gyp/lib/configure.js:351:16)
gyp ERR! stack at ChildProcess.emit (events.js:315:20)
gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:277:12)
gyp ERR! System Darwin 19.6.0
gyp ERR! command "/usr/local/bin/node" "/Users/samreid/.npm-global/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /Users/samreid/apache-document-root/main/chipper/node_modules/watchpack-chokidar2/node_modules/fsevents
gyp ERR! node -v v14.15.0
gyp ERR! node-gyp -v v5.1.0
gyp ERR! not ok
> fsevents@1.2.13 install /Users/samreid/apache-document-root/main/chipper/node_modules/webpack-dev-server/node_modules/fsevents
> node install.js
No receipt for 'com.apple.pkg.CLTools_Executables' found at '/'.
No receipt for 'com.apple.pkg.DeveloperToolsCLILeo' found at '/'.
No receipt for 'com.apple.pkg.DeveloperToolsCLI' found at '/'.
gyp: No Xcode or CLT version detected!
gyp ERR! configure error
gyp ERR! stack Error: `gyp` failed with exit code: 1
gyp ERR! stack at ChildProcess.onCpExit (/Users/samreid/.npm-global/lib/node_modules/npm/node_modules/node-gyp/lib/configure.js:351:16)
gyp ERR! stack at ChildProcess.emit (events.js:315:20)
gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:277:12)
gyp ERR! System Darwin 19.6.0
gyp ERR! command "/usr/local/bin/node" "/Users/samreid/.npm-global/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /Users/samreid/apache-document-root/main/chipper/node_modules/webpack-dev-server/node_modules/fsevents
gyp ERR! node -v v14.15.0
gyp ERR! node-gyp -v v5.1.0
gyp ERR! not ok
> puppeteer@2.1.1 install /Users/samreid/apache-document-root/main/chipper/node_modules/puppeteer
> node install.js
Downloading Chromium r722234 - 116.4 Mb [====================] 100% 0.0s
Chromium downloaded to /Users/samreid/apache-document-root/main/chipper/node_modules/puppeteer/.local-chromium/mac-722234
added 1053 packages from 1388 contributors and audited 1055 packages in 23.514s
44 packages are looking for funding
run `npm fund` for details
found 4 vulnerabilities (1 low, 1 moderate, 2 high)
run `npm audit fix` to fix them, or `npm audit` for details
~/apache-document-root/main/chipper$ cd ../energy-skate-park
~/apache-document-root/main/energy-skate-park$ grunt
(node:215) Warning: Accessing non-existent property 'padLevels' of module exports inside circular dependency
(Use `node --trace-warnings ...` to show where the warning was created)
Running "lint-all" task
Running "report-media" task
Running "clean" task
Running "build" task
Building runnable repository (energy-skate-park, brands: phet, phet-io)
Building brand: phet
>> Webpack build complete: 2778ms
>> Unused images module: energy-skate-park/images/skater-icon_png.js
>> Production minification complete: 19184ms (2172328 bytes)
>> Debug minification complete: 0ms (6656430 bytes)
Building brand: phet-io
>> Webpack build complete: 2079ms
>> Unused images module: energy-skate-park/images/skater-icon_png.js
>> Production minification complete: 15328ms (2186122 bytes)
>> Debug minification complete: 16000ms (2453198 bytes)
>> No client guides found at ../phet-io-client-guides/energy-skate-park/, no guides being built.
Done.
```
Assigning to @ariel-phet to prioritize and delegate. | priority | no xcode or clt version detected related to after updating to node then cleaning node modules and running npm install i see these errors apache document root main energy skate park cd chipper apache document root main chipper rm rf node modules apache document root main chipper npm install fsevents install users samreid apache document root main chipper node modules watchpack node modules fsevents node install js no receipt for com apple pkg cltools executables found at no receipt for com apple pkg developertoolsclileo found at no receipt for com apple pkg developertoolscli found at gyp no xcode or clt version detected gyp err configure error gyp err stack error gyp failed with exit code gyp err stack at childprocess oncpexit users samreid npm global lib node modules npm node modules node gyp lib configure js gyp err stack at childprocess emit events js gyp err stack at process childprocess handle onexit internal child process js gyp err system darwin gyp err command usr local bin node users samreid npm global lib node modules npm node modules node gyp bin node gyp js rebuild gyp err cwd users samreid apache document root main chipper node modules watchpack node modules fsevents gyp err node v gyp err node gyp v gyp err not ok fsevents install users samreid apache document root main chipper node modules webpack dev server node modules fsevents node install js no receipt for com apple pkg cltools executables found at no receipt for com apple pkg developertoolsclileo found at no receipt for com apple pkg developertoolscli found at gyp no xcode or clt version detected gyp err configure error gyp err stack error gyp failed with exit code gyp err stack at childprocess oncpexit users samreid npm global lib node modules npm node modules node gyp lib configure js gyp err stack at childprocess emit events js gyp err stack at process childprocess handle onexit internal child process js gyp err system darwin gyp err command usr local bin node users samreid npm global lib node modules npm node modules node gyp bin node gyp js rebuild gyp err cwd users samreid apache document root main chipper node modules webpack dev server node modules fsevents gyp err node v gyp err node gyp v gyp err not ok puppeteer install users samreid apache document root main chipper node modules puppeteer node install js downloading chromium mb chromium downloaded to users samreid apache document root main chipper node modules puppeteer local chromium mac added packages from contributors and audited packages in packages are looking for funding run npm fund for details found vulnerabilities low moderate high run npm audit fix to fix them or npm audit for details apache document root main chipper cd energy skate park apache document root main energy skate park grunt node warning accessing non existent property padlevels of module exports inside circular dependency use node trace warnings to show where the warning was created running lint all task running report media task running clean task running build task building runnable repository energy skate park brands phet phet io building brand phet webpack build complete unused images module energy skate park images skater icon png js production minification complete bytes debug minification complete bytes building brand phet io webpack build complete unused images module energy skate park images skater icon png js production minification complete bytes debug minification complete bytes no client guides found at phet io client guides energy skate park no guides being built done assigning to ariel phet to prioritize and delegate | 1 |
612,530 | 19,024,640,556 | IssuesEvent | 2021-11-24 00:52:28 | microsoft/fluentui | https://api.github.com/repos/microsoft/fluentui | closed | Label needs ARIA hidden on red asterisk | Priority 1: High Component: Label Status: In PR Fluent UI vNext Fit and Finish | This was originally opened by @yvonne-chien-ms in #20139. Splitting into separate issues for each bug logged in the original issue.
### ARIA hidden on red asterisk
#### Current
No `aria-hidden` on red asterisk, which means screen reader will read out asterisk as "star".
#### Requested update
Use `aria-hidden="true"` on red asterisk, then put the `required` attribute on the Input/Combobox/etc itself to programmatically mark the field as required. (This may warrant a discussion.)
#### Priority
Normal
| 1.0 | Label needs ARIA hidden on red asterisk - This was originally opened by @yvonne-chien-ms in #20139. Splitting into separate issues for each bug logged in the original issue.
### ARIA hidden on red asterisk
#### Current
No `aria-hidden` on red asterisk, which means screen reader will read out asterisk as "star".
#### Requested update
Use `aria-hidden="true"` on red asterisk, then put the `required` attribute on the Input/Combobox/etc itself to programmatically mark the field as required. (This may warrant a discussion.)
#### Priority
Normal
| priority | label needs aria hidden on red asterisk this was originally opened by yvonne chien ms in splitting into separate issues for each bug logged in the original issue aria hidden on red asterisk current no aria hidden on red asterisk which means screen reader will read out asterisk as star requested update use aria hidden true on red asterisk then put the required attribute on the input combobox etc itself to programmatically mark the field as required this may warrant a discussion priority normal | 1 |
135,132 | 5,242,727,798 | IssuesEvent | 2017-01-31 18:49:55 | nextgis/nextgisweb_compulink | https://api.github.com/repos/nextgis/nextgisweb_compulink | closed | Название полей в таблице ОС | enhancement High Priority | Добавить к наименованию двух полей «Начало/Окончание сдачи заказчику»
следующий текст «в эксплуатацию» в нижней панели:

| 1.0 | Название полей в таблице ОС - Добавить к наименованию двух полей «Начало/Окончание сдачи заказчику»
следующий текст «в эксплуатацию» в нижней панели:

| priority | название полей в таблице ос добавить к наименованию двух полей «начало окончание сдачи заказчику» следующий текст «в эксплуатацию» в нижней панели | 1 |
343,004 | 10,324,336,702 | IssuesEvent | 2019-09-01 08:12:55 | OpenSRP/opensrp-client-chw | https://api.github.com/repos/OpenSRP/opensrp-client-chw | closed | Change all French translations of "PCV" vaccine to "Pneumo" | enhancement high priority | We need to update the French translation of the PCV vaccine. The French translation should be changed to `Pneumo` in all instances in the French version of all the WCARO apps. | 1.0 | Change all French translations of "PCV" vaccine to "Pneumo" - We need to update the French translation of the PCV vaccine. The French translation should be changed to `Pneumo` in all instances in the French version of all the WCARO apps. | priority | change all french translations of pcv vaccine to pneumo we need to update the french translation of the pcv vaccine the french translation should be changed to pneumo in all instances in the french version of all the wcaro apps | 1 |
136,489 | 5,283,673,299 | IssuesEvent | 2017-02-07 21:59:39 | DCLP/dclpxsltbox | https://api.github.com/repos/DCLP/dclpxsltbox | closed | inconsistencies in biblScope markup not handled by XSLT | bug priority: high tweak XSLT | found by @leoba and split out from #146:
In Bibl files, if page numbers for journal articles are only in @from and @to values on biblScope and not in the content of biblScope, they do not display. Compare:
https://github.com/DCLP/idp.data/blob/master/Biblio/81/80929.xml#L7 (HTML: https://github.com/DCLP/dclpxsltbox/blob/master/output/dclp/65/64713.html#L46)
http://litpap.info/dclp/64713
with
https://github.com/DCLP/idp.data/blob/master/Biblio/66/65876.xml#L13
(HTML: https://github.com/DCLP/dclpxsltbox/blob/master/output/dclp/101/100149.html#L46)
http://litpap.info/dclp/100149
Is this an XSLT issue or an XML issue?
| 1.0 | inconsistencies in biblScope markup not handled by XSLT - found by @leoba and split out from #146:
In Bibl files, if page numbers for journal articles are only in @from and @to values on biblScope and not in the content of biblScope, they do not display. Compare:
https://github.com/DCLP/idp.data/blob/master/Biblio/81/80929.xml#L7 (HTML: https://github.com/DCLP/dclpxsltbox/blob/master/output/dclp/65/64713.html#L46)
http://litpap.info/dclp/64713
with
https://github.com/DCLP/idp.data/blob/master/Biblio/66/65876.xml#L13
(HTML: https://github.com/DCLP/dclpxsltbox/blob/master/output/dclp/101/100149.html#L46)
http://litpap.info/dclp/100149
Is this an XSLT issue or an XML issue?
| priority | inconsistencies in biblscope markup not handled by xslt found by leoba and split out from in bibl files if page numbers for journal articles are only in from and to values on biblscope and not in the content of biblscope they do not display compare html with html is this an xslt issue or an xml issue | 1 |
391,265 | 11,571,547,389 | IssuesEvent | 2020-02-20 21:48:07 | googlefonts/typography | https://api.github.com/repos/googlefonts/typography | closed | Get Google Analytics tracking code | high priority | Not sure who has to create this, but guessing we'll want to track it :) | 1.0 | Get Google Analytics tracking code - Not sure who has to create this, but guessing we'll want to track it :) | priority | get google analytics tracking code not sure who has to create this but guessing we ll want to track it | 1 |
280,642 | 8,684,407,629 | IssuesEvent | 2018-12-03 02:06:23 | AlexRaubach/ListFortress | https://api.github.com/repos/AlexRaubach/ListFortress | closed | Import Tournaments from Cryodex and Tabletop.to | High Priority enhancement | Need to build a tournament and participants from the incoming json. | 1.0 | Import Tournaments from Cryodex and Tabletop.to - Need to build a tournament and participants from the incoming json. | priority | import tournaments from cryodex and tabletop to need to build a tournament and participants from the incoming json | 1 |
534,376 | 15,615,370,887 | IssuesEvent | 2021-03-19 19:05:18 | ucb-rit/coldfront | https://api.github.com/repos/ucb-rit/coldfront | closed | Improvements to Home Pages | high priority |
Lets add these three buttons to the user HOME pages:
1) Review and Sign Cluster Access Agreement, keep this persistent.
Show the timestamp of when the user agreement has been signed. Both on the home page and also under the user profile page.
2) Join a Project.
(We will revisit this later for further enhancements) | 1.0 | Improvements to Home Pages -
Lets add these three buttons to the user HOME pages:
1) Review and Sign Cluster Access Agreement, keep this persistent.
Show the timestamp of when the user agreement has been signed. Both on the home page and also under the user profile page.
2) Join a Project.
(We will revisit this later for further enhancements) | priority | improvements to home pages lets add these three buttons to the user home pages review and sign cluster access agreement keep this persistent show the timestamp of when the user agreement has been signed both on the home page and also under the user profile page join a project we will revisit this later for further enhancements | 1 |
203,379 | 7,060,602,698 | IssuesEvent | 2018-01-05 09:32:34 | wso2-incubator/testgrid | https://api.github.com/repos/wso2-incubator/testgrid | closed | PROD environment dasboard points to dev api enpoints | Priority/Highest Severity/Critical Type/Bug | **Description:**
dashboard always calls API endpoints in dev environment. PROD dashboard should point to PROD API endpoints.
| 1.0 | PROD environment dasboard points to dev api enpoints - **Description:**
dashboard always calls API endpoints in dev environment. PROD dashboard should point to PROD API endpoints.
| priority | prod environment dasboard points to dev api enpoints description dashboard always calls api endpoints in dev environment prod dashboard should point to prod api endpoints | 1 |
137,273 | 5,301,278,874 | IssuesEvent | 2017-02-10 09:03:54 | geosolutions-it/MapStore2 | https://api.github.com/repos/geosolutions-it/MapStore2 | closed | Upgrade documentation to Jekyll | in progress Priority: High task | Documentation folder needs to be updated and upgraded to use Jekyll as documentation generator | 1.0 | Upgrade documentation to Jekyll - Documentation folder needs to be updated and upgraded to use Jekyll as documentation generator | priority | upgrade documentation to jekyll documentation folder needs to be updated and upgraded to use jekyll as documentation generator | 1 |
656,630 | 21,769,600,087 | IssuesEvent | 2022-05-13 07:43:50 | woocommerce/woocommerce-ios | https://api.github.com/repos/woocommerce/woocommerce-ios | closed | [Coupons] - Can't search for subcategories | type: bug priority: high feature: coupons | **Describe the bug**
On Edit Product Categories, looks like we can't search for subcategories, it will return results not found.
**To Reproduce**
Steps to reproduce the behavior:
1. Turn on Coupon Management experimental feature
2. Select a coupon
3. Edit the coupon code and go to Edit Product Categories
4. On search, search for a subcategory
5. See that it will show results not found
**Screenshots**
https://user-images.githubusercontent.com/17252150/168043287-b017cc32-346f-49c1-825e-5f122009317c.MP4
**Expected behavior**
We should be able to search for both categories and subcategories using the search bar
**Mobile Environment**
Device: iPhone 13
iOS version: 15.3.1
WooCommerce iOS version: AppCenter build pr6809-84ae456
| 1.0 | [Coupons] - Can't search for subcategories - **Describe the bug**
On Edit Product Categories, looks like we can't search for subcategories, it will return results not found.
**To Reproduce**
Steps to reproduce the behavior:
1. Turn on Coupon Management experimental feature
2. Select a coupon
3. Edit the coupon code and go to Edit Product Categories
4. On search, search for a subcategory
5. See that it will show results not found
**Screenshots**
https://user-images.githubusercontent.com/17252150/168043287-b017cc32-346f-49c1-825e-5f122009317c.MP4
**Expected behavior**
We should be able to search for both categories and subcategories using the search bar
**Mobile Environment**
Device: iPhone 13
iOS version: 15.3.1
WooCommerce iOS version: AppCenter build pr6809-84ae456
| priority | can t search for subcategories describe the bug on edit product categories looks like we can t search for subcategories it will return results not found to reproduce steps to reproduce the behavior turn on coupon management experimental feature select a coupon edit the coupon code and go to edit product categories on search search for a subcategory see that it will show results not found screenshots expected behavior we should be able to search for both categories and subcategories using the search bar mobile environment device iphone ios version woocommerce ios version appcenter build | 1 |
721,634 | 24,833,064,682 | IssuesEvent | 2022-10-26 06:23:09 | asobase0504/Team3 | https://api.github.com/repos/asobase0504/Team3 | closed | 操作要求ギミック基盤の作成 | program HighPriority | ### 必要Issues
### 概要
操作要求ギミックの基盤を作成
### 詳細
取り敢えず思いついた部分だけ
必要関数
- `継承予定の処理が求める操作方法(A連打させる。スティックの回転を要求する処理などを書く場所)`
- `完了条件(一定回数の連打だったり、スティックの回転時間だったり)`
- `完了したかどうかを知らせる`
必要変数
- `操作要求ギミックの中心地`
- `操作要求ギミック`
- `継承予定のギミックを列挙型で保存(自分が何のギミックかを知るために)`
- `完了したか否かのbool型`
| 1.0 | 操作要求ギミック基盤の作成 - ### 必要Issues
### 概要
操作要求ギミックの基盤を作成
### 詳細
取り敢えず思いついた部分だけ
必要関数
- `継承予定の処理が求める操作方法(A連打させる。スティックの回転を要求する処理などを書く場所)`
- `完了条件(一定回数の連打だったり、スティックの回転時間だったり)`
- `完了したかどうかを知らせる`
必要変数
- `操作要求ギミックの中心地`
- `操作要求ギミック`
- `継承予定のギミックを列挙型で保存(自分が何のギミックかを知るために)`
- `完了したか否かのbool型`
| priority | 操作要求ギミック基盤の作成 必要issues 概要 操作要求ギミックの基盤を作成 詳細 取り敢えず思いついた部分だけ 必要関数 継承予定の処理が求める操作方法 a連打させる。スティックの回転を要求する処理などを書く場所 完了条件 一定回数の連打だったり、スティックの回転時間だったり 完了したかどうかを知らせる 必要変数 操作要求ギミックの中心地 操作要求ギミック 継承予定のギミックを列挙型で保存 自分が何のギミックかを知るために 完了したか否かのbool型 | 1 |
149,687 | 5,723,697,633 | IssuesEvent | 2017-04-20 12:56:32 | AtChem/AtChem | https://api.github.com/repos/AtChem/AtChem | closed | Missing output given single species of interest | bug high priority | As mentioned in #34 :
lossRates.output and productionRates.output do not have any output if only one species is in lossRatesOutput.config and productionRatesOutput.config | 1.0 | Missing output given single species of interest - As mentioned in #34 :
lossRates.output and productionRates.output do not have any output if only one species is in lossRatesOutput.config and productionRatesOutput.config | priority | missing output given single species of interest as mentioned in lossrates output and productionrates output do not have any output if only one species is in lossratesoutput config and productionratesoutput config | 1 |
240,820 | 7,806,172,138 | IssuesEvent | 2018-06-11 13:20:59 | Coderockr/vitrine-social | https://api.github.com/repos/Coderockr/vitrine-social | closed | Pesquisa de Necessidades | Category: Backend Level: Medium Priority: Highest Stage: Review Type: New feature | ## Search [/search?text={text}&category={categories}&org={org}&page={page}]
### Fazer busca [GET]
+ Parameters
+ text: texto a ser pesquisado (string, optional) - Texto a ser pesquidado
+ categories: 1,2,3 (string, optional) - Ids das categorias
+ org: 1 (number, optional) - Id da entidade para filtrar
+ page: 1 (number, optional) - Paginação (vai trazer sempre 10 por página)
+ Request Busca das necessidades
+ Headers
Accept: application/json
Content-Type: application/json
+ Response 200 (application/json)
+ Attributes (array[BaseNeed])
| 1.0 | Pesquisa de Necessidades - ## Search [/search?text={text}&category={categories}&org={org}&page={page}]
### Fazer busca [GET]
+ Parameters
+ text: texto a ser pesquisado (string, optional) - Texto a ser pesquidado
+ categories: 1,2,3 (string, optional) - Ids das categorias
+ org: 1 (number, optional) - Id da entidade para filtrar
+ page: 1 (number, optional) - Paginação (vai trazer sempre 10 por página)
+ Request Busca das necessidades
+ Headers
Accept: application/json
Content-Type: application/json
+ Response 200 (application/json)
+ Attributes (array[BaseNeed])
| priority | pesquisa de necessidades search fazer busca parameters text texto a ser pesquisado string optional texto a ser pesquidado categories string optional ids das categorias org number optional id da entidade para filtrar page number optional paginação vai trazer sempre por página request busca das necessidades headers accept application json content type application json response application json attributes array | 1 |
160,414 | 6,088,874,374 | IssuesEvent | 2017-06-19 01:40:00 | elsevier-core-engineering/replicator | https://api.github.com/repos/elsevier-core-engineering/replicator | opened | AWS API Calls Are Forbidden When Replicator Is Running Within A Nomad Job | bug high-priority | **Description**
The `client` package must make a number of AWS API calls to effect scaling operations. To authorize these API calls, Replicator assumes the presence of an IAM instance profile.
When Replicator is running within the context of a Nomad job, it cannot obtain credentials from the host IAM instance profile. We need to add configuration parameters that allow credentials to be passed in and in ideal circumstances, these will be populated from Vault.
**Steps to reproduce the issue:**
1. Run Replicator as a Nomad job
2. Trigger a scaling operation by introducing or removing load from the cluster.
3. Within the Replicator logs, observe the scaling operations fail due to 403 responses from the AWS API.
**Describe the results you expected:**
Replicator should be able to perform all scaling operations and functions when running as a Nomad job.
**Describe the results you received:**
Replicator is unable to take any scaling actions or perform ancillary AWS API calls due to missing authentication information.
**Additional information you deem important (e.g. issue happens only occasionally):**
**Replicator version:**
```
./replicator-local version
Replicator v0.0.2-dev
```
**Additional environment details:**
Example Nomad job file:
```
job "replicator" {
datacenters = ["opsshared"]
region = "us-east"
type = "service"
update {
stagger = "10s"
max_parallel = 1
}
group "replicator" {
count = 1
task "replicator" {
driver = "exec"
constraint {
attribute = "${attr.kernel.name}"
value = "linux"
}
config {
command = "replicator"
args = [
"agent",
"-aws-region=us-east-1",
"-consul-token=<REDACTED>",
"-cluster-autoscaling-group=container-agent-dev",
"-log-level=DEBUG",
"-cluster-max-size=3",
"-cluster-min-size=1",
"-cluster-scaling-enabled=true"
]
}
artifact {
source = "https://s3.amazonaws.com/elsevier-tio-rap-opsshared-111045819866/replicator/dev/replicator"
}
resources {
cpu = 250
memory = 60
network {
mbits = 5
}
}
}
}
}
```
**Debug log outputs from replicator:**
```
[DEBUG] core/cluster_scaling: scale in requests 0 has reached threshold 3
[DEBUG] client/nomad: Node ec22f0a7-0fb7-8b4f-e0d5-26a7be9c1833 will be excluded when calculating eligible worker pool nodes to be removed
[INFO] client/nomad: node ec2ae9a5-aa37-62fe-6dca-4c697eb0d691 has been placed in drain mode
[DEBUG] client/consul: the Consul session d5385afa-f947-cb67-a7e0-2880126b54ad has been renewed
[INFO] client/nomad: node ec2ae9a5-aa37-62fe-6dca-4c697eb0d691 has no active allocations
[INFO] core/runner: terminating AWS instance 10.169.23.122
[ERROR] core/runner: unable to successfully terminate AWS instance ec2ae9a5-aa37-62fe-6dca-4c697eb0d691: AccessDenied: User: arn:aws:sts::111045819866:assumed-role/container-agent-dev-us-east-1/i-0596863fe85b14189 is not authorized to perform: autoscaling:DetachInstances on resource: arn:aws:autoscaling:us-east-1:111045819866:autoScalingGroup:cb401494-f21b-493d-880d-1805f235ec54:autoScalingGroupName/container-agent-dev
status code: 403, request id: 17991057-5485-11e7-9aca-1f93cfe7bd7a
[DEBUG] client/consul: the Consul session d5385afa-f947-cb67-a7e0-2880126b54ad has been renewed
``` | 1.0 | AWS API Calls Are Forbidden When Replicator Is Running Within A Nomad Job - **Description**
The `client` package must make a number of AWS API calls to effect scaling operations. To authorize these API calls, Replicator assumes the presence of an IAM instance profile.
When Replicator is running within the context of a Nomad job, it cannot obtain credentials from the host IAM instance profile. We need to add configuration parameters that allow credentials to be passed in and in ideal circumstances, these will be populated from Vault.
**Steps to reproduce the issue:**
1. Run Replicator as a Nomad job
2. Trigger a scaling operation by introducing or removing load from the cluster.
3. Within the Replicator logs, observe the scaling operations fail due to 403 responses from the AWS API.
**Describe the results you expected:**
Replicator should be able to perform all scaling operations and functions when running as a Nomad job.
**Describe the results you received:**
Replicator is unable to take any scaling actions or perform ancillary AWS API calls due to missing authentication information.
**Additional information you deem important (e.g. issue happens only occasionally):**
**Replicator version:**
```
./replicator-local version
Replicator v0.0.2-dev
```
**Additional environment details:**
Example Nomad job file:
```
job "replicator" {
datacenters = ["opsshared"]
region = "us-east"
type = "service"
update {
stagger = "10s"
max_parallel = 1
}
group "replicator" {
count = 1
task "replicator" {
driver = "exec"
constraint {
attribute = "${attr.kernel.name}"
value = "linux"
}
config {
command = "replicator"
args = [
"agent",
"-aws-region=us-east-1",
"-consul-token=<REDACTED>",
"-cluster-autoscaling-group=container-agent-dev",
"-log-level=DEBUG",
"-cluster-max-size=3",
"-cluster-min-size=1",
"-cluster-scaling-enabled=true"
]
}
artifact {
source = "https://s3.amazonaws.com/elsevier-tio-rap-opsshared-111045819866/replicator/dev/replicator"
}
resources {
cpu = 250
memory = 60
network {
mbits = 5
}
}
}
}
}
```
**Debug log outputs from replicator:**
```
[DEBUG] core/cluster_scaling: scale in requests 0 has reached threshold 3
[DEBUG] client/nomad: Node ec22f0a7-0fb7-8b4f-e0d5-26a7be9c1833 will be excluded when calculating eligible worker pool nodes to be removed
[INFO] client/nomad: node ec2ae9a5-aa37-62fe-6dca-4c697eb0d691 has been placed in drain mode
[DEBUG] client/consul: the Consul session d5385afa-f947-cb67-a7e0-2880126b54ad has been renewed
[INFO] client/nomad: node ec2ae9a5-aa37-62fe-6dca-4c697eb0d691 has no active allocations
[INFO] core/runner: terminating AWS instance 10.169.23.122
[ERROR] core/runner: unable to successfully terminate AWS instance ec2ae9a5-aa37-62fe-6dca-4c697eb0d691: AccessDenied: User: arn:aws:sts::111045819866:assumed-role/container-agent-dev-us-east-1/i-0596863fe85b14189 is not authorized to perform: autoscaling:DetachInstances on resource: arn:aws:autoscaling:us-east-1:111045819866:autoScalingGroup:cb401494-f21b-493d-880d-1805f235ec54:autoScalingGroupName/container-agent-dev
status code: 403, request id: 17991057-5485-11e7-9aca-1f93cfe7bd7a
[DEBUG] client/consul: the Consul session d5385afa-f947-cb67-a7e0-2880126b54ad has been renewed
``` | priority | aws api calls are forbidden when replicator is running within a nomad job description the client package must make a number of aws api calls to effect scaling operations to authorize these api calls replicator assumes the presence of an iam instance profile when replicator is running within the context of a nomad job it cannot obtain credentials from the host iam instance profile we need to add configuration parameters that allow credentials to be passed in and in ideal circumstances these will be populated from vault steps to reproduce the issue run replicator as a nomad job trigger a scaling operation by introducing or removing load from the cluster within the replicator logs observe the scaling operations fail due to responses from the aws api describe the results you expected replicator should be able to perform all scaling operations and functions when running as a nomad job describe the results you received replicator is unable to take any scaling actions or perform ancillary aws api calls due to missing authentication information additional information you deem important e g issue happens only occasionally replicator version replicator local version replicator dev additional environment details example nomad job file job replicator datacenters region us east type service update stagger max parallel group replicator count task replicator driver exec constraint attribute attr kernel name value linux config command replicator args agent aws region us east consul token cluster autoscaling group container agent dev log level debug cluster max size cluster min size cluster scaling enabled true artifact source resources cpu memory network mbits debug log outputs from replicator core cluster scaling scale in requests has reached threshold client nomad node will be excluded when calculating eligible worker pool nodes to be removed client nomad node has been placed in drain mode client consul the consul session has been renewed client nomad node has no active allocations core runner terminating aws instance core runner unable to successfully terminate aws instance accessdenied user arn aws sts assumed role container agent dev us east i is not authorized to perform autoscaling detachinstances on resource arn aws autoscaling us east autoscalinggroup autoscalinggroupname container agent dev status code request id client consul the consul session has been renewed | 1 |
618,122 | 19,425,616,784 | IssuesEvent | 2021-12-21 04:49:53 | NikolaiVChr/f16 | https://api.github.com/repos/NikolaiVChr/f16 | opened | New Radar System | BUG Enhancement Review or Check Semi-done High priority | **Bugs**
* #376
* [ ] DLINK symbol on non-dlink aircraft. Same MP person (but had switched aircraft to ATC)
* [ ] MFD OBS colors on b50
*
**Todo**
* [ ] Deploy
* [ ] GM tilt angles (needs serious thinking)
* [ ] Clicking A-G should set GM
* [ ] VSR switch speed at each bar instead of each frame | 1.0 | New Radar System - **Bugs**
* #376
* [ ] DLINK symbol on non-dlink aircraft. Same MP person (but had switched aircraft to ATC)
* [ ] MFD OBS colors on b50
*
**Todo**
* [ ] Deploy
* [ ] GM tilt angles (needs serious thinking)
* [ ] Clicking A-G should set GM
* [ ] VSR switch speed at each bar instead of each frame | priority | new radar system bugs dlink symbol on non dlink aircraft same mp person but had switched aircraft to atc mfd obs colors on todo deploy gm tilt angles needs serious thinking clicking a g should set gm vsr switch speed at each bar instead of each frame | 1 |
272,745 | 8,517,122,989 | IssuesEvent | 2018-11-01 06:31:59 | mapbox/mapbox-events-android | https://api.github.com/repos/mapbox/mapbox-events-android | closed | Root Cause of Missing Context | bug high priority | Investigate missing `MapboxTelemetry.applicationContext` when generating a location event.
- possible causes could be a race condition or odd background state
- solution may be to just drop events when context is not set
Reference Ticket: https://github.com/mapbox/mapbox-events-android/pull/144
cc: @zugaldia | 1.0 | Root Cause of Missing Context - Investigate missing `MapboxTelemetry.applicationContext` when generating a location event.
- possible causes could be a race condition or odd background state
- solution may be to just drop events when context is not set
Reference Ticket: https://github.com/mapbox/mapbox-events-android/pull/144
cc: @zugaldia | priority | root cause of missing context investigate missing mapboxtelemetry applicationcontext when generating a location event possible causes could be a race condition or odd background state solution may be to just drop events when context is not set reference ticket cc zugaldia | 1 |
368,959 | 10,886,599,555 | IssuesEvent | 2019-11-18 12:55:50 | canonical-web-and-design/ubuntu.com | https://api.github.com/repos/canonical-web-and-design/ubuntu.com | closed | Download page redirects to non-existing mirror URL | Priority: High | When using the download page for Ubuntu, I'm being systematically redirected to non-existing mirror URL and getting HTTP 404. Which will likely block a lot of people from downloading Ubuntu.
---
*Reported from: https://ubuntu.com/* | 1.0 | Download page redirects to non-existing mirror URL - When using the download page for Ubuntu, I'm being systematically redirected to non-existing mirror URL and getting HTTP 404. Which will likely block a lot of people from downloading Ubuntu.
---
*Reported from: https://ubuntu.com/* | priority | download page redirects to non existing mirror url when using the download page for ubuntu i m being systematically redirected to non existing mirror url and getting http which will likely block a lot of people from downloading ubuntu reported from | 1 |
117,465 | 4,716,567,138 | IssuesEvent | 2016-10-16 04:27:16 | CS2103AUG2016-W11-C3/main | https://api.github.com/repos/CS2103AUG2016-W11-C3/main | opened | User story: Specify own natural language | priority.high type.enhancement type.story | Description:
User can customize the app
Solution/Feature:
newCommand command? | 1.0 | User story: Specify own natural language - Description:
User can customize the app
Solution/Feature:
newCommand command? | priority | user story specify own natural language description user can customize the app solution feature newcommand command | 1 |
526,631 | 15,297,231,569 | IssuesEvent | 2021-02-24 08:07:39 | epam/ketcher | https://api.github.com/repos/epam/ketcher | closed | 'Calculated Values' for reaction doesn't work in Standalone mode. | bug external high priority | **Affected Version:** _Standalone_
**Preconditions:**
- Ketcher in Standalone mode is launched.
- [Reaction.zip](https://github.com/epam/ketcher/files/5942078/Reaction.zip) is opened or some reaction is already created on canvas.
**Steps to Reproduce:**
1. Click 'Calculated Values' button.
**Expected Result:**
Modal window is opened, following values are display in window:

**Actual Result:**
Modal window is opened with zero values, error message appears:

| 1.0 | 'Calculated Values' for reaction doesn't work in Standalone mode. - **Affected Version:** _Standalone_
**Preconditions:**
- Ketcher in Standalone mode is launched.
- [Reaction.zip](https://github.com/epam/ketcher/files/5942078/Reaction.zip) is opened or some reaction is already created on canvas.
**Steps to Reproduce:**
1. Click 'Calculated Values' button.
**Expected Result:**
Modal window is opened, following values are display in window:

**Actual Result:**
Modal window is opened with zero values, error message appears:

| priority | calculated values for reaction doesn t work in standalone mode affected version standalone preconditions ketcher in standalone mode is launched is opened or some reaction is already created on canvas steps to reproduce click calculated values button expected result modal window is opened following values are display in window actual result modal window is opened with zero values error message appears | 1 |
128,152 | 5,049,788,682 | IssuesEvent | 2016-12-20 16:48:05 | RobotLocomotion/drake | https://api.github.com/repos/RobotLocomotion/drake | opened | Unit Tests Are Dynamically Linking Against Installed Libraries Rather Than The Newest Libraries | priority: high type: bug | # Problem Definition
When running a unit test, we often want it to dynamically linked against the _latest_ version of the library (located in `drake-distro/build/drake/...`), not necessarily the version that was _installed_, which may be older (located in `drake-distro/build/install/...`). A common use case is we want to instrument the Device Under Test (DUT) with temporary debug print statements. By linking against the installed library rather than the most recent library, these print statements will not show up, resulting in confusion.
# How to Replicate the Problem
The following was tested on an Ubuntu 14.04 system.
Starting from a clean build environment, execute the following:
```
cd drake-distro
mkdir build; cd build
cmake .. -G Ninja -DCMAKE_BUILD_TYPE:STRING=Debug
ninja
cd drake
ctest -VV -R package_map_test
```
You should see that the unit test `package_map_test` successfully ran.
Next, open `drake-distro/drake/multibody/parsers/package_map.cc` and modify method `Contains()` to be:
```
bool PackageMap::Contains(const string& package_name) const {
drake::log()->info("Checking package_name {}.", package_name);
return map_.find(package_name) != map_.end();
}
```
Note the newly added log statement, which results in a message being printed to the console.
Save and exit your text editor, then execute the following commands:
```
cd drake-distro/build/drake
ninja
ctest -VV -R package_map_test
```
You should see that the modification did not apply (nothing new is printed to the screen).
To resolve the problem, we need to build at the super-build level:
```
cd drake-distro/build
ninja
cd drake
ctest -VV -R package_map_test
```
You should not see that the newly added print statement results in a bunch of text being printed to the console. | 1.0 | Unit Tests Are Dynamically Linking Against Installed Libraries Rather Than The Newest Libraries - # Problem Definition
When running a unit test, we often want it to dynamically linked against the _latest_ version of the library (located in `drake-distro/build/drake/...`), not necessarily the version that was _installed_, which may be older (located in `drake-distro/build/install/...`). A common use case is we want to instrument the Device Under Test (DUT) with temporary debug print statements. By linking against the installed library rather than the most recent library, these print statements will not show up, resulting in confusion.
# How to Replicate the Problem
The following was tested on an Ubuntu 14.04 system.
Starting from a clean build environment, execute the following:
```
cd drake-distro
mkdir build; cd build
cmake .. -G Ninja -DCMAKE_BUILD_TYPE:STRING=Debug
ninja
cd drake
ctest -VV -R package_map_test
```
You should see that the unit test `package_map_test` successfully ran.
Next, open `drake-distro/drake/multibody/parsers/package_map.cc` and modify method `Contains()` to be:
```
bool PackageMap::Contains(const string& package_name) const {
drake::log()->info("Checking package_name {}.", package_name);
return map_.find(package_name) != map_.end();
}
```
Note the newly added log statement, which results in a message being printed to the console.
Save and exit your text editor, then execute the following commands:
```
cd drake-distro/build/drake
ninja
ctest -VV -R package_map_test
```
You should see that the modification did not apply (nothing new is printed to the screen).
To resolve the problem, we need to build at the super-build level:
```
cd drake-distro/build
ninja
cd drake
ctest -VV -R package_map_test
```
You should not see that the newly added print statement results in a bunch of text being printed to the console. | priority | unit tests are dynamically linking against installed libraries rather than the newest libraries problem definition when running a unit test we often want it to dynamically linked against the latest version of the library located in drake distro build drake not necessarily the version that was installed which may be older located in drake distro build install a common use case is we want to instrument the device under test dut with temporary debug print statements by linking against the installed library rather than the most recent library these print statements will not show up resulting in confusion how to replicate the problem the following was tested on an ubuntu system starting from a clean build environment execute the following cd drake distro mkdir build cd build cmake g ninja dcmake build type string debug ninja cd drake ctest vv r package map test you should see that the unit test package map test successfully ran next open drake distro drake multibody parsers package map cc and modify method contains to be bool packagemap contains const string package name const drake log info checking package name package name return map find package name map end note the newly added log statement which results in a message being printed to the console save and exit your text editor then execute the following commands cd drake distro build drake ninja ctest vv r package map test you should see that the modification did not apply nothing new is printed to the screen to resolve the problem we need to build at the super build level cd drake distro build ninja cd drake ctest vv r package map test you should not see that the newly added print statement results in a bunch of text being printed to the console | 1 |
802,040 | 28,588,468,252 | IssuesEvent | 2023-04-22 00:23:42 | ua-snap/iem-webapp | https://api.github.com/repos/ua-snap/iem-webapp | opened | Change beetle mini-map color scheme to be more colorblind friendly | high priority | A color scheme like this would be a big improvement:

| 1.0 | Change beetle mini-map color scheme to be more colorblind friendly - A color scheme like this would be a big improvement:

| priority | change beetle mini map color scheme to be more colorblind friendly a color scheme like this would be a big improvement | 1 |
636,140 | 20,593,090,309 | IssuesEvent | 2022-03-05 04:11:54 | gilhrpenner/COMP4350 | https://api.github.com/repos/gilhrpenner/COMP4350 | closed | Delete a listing | user story high priority | ## Description
As the owner of the listing\
I want to delete the listing\
so that no one else can see or make offers to it.
## Acceptance Criteria
- User must be logged in
- User must own the listing
- User must confirm a "warning" dialog indicating there is no coming back.
## Dev tasks
- [x] [Create frontend component to delete a listing](https://github.com/gilhrpenner/COMP4350/issues/125)
- [x] [Create backend endpoint to delete one or multiple listings](https://github.com/gilhrpenner/COMP4350/issues/98)
Story points: 3/5 | 1.0 | Delete a listing - ## Description
As the owner of the listing\
I want to delete the listing\
so that no one else can see or make offers to it.
## Acceptance Criteria
- User must be logged in
- User must own the listing
- User must confirm a "warning" dialog indicating there is no coming back.
## Dev tasks
- [x] [Create frontend component to delete a listing](https://github.com/gilhrpenner/COMP4350/issues/125)
- [x] [Create backend endpoint to delete one or multiple listings](https://github.com/gilhrpenner/COMP4350/issues/98)
Story points: 3/5 | priority | delete a listing description as the owner of the listing i want to delete the listing so that no one else can see or make offers to it acceptance criteria user must be logged in user must own the listing user must confirm a warning dialog indicating there is no coming back dev tasks story points | 1 |
494,256 | 14,247,471,726 | IssuesEvent | 2020-11-19 11:30:07 | geosolutions-it/C179-DBIAIT | https://api.github.com/repos/geosolutions-it/C179-DBIAIT | opened | Fix status FAILED in the process tasks | Priority: High bug | After adding the status for each table in imported layer, now the final status of a process task is FAILED.
But looking in the log the task work properly (SUCCESS). | 1.0 | Fix status FAILED in the process tasks - After adding the status for each table in imported layer, now the final status of a process task is FAILED.
But looking in the log the task work properly (SUCCESS). | priority | fix status failed in the process tasks after adding the status for each table in imported layer now the final status of a process task is failed but looking in the log the task work properly success | 1 |
641,068 | 20,817,083,158 | IssuesEvent | 2022-03-18 11:32:15 | wso2/product-apim | https://api.github.com/repos/wso2/product-apim | closed | Cannot update Key Manager when switched from Token Exchange to Direct Token Type | Type/Bug Priority/Highest APIM - 4.1.0 | ### Description:
Steps to reproduce -
1. Add a key manager from the admin portal selecting only Token Exchange as Token Type.
2. Then edit it and change the token type to Direct and save.
An error is thrown in the console as follows.
```
ERROR - GlobalThrowableMapper Error while Retrieving Key Manager configuration for e4995c10-7927-4d32-853c-4e7f4d29f9ea in organization carbon.super
```
When trying to load the list of Key Managers in the Admin portal UI, the Screen shows following error.
<img width="1680" alt="Screenshot 2022-03-17 at 09 22 16" src="https://user-images.githubusercontent.com/8557410/158733610-027e4f34-f536-4506-a130-7e95459d2dbd.png">
The command line shows the below error.
```
[2022-03-17 09:17:15,068] ERROR - GlobalThrowableMapper An unknown exception has been captured by the global exception mapper.
java.lang.NullPointerException: null
at org.wso2.carbon.apimgt.impl.APIAdminImpl.setAliasForTokenExchangeKeyManagers_aroundBody32(APIAdminImpl.java:430) ~[org.wso2.carbon.apimgt.impl_9.20.28.SNAPSHOT.jar:?]
at org.wso2.carbon.apimgt.impl.APIAdminImpl.setAliasForTokenExchangeKeyManagers(APIAdminImpl.java:412) ~[org.wso2.carbon.apimgt.impl_9.20.28.SNAPSHOT.jar:?]
at org.wso2.carbon.apimgt.impl.APIAdminImpl.getKeyManagerConfigurationsByOrganization_aroundBody28(APIAdminImpl.java:375) ~[org.wso2.carbon.apimgt.impl_9.20.28.SNAPSHOT.jar:?]
at org.wso2.carbon.apimgt.impl.APIAdminImpl.getKeyManagerConfigurationsByOrganization(APIAdminImpl.java:333) ~[org.wso2.carbon.apimgt.impl_9.20.28.SNAPSHOT.jar:?]
at org.wso2.carbon.apimgt.rest.api.admin.v1.impl.KeyManagersApiServiceImpl.keyManagersGet(KeyManagersApiServiceImpl.java:73) ~[?:?]
at org.wso2.carbon.apimgt.rest.api.admin.v1.KeyManagersApi.keyManagersGet(KeyManagersApi.java:70) ~[?:?]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_301]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_301]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_301]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_301]
at org.apache.cxf.service.invoker.AbstractInvoker.performInvocation(AbstractInvoker.java:179) ~[?:?]
```
The Devportal Key manager screens also do not load giving the above error.
There is no way of recovery unless a new pack is used.
| 1.0 | Cannot update Key Manager when switched from Token Exchange to Direct Token Type - ### Description:
Steps to reproduce -
1. Add a key manager from the admin portal selecting only Token Exchange as Token Type.
2. Then edit it and change the token type to Direct and save.
An error is thrown in the console as follows.
```
ERROR - GlobalThrowableMapper Error while Retrieving Key Manager configuration for e4995c10-7927-4d32-853c-4e7f4d29f9ea in organization carbon.super
```
When trying to load the list of Key Managers in the Admin portal UI, the Screen shows following error.
<img width="1680" alt="Screenshot 2022-03-17 at 09 22 16" src="https://user-images.githubusercontent.com/8557410/158733610-027e4f34-f536-4506-a130-7e95459d2dbd.png">
The command line shows the below error.
```
[2022-03-17 09:17:15,068] ERROR - GlobalThrowableMapper An unknown exception has been captured by the global exception mapper.
java.lang.NullPointerException: null
at org.wso2.carbon.apimgt.impl.APIAdminImpl.setAliasForTokenExchangeKeyManagers_aroundBody32(APIAdminImpl.java:430) ~[org.wso2.carbon.apimgt.impl_9.20.28.SNAPSHOT.jar:?]
at org.wso2.carbon.apimgt.impl.APIAdminImpl.setAliasForTokenExchangeKeyManagers(APIAdminImpl.java:412) ~[org.wso2.carbon.apimgt.impl_9.20.28.SNAPSHOT.jar:?]
at org.wso2.carbon.apimgt.impl.APIAdminImpl.getKeyManagerConfigurationsByOrganization_aroundBody28(APIAdminImpl.java:375) ~[org.wso2.carbon.apimgt.impl_9.20.28.SNAPSHOT.jar:?]
at org.wso2.carbon.apimgt.impl.APIAdminImpl.getKeyManagerConfigurationsByOrganization(APIAdminImpl.java:333) ~[org.wso2.carbon.apimgt.impl_9.20.28.SNAPSHOT.jar:?]
at org.wso2.carbon.apimgt.rest.api.admin.v1.impl.KeyManagersApiServiceImpl.keyManagersGet(KeyManagersApiServiceImpl.java:73) ~[?:?]
at org.wso2.carbon.apimgt.rest.api.admin.v1.KeyManagersApi.keyManagersGet(KeyManagersApi.java:70) ~[?:?]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_301]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_301]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_301]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_301]
at org.apache.cxf.service.invoker.AbstractInvoker.performInvocation(AbstractInvoker.java:179) ~[?:?]
```
The Devportal Key manager screens also do not load giving the above error.
There is no way of recovery unless a new pack is used.
| priority | cannot update key manager when switched from token exchange to direct token type description steps to reproduce add a key manager from the admin portal selecting only token exchange as token type then edit it and change the token type to direct and save an error is thrown in the console as follows error globalthrowablemapper error while retrieving key manager configuration for in organization carbon super when trying to load the list of key managers in the admin portal ui the screen shows following error img width alt screenshot at src the command line shows the below error error globalthrowablemapper an unknown exception has been captured by the global exception mapper java lang nullpointerexception null at org carbon apimgt impl apiadminimpl setaliasfortokenexchangekeymanagers apiadminimpl java at org carbon apimgt impl apiadminimpl setaliasfortokenexchangekeymanagers apiadminimpl java at org carbon apimgt impl apiadminimpl getkeymanagerconfigurationsbyorganization apiadminimpl java at org carbon apimgt impl apiadminimpl getkeymanagerconfigurationsbyorganization apiadminimpl java at org carbon apimgt rest api admin impl keymanagersapiserviceimpl keymanagersget keymanagersapiserviceimpl java at org carbon apimgt rest api admin keymanagersapi keymanagersget keymanagersapi java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org apache cxf service invoker abstractinvoker performinvocation abstractinvoker java the devportal key manager screens also do not load giving the above error there is no way of recovery unless a new pack is used | 1 |
48,931 | 3,000,846,805 | IssuesEvent | 2015-07-24 06:42:42 | jayway/powermock | https://api.github.com/repos/jayway/powermock | closed | impossible to mock private methods with multiple arguments | bug imported Milestone-Release1.4 Priority-High | _From [schiz...@gmail.com](https://code.google.com/u/102447286385233918727/) on November 07, 2010 00:46:45_
When you try to mock private methods with multiple arguments via
any PowerMockito method using DefaultMethodExpectationSetup
you get
java.lang.IllegalArgumentException: wrong number of arguments
The problem is reproducible in 1.4.6 version.
It is caused by wrong arguments passed into method.invoke:
method.invoke(object, firstArgument, additionalArguments)
where an array of additional arguments is treated as single argument.
Instead it should be
method.invoke(object, firstAndAdditionalArgumentsArray)
See patch and test attached.
Mvh,
Stas Chizhov
**Attachment:** [DefaultMethodExpectationSetupTest.java mockito-api-method-exp-setup-fix.diff](http://code.google.com/p/powermock/issues/detail?id=289)
_Original issue: http://code.google.com/p/powermock/issues/detail?id=289_ | 1.0 | impossible to mock private methods with multiple arguments - _From [schiz...@gmail.com](https://code.google.com/u/102447286385233918727/) on November 07, 2010 00:46:45_
When you try to mock private methods with multiple arguments via
any PowerMockito method using DefaultMethodExpectationSetup
you get
java.lang.IllegalArgumentException: wrong number of arguments
The problem is reproducible in 1.4.6 version.
It is caused by wrong arguments passed into method.invoke:
method.invoke(object, firstArgument, additionalArguments)
where an array of additional arguments is treated as single argument.
Instead it should be
method.invoke(object, firstAndAdditionalArgumentsArray)
See patch and test attached.
Mvh,
Stas Chizhov
**Attachment:** [DefaultMethodExpectationSetupTest.java mockito-api-method-exp-setup-fix.diff](http://code.google.com/p/powermock/issues/detail?id=289)
_Original issue: http://code.google.com/p/powermock/issues/detail?id=289_ | priority | impossible to mock private methods with multiple arguments from on november when you try to mock private methods with multiple arguments via any powermockito method using defaultmethodexpectationsetup you get java lang illegalargumentexception wrong number of arguments the problem is reproducible in version it is caused by wrong arguments passed into method invoke method invoke object firstargument additionalarguments where an array of additional arguments is treated as single argument instead it should be method invoke object firstandadditionalargumentsarray see patch and test attached mvh stas chizhov attachment original issue | 1 |
104,011 | 4,188,393,931 | IssuesEvent | 2016-06-23 20:36:18 | dmusican/Elegit | https://api.github.com/repos/dmusican/Elegit | closed | Get Elegit started again on my own computer | bug priority high | Because it keeps throwing NullPointerExceptions.
I already tried clearing the prefs in com.apple.java.util.prefs as explained in
http://stackoverflow.com/questions/675864/where-are-java-preferences-stored-on-mac-os-x
and using Xcode to edit the pedit file,
but it won't even open without a NullPointerException. Get it started again, and document here what I had to do to recover. | 1.0 | Get Elegit started again on my own computer - Because it keeps throwing NullPointerExceptions.
I already tried clearing the prefs in com.apple.java.util.prefs as explained in
http://stackoverflow.com/questions/675864/where-are-java-preferences-stored-on-mac-os-x
and using Xcode to edit the pedit file,
but it won't even open without a NullPointerException. Get it started again, and document here what I had to do to recover. | priority | get elegit started again on my own computer because it keeps throwing nullpointerexceptions i already tried clearing the prefs in com apple java util prefs as explained in and using xcode to edit the pedit file but it won t even open without a nullpointerexception get it started again and document here what i had to do to recover | 1 |
638,657 | 20,733,713,277 | IssuesEvent | 2022-03-14 11:50:23 | bitfoundation/bitframework | https://api.github.com/repos/bitfoundation/bitframework | closed | Improve `BitFileUpload` component demo page | area / components feature high priority | improve the `BitFileUpload` demo page by introducing new samples for the new features added to this component. | 1.0 | Improve `BitFileUpload` component demo page - improve the `BitFileUpload` demo page by introducing new samples for the new features added to this component. | priority | improve bitfileupload component demo page improve the bitfileupload demo page by introducing new samples for the new features added to this component | 1 |
689,890 | 23,639,082,963 | IssuesEvent | 2022-08-25 15:28:03 | opendatahub-io/odh-dashboard | https://api.github.com/repos/opendatahub-io/odh-dashboard | closed | [KFNBC] Telemetry event "Notebook Server Started" not sent when spawning notebooks with kubeflow | kind/bug feature/notebook-controller priority/high | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
In RHODS 1.15, the telemetry event "Notebook Server Started" (introduced in RHODS-863) is not sent when launching notebooks. it's not working. neither when using JupyterHub or Kubeflow. Other telemetry events are being sent properly
This is a regression, as for 1.14 this event is sent properly when using JupyterHub
### Expected Behavior
The event should be sent if "Usage Data Colletion" is enabled
### Steps To Reproduce
1. Enable "Usage Data Collection" in Dashboard > Cluster Settings
2. Reload Dashboard just in case
3. Launch a juputer notebook using kubeflow
4. Connect to the telemetry service and verify if the event "Notebook Server Started" was sent (more info at RHODS-863)
### Workaround (if any)
_No response_
### OpenShift Infrastructure Version
_No response_
### Openshift Version
ODS, OpenShift 4.11
### What browsers are you seeing the problem on?
Chrome
### Open Data Hub Version
```yml
RHODS 1.15
```
### Relevant log output
_No response_ | 1.0 | [KFNBC] Telemetry event "Notebook Server Started" not sent when spawning notebooks with kubeflow - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
In RHODS 1.15, the telemetry event "Notebook Server Started" (introduced in RHODS-863) is not sent when launching notebooks. it's not working. neither when using JupyterHub or Kubeflow. Other telemetry events are being sent properly
This is a regression, as for 1.14 this event is sent properly when using JupyterHub
### Expected Behavior
The event should be sent if "Usage Data Colletion" is enabled
### Steps To Reproduce
1. Enable "Usage Data Collection" in Dashboard > Cluster Settings
2. Reload Dashboard just in case
3. Launch a juputer notebook using kubeflow
4. Connect to the telemetry service and verify if the event "Notebook Server Started" was sent (more info at RHODS-863)
### Workaround (if any)
_No response_
### OpenShift Infrastructure Version
_No response_
### Openshift Version
ODS, OpenShift 4.11
### What browsers are you seeing the problem on?
Chrome
### Open Data Hub Version
```yml
RHODS 1.15
```
### Relevant log output
_No response_ | priority | telemetry event notebook server started not sent when spawning notebooks with kubeflow is there an existing issue for this i have searched the existing issues current behavior in rhods the telemetry event notebook server started introduced in rhods is not sent when launching notebooks it s not working neither when using jupyterhub or kubeflow other telemetry events are being sent properly this is a regression as for this event is sent properly when using jupyterhub expected behavior the event should be sent if usage data colletion is enabled steps to reproduce enable usage data collection in dashboard cluster settings reload dashboard just in case launch a juputer notebook using kubeflow connect to the telemetry service and verify if the event notebook server started was sent more info at rhods workaround if any no response openshift infrastructure version no response openshift version ods openshift what browsers are you seeing the problem on chrome open data hub version yml rhods relevant log output no response | 1 |
816,965 | 30,619,623,656 | IssuesEvent | 2023-07-24 07:18:29 | dumpus-app/dumpus-app | https://api.github.com/repos/dumpus-app/dumpus-app | closed | Hide Android & Desktop in setup if the app is running on iOS | bug high priority iOS | We need to hide the button for the other platforms if the app is running on iOS in the onboarding setup (because Apple refuses it when submitting 🙂)
**we should only do this for iOS. and we should keep the three buttons iOS, Android, Desktop if the app is running on something else**

| 1.0 | Hide Android & Desktop in setup if the app is running on iOS - We need to hide the button for the other platforms if the app is running on iOS in the onboarding setup (because Apple refuses it when submitting 🙂)
**we should only do this for iOS. and we should keep the three buttons iOS, Android, Desktop if the app is running on something else**

| priority | hide android desktop in setup if the app is running on ios we need to hide the button for the other platforms if the app is running on ios in the onboarding setup because apple refuses it when submitting 🙂 we should only do this for ios and we should keep the three buttons ios android desktop if the app is running on something else | 1 |
633,113 | 20,245,251,183 | IssuesEvent | 2022-02-14 13:09:58 | Scan-o-Matic/scanomatic-standalone | https://api.github.com/repos/Scan-o-Matic/scanomatic-standalone | opened | Validate Epson V850 | high priority | # 📋 Info
Make sure it is compatible with SoM
# 🏁 DoD
- [ ] Can perform scan in transparency mode using the same dimensions and resolution
```
SCAN_MODES.TPU: {
SCAN_FLAGS.Source: "Transparency",
SCAN_FLAGS.Format: "tiff",
SCAN_FLAGS.Resolution: "600",
SCAN_FLAGS.Mode: "Gray",
SCAN_FLAGS.Left: "0",
SCAN_FLAGS.Top: "0",
SCAN_FLAGS.Width: "203.2",
SCAN_FLAGS.Height: "254",
SCAN_FLAGS.Depth: "8",
},
```
- [ ] Can be made so that it doesn't turn itself off or goes into sleep-mode / always accessible computer over the range of days.
(Previous hardware hack forces power button to always be pressed)
| 1.0 | Validate Epson V850 - # 📋 Info
Make sure it is compatible with SoM
# 🏁 DoD
- [ ] Can perform scan in transparency mode using the same dimensions and resolution
```
SCAN_MODES.TPU: {
SCAN_FLAGS.Source: "Transparency",
SCAN_FLAGS.Format: "tiff",
SCAN_FLAGS.Resolution: "600",
SCAN_FLAGS.Mode: "Gray",
SCAN_FLAGS.Left: "0",
SCAN_FLAGS.Top: "0",
SCAN_FLAGS.Width: "203.2",
SCAN_FLAGS.Height: "254",
SCAN_FLAGS.Depth: "8",
},
```
- [ ] Can be made so that it doesn't turn itself off or goes into sleep-mode / always accessible computer over the range of days.
(Previous hardware hack forces power button to always be pressed)
| priority | validate epson 📋 info make sure it is compatible with som 🏁 dod can perform scan in transparency mode using the same dimensions and resolution scan modes tpu scan flags source transparency scan flags format tiff scan flags resolution scan flags mode gray scan flags left scan flags top scan flags width scan flags height scan flags depth can be made so that it doesn t turn itself off or goes into sleep mode always accessible computer over the range of days previous hardware hack forces power button to always be pressed | 1 |
647,022 | 21,088,789,310 | IssuesEvent | 2022-04-04 00:41:41 | PlaceOS/PlaceOS | https://api.github.com/repos/PlaceOS/PlaceOS | closed | Enable release channels | focus: devops priority: high meta | Define a GitHub Workflow that controls promotion of versioned builds to [release channels](https://github.com/PlaceOS/PlaceOS#channels).
This must _not_ build any new release assets, but should retag those previously released under the versioned build to be promoted.
Workflow should be triggered by a manual interaction, following completion on any non-automated test processes. | 1.0 | Enable release channels - Define a GitHub Workflow that controls promotion of versioned builds to [release channels](https://github.com/PlaceOS/PlaceOS#channels).
This must _not_ build any new release assets, but should retag those previously released under the versioned build to be promoted.
Workflow should be triggered by a manual interaction, following completion on any non-automated test processes. | priority | enable release channels define a github workflow that controls promotion of versioned builds to this must not build any new release assets but should retag those previously released under the versioned build to be promoted workflow should be triggered by a manual interaction following completion on any non automated test processes | 1 |
626,383 | 19,809,573,490 | IssuesEvent | 2022-01-19 10:41:33 | HHS81/c182s | https://api.github.com/repos/HHS81/c182s | closed | Glareshield light: Flaps and trim wheels error | bug effects high priority textures | Something is weird in master with the glareshield light on:

| 1.0 | Glareshield light: Flaps and trim wheels error - Something is weird in master with the glareshield light on:

| priority | glareshield light flaps and trim wheels error something is weird in master with the glareshield light on | 1 |
280,881 | 8,687,900,874 | IssuesEvent | 2018-12-03 14:54:23 | strapi/strapi | https://api.github.com/repos/strapi/strapi | closed | Uploads for a custom model issue | priority: high status: have to reproduce type: bug 🐛 | **Informations**
- **Node.js version**: v10.13.0
- **npm version**: v6.4.1
- **Strapi version**: 3.0.0-alpha.14.5
- **Database**: MongoDB 4.0.4
- **Operating system**: McaOS 10.14.1
**What is the current behavior?**
I create new content type Post and set 2 fields: title and images. The title defines as a String and Images defines as Media with multiple file uploading available. When I tried to upload files to the new saved Post I used the following API endpoint "localhost:1337/upload" and my Postman looks:

And I got a response with newly uploaded files and I can see it in my local directory "public/uploads". But when I requested my Post there are no images, just empty array.
My model looks like this:

And API response like this:

**Steps to reproduce the problem**
1. Create new Content type via the admin dashboard
2. Create 2 fields "title: String" and "images: Media" with multiple files uploading checkbox
3. Create new Post record using Postman
4. Upload files using new created Post
5. Files saved but there is nothing in "images" filed of Post
**What is the expected behavior?**
Uploaded files should appear in "images" field
**Suggested solutions**
No
Any help will be very appreciated. Also, same behavior if I set images field as a single Media file. | 1.0 | Uploads for a custom model issue - **Informations**
- **Node.js version**: v10.13.0
- **npm version**: v6.4.1
- **Strapi version**: 3.0.0-alpha.14.5
- **Database**: MongoDB 4.0.4
- **Operating system**: McaOS 10.14.1
**What is the current behavior?**
I create new content type Post and set 2 fields: title and images. The title defines as a String and Images defines as Media with multiple file uploading available. When I tried to upload files to the new saved Post I used the following API endpoint "localhost:1337/upload" and my Postman looks:

And I got a response with newly uploaded files and I can see it in my local directory "public/uploads". But when I requested my Post there are no images, just empty array.
My model looks like this:

And API response like this:

**Steps to reproduce the problem**
1. Create new Content type via the admin dashboard
2. Create 2 fields "title: String" and "images: Media" with multiple files uploading checkbox
3. Create new Post record using Postman
4. Upload files using new created Post
5. Files saved but there is nothing in "images" filed of Post
**What is the expected behavior?**
Uploaded files should appear in "images" field
**Suggested solutions**
No
Any help will be very appreciated. Also, same behavior if I set images field as a single Media file. | priority | uploads for a custom model issue informations node js version npm version strapi version alpha database mongodb operating system mcaos what is the current behavior i create new content type post and set fields title and images the title defines as a string and images defines as media with multiple file uploading available when i tried to upload files to the new saved post i used the following api endpoint localhost upload and my postman looks and i got a response with newly uploaded files and i can see it in my local directory public uploads but when i requested my post there are no images just empty array my model looks like this and api response like this steps to reproduce the problem create new content type via the admin dashboard create fields title string and images media with multiple files uploading checkbox create new post record using postman upload files using new created post files saved but there is nothing in images filed of post what is the expected behavior uploaded files should appear in images field suggested solutions no any help will be very appreciated also same behavior if i set images field as a single media file | 1 |
225,067 | 7,477,561,749 | IssuesEvent | 2018-04-04 08:43:27 | meetalva/alva | https://api.github.com/repos/meetalva/alva | closed | As a designer I want to add a placeholder component | priority: high status: has PR type: feature | As a designer I want to add a placeholder component so I can easily experiment with new component designs on a page.
Acceptance criterias:
1) I can add placeholder image (e.g. PNG, JPG) to a page
2) I can arrange the placeholder in the same way as other components (except properties, of course)
3) I can delete the placeholder
Note: We could internally handle the placeholder images with a image component of the component library
The placeholder component should support the following events:
* click
* hover
* TODO | 1.0 | As a designer I want to add a placeholder component - As a designer I want to add a placeholder component so I can easily experiment with new component designs on a page.
Acceptance criterias:
1) I can add placeholder image (e.g. PNG, JPG) to a page
2) I can arrange the placeholder in the same way as other components (except properties, of course)
3) I can delete the placeholder
Note: We could internally handle the placeholder images with a image component of the component library
The placeholder component should support the following events:
* click
* hover
* TODO | priority | as a designer i want to add a placeholder component as a designer i want to add a placeholder component so i can easily experiment with new component designs on a page acceptance criterias i can add placeholder image e g png jpg to a page i can arrange the placeholder in the same way as other components except properties of course i can delete the placeholder note we could internally handle the placeholder images with a image component of the component library the placeholder component should support the following events click hover todo | 1 |
595,457 | 18,067,470,768 | IssuesEvent | 2021-09-20 20:59:46 | OpenMandrivaAssociation/test2 | https://api.github.com/repos/OpenMandrivaAssociation/test2 | closed | mcc -> XFdrake error (Bugzilla Bug 9) | bug high priority major | This issue was created automatically with bugzilla2github
# Bugzilla Bug 9
Date: 2013-06-22 22:06:04 +0000
From: luca <<agoiza@gmail.com>>
To: OpenMandriva QA <<bugs@openmandriva.org>>
CC: @benbullard79, @christann404, @itchka, @robxu9
Last updated: 2013-06-25 22:26:43 +0000
## Comment 24
Date: 2013-06-22 22:06:04 +0000
From: luca <<agoiza@gmail.com>>
imported vbox image(good job!) and updated
when i try to change X server config in mcc:
============================================================================
il programma "XFdrake" si è interrotto(SEGV) per questo errore:
SEGV
Glibc's trace:
4: /usr/lib/libperl.so.5.16(Perl_sighandler+0x1e2) [0xb7614e72]
5: /usr/lib/libperl.so.5.16(Perl_csighandler+0x99) [0xb76106b9]
6: [0xffffe400]
7: /lib/libstdc++.so.6(_ZNSsC1ERKSs+0x1a) [0xb6dfa96a]
8: /usr/lib/libldetect.so.0.13(_ZN7ldetect3dmi5probeEv+0x969) [0xb6e3bc9d]
9: /usr/lib/perl5/vendor_perl/5.16.3/i386-linux-thread-multi/auto/LDetect/LDetect.so(+0x1
c25) [0xb6e5cc25]
10: /usr/lib/libperl.so.5.16(Perl_pp_entersub+0x602) [0xb762eca2]
11: /usr/lib/libperl.so.5.16(Perl_runops_standard+0x18) [0xb7626b78]
12: /usr/lib/libperl.so.5.16(perl_run+0x2d9) [0xb75ba0a9]
13: /usr/bin/perl() [0x8048a03]
14: /lib/i686/libc.so.6(__libc_start_main+0xf5) [0xb73cfa85]
15: /usr/bin/perl() [0x8048a35]
Perl's trace:
standalone::bug_handler() called from /usr/lib/libDrakX/standalone.pm:225
standalone::__ANON__() called from /usr/lib/libDrakX/detect_devices.pm:971
(eval)() called from /usr/lib/libDrakX/detect_devices.pm:971
detect_devices::dmi_probe() called from /usr/lib/libDrakX/detect_devices.pm:981
detect_devices::probeall() called from /usr/lib/libDrakX/detect_devices.pm:442
detect_devices::probe_category() called from /usr/lib/libDrakX/mouse.pm:293
mouse::detect() called from /usr/lib/libDrakX/Xconfig/default.pm:18
Xconfig::default::configure() called from /usr/lib/libDrakX/Xconfig/main.pm:189
Xconfig::main::configure_everything_or_configure_chooser() called from /usr/sbin/XFdrake:
48
============================================================================
besides mcc try to connect with bugzilla Mandriva and not with Bugzilla OpenMandriva
Luca.
## Comment 25
Date: 2013-06-22 22:16:29 +0000
From: @robxu9
(In reply to comment #0)
> imported vbox image(good job!) and updated
>
> when i try to change X server config in mcc:
>
> ============================================================================
> il programma "XFdrake" si è interrotto(SEGV) per questo errore:
>
> SEGV
> Glibc's trace:
> 4: /usr/lib/libperl.so.5.16(Perl_sighandler+0x1e2) [0xb7614e72]
> 5: /usr/lib/libperl.so.5.16(Perl_csighandler+0x99) [0xb76106b9]
> 6: [0xffffe400]
> 7: /lib/libstdc++.so.6(_ZNSsC1ERKSs+0x1a) [0xb6dfa96a]
> 8: /usr/lib/libldetect.so.0.13(_ZN7ldetect3dmi5probeEv+0x969) [0xb6e3bc9d]
> 9:
> /usr/lib/perl5/vendor_perl/5.16.3/i386-linux-thread-multi/auto/LDetect/
> LDetect.so(+0x1
> c25) [0xb6e5cc25]
> 10: /usr/lib/libperl.so.5.16(Perl_pp_entersub+0x602) [0xb762eca2]
> 11: /usr/lib/libperl.so.5.16(Perl_runops_standard+0x18) [0xb7626b78]
> 12: /usr/lib/libperl.so.5.16(perl_run+0x2d9) [0xb75ba0a9]
> 13: /usr/bin/perl() [0x8048a03]
> 14: /lib/i686/libc.so.6(__libc_start_main+0xf5) [0xb73cfa85]
> 15: /usr/bin/perl() [0x8048a35]
>
> Perl's trace:
> standalone::bug_handler() called from /usr/lib/libDrakX/standalone.pm:225
> standalone::__ANON__() called from /usr/lib/libDrakX/detect_devices.pm:971
> (eval)() called from /usr/lib/libDrakX/detect_devices.pm:971
> detect_devices::dmi_probe() called from
> /usr/lib/libDrakX/detect_devices.pm:981
> detect_devices::probeall() called from
> /usr/lib/libDrakX/detect_devices.pm:442
> detect_devices::probe_category() called from /usr/lib/libDrakX/mouse.pm:293
> mouse::detect() called from /usr/lib/libDrakX/Xconfig/default.pm:18
> Xconfig::default::configure() called from
> /usr/lib/libDrakX/Xconfig/main.pm:189
> Xconfig::main::configure_everything_or_configure_chooser() called from
> /usr/sbin/XFdrake:
> 48
> ============================================================================
>
This might be due to the recent perl rebuild - can you try updating the system again and see if this happens?
>
> besides mcc try to connect with bugzilla Mandriva and not with Bugzilla
> OpenMandriva
This has been fixed and should be pushed soon.
## Comment 26
Date: 2013-06-22 22:40:33 +0000
From: luca <<agoiza@gmail.com>>
i had the error with latest packages.
## Comment 27
Date: 2013-06-23 07:09:25 +0000
From: @benbullard79
I'm seeing something similar in a fully updated OM LX 2013 x86_64 system:
The "XFdrake" program has segfaulted with the following error:
SEGV
Glibc's trace:
4: /usr/lib64/libperl.so.5.16(Perl_sighandler+0x1dd) [0x7fb40cb9a05d]
5: /lib64/libc.so.6(+0x35ea0) [0x7fb40c6f5ea0]
6: /lib64/libstdc++.so.6(_ZNSsC1ERKSs+0x8) [0x7fb40a50e908]
7: /usr/lib64/libldetect.so.0.13(_ZN7ldetect3dmi5probeEv+0x99a) [0x7fb40a5627d6]
8: /usr/lib/perl5/vendor_perl/5.16.3/x86_64-linux-thread-multi/auto/LDetect/LDetect.so(+0x232a) [0x7fb40caa232a]
9: /usr/lib64/libperl.so.5.16(Perl_pp_entersub+0x5ea) [0x7fb40cbb2d6a]
10: /usr/lib64/libperl.so.5.16(Perl_runops_standard+0x16) [0x7fb40cbab426]
11: /usr/lib64/libperl.so.5.16(Perl_call_sv+0x49b) [0x7fb40cb413ab]
12: /usr/lib/perl5/vendor_perl/5.16.3/x86_64-linux-thread-multi/auto/Glib/Glib.so(+0x23b6d) [0x7fb409ffeb6d]
13: /lib64/libgobject-2.0.so.0(g_closure_invoke+0x190) [0x7fb409f9c0b0]
14: /lib64/libgobject-2.0.so.0(+0x21990) [0x7fb409fad990]
15: /lib64/libgobject-2.0.so.0(g_signal_emit_valist+0xe81) [0x7fb409fb57a1]
16: /lib64/libgobject-2.0.so.0(g_signal_emit+0x82) [0x7fb409fb5a22]
17: /usr/lib64/libgtk-x11-2.0.so.0(+0x8e665) [0x7fb409186665]
18: /lib64/libgobject-2.0.so.0(+0x10377) [0x7fb409f9c377]
19: /lib64/libgobject-2.0.so.0(g_signal_emit_valist+0x44f) [0x7fb409fb4d6f]
Perl's trace:
standalone::bug_handler() called from /usr/lib/libDrakX/standalone.pm:225
standalone::__ANON__() called from /usr/lib/libDrakX/detect_devices.pm:971
(eval)() called from /usr/lib/libDrakX/detect_devices.pm:971
detect_devices::dmi_probe() called from /usr/lib/libDrakX/detect_devices.pm:981
detect_devices::probeall() called from /usr/lib/libDrakX/detect_devices.pm:993
detect_devices::matching_driver__regexp() called from /usr/lib/libDrakX/Xconfig/card.pm:93
Xconfig::card::probe() called from /usr/lib/libDrakX/Xconfig/card.pm:278
Xconfig::card::configure() called from /usr/lib/libDrakX/Xconfig/main.pm:131
Xconfig::main::__ANON__() called from /usr/lib/libDrakX/interactive.pm:431
interactive::__ANON__() called from /usr/lib/libDrakX/interactive/gtk.pm:725
interactive::gtk::__ANON__() called from /usr/lib/libDrakX/mygtk2.pm:1443
(eval)() called from /usr/lib/libDrakX/mygtk2.pm:1443
mygtk2::main() called from /usr/lib/libDrakX/ugtk2.pm:767
ugtk2::main() called from /usr/lib/libDrakX/interactive/gtk.pm:900
interactive::gtk::ask_fromW() called from /usr/lib/libDrakX/interactive.pm:534
interactive::ask_from_real() called from /usr/lib/libDrakX/interactive.pm:522
interactive::ask_from_() called from /usr/lib/libDrakX/Xconfig/main.pm:152
Xconfig::main::configure_chooser_raw() called from /usr/lib/libDrakX/Xconfig/main.pm:165
Xconfig::main::configure_chooser() called from /usr/lib/libDrakX/Xconfig/main.pm:188
Xconfig::main::configure_everything_or_configure_chooser() called from /usr/sbin/XFdrake:48
Used theme: elementary
To submit a bug report, click on the report button.
This will open a web browser window on Bugzilla where you'll find a form to fill in.
The information displayed above will be transferred to that server
It would be very useful to attach to your report the output of the following command: 'lspcidrake -v'.
Trying to run XFdrake to install driver for:
$ lspcidrake -v
...
nvidia_current,nvidia173,nvidia71xx,nvidia96xx,nouveau:
NVIDIA Corporation|C51PV [GeForce 6150] [DISPLAY_VGA]
(vendor:10de device:0240 subv:1043 subd:81cd) (rev: a2)
...
FWIW I don't think any of the listed drivers will work with
this hardware. 'nouveau' and 'nv' do not work on this box. It is going to need nvidia 304.88 which we don't have in openMandriva LX 2013. That is what is used and works in ROSA Fresh, Fedora 18, and openSuSE 12.3. Also that's what nVidia website says:
http://www.nvidia.com/object/linux-display-amd64-304.88-driver.html
Neither 319.23 nor 310.xx drivers support GeForce 6 products (including GeForce 6150). I don't see a 310.44 even in their archives.
The 304.88 is the latest driver I know of that works with this hardware.
The 304.88 driver from nVidia website will work in Cooker on this box.
## Comment 31
Date: 2013-06-23 08:42:23 +0000
From: @benbullard79
Luca could you post output of 'lspcidrake -v'? I'm wondering if this issue may be related to specific hardware?
Also when does this happen for you. For me XFdrake opens then seg faults when I select 'Graphic Card' hence my wondering if hardware related as this is the hardware detection step.
## Comment 32
Date: 2013-06-23 08:58:57 +0000
From: luca <<agoiza@gmail.com>>
Hi Ben,
i'm testing Oma in Vbox. i'm using a prepared vbox image.
=====================================================================
lspcidrake -v
unknown : Intel Corporation|82801HM/HEM (ICH8M/ICH8M-E) SATA Controller [AHCI mode] [STORAGE_SATA] (vendor:8086 device:2829) (rev: 02)
i2c_piix4 : Intel Corporation|82371AB/EB/MB PIIX4 ACPI [BRIDGE_OTHER] (vendor:8086 device:7113) (rev: 08)
ohci_hcd : Apple Inc.|KeyLargo/Intrepid USB [SERIAL_USB] (vendor:106b device:003f)
snd_intel8x0 : Intel Corporation|82801AA AC'97 Audio Controller [MULTIMEDIA_AUDIO] (vendor:8086 device:2415 subv:8086 subd:0000) (rev: 01)
vboxadd,vboxguest: InnoTek Systemberatung GmbH|VirtualBox Guest Service [SYSTEM_OTHER] (vendor:80ee device:cafe)
e1000 : Intel Corporation|82540EM Gigabit Ethernet Controller [NETWORK_ETHERNET] (vendor:8086 device:100e subv:8086 subd:001e) (rev: 02)
Card:VirtualBox virtual video card: InnoTek Systemberatung GmbH|VirtualBox Graphics Adapter [DISPLAY_VGA] (vendor:80ee device:beef)
ata_piix,pata_acpi,ata_generic,piix,ide_pci_generic: Intel Corporation|82371AB/EB/MB PIIX4 IDE [STORAGE_IDE] (vendor:8086 device:7111) (rev: 01)
unknown : Intel Corporation|82371SB PIIX3 ISA [Natoma/Triton II] [BRIDGE_ISA] (vendor:8086 device:7000)
unknown : Intel Corporation|440FX - 82441FX PMC [Natoma] [BRIDGE_HOST] (vendor:8086 device:1237) (rev: 02)
xpad,usbhid : VirtualBox|USB Tablet [Human Interface Device|No Subclass|None]
usbcore : Linux Foundation|1.1 root hub [Hub|Unused|Full speed (or root) hub]
Errore di segmentazione (core dumped)
==========================================================================
:-(
XFdrake doesn't open. i have the same problem if mcc -> harddrake2 or mcc -> draksound
## Comment 33
Date: 2013-06-23 09:33:14 +0000
From: @benbullard79
OK. As far as the graphics issue then this is over my head.
Regarding MCC do you mean you get seg fault when opening that or running from cli mcc or drakconf? Because I don't have issuses with those.
## Comment 34
Date: 2013-06-23 11:26:02 +0000
From: @christann404
Hello,
I am seeing the same behavior, but I am also running on a Virtual Box. Running drakconf, print config also crashes.
## Comment 43
Date: 2013-06-24 09:49:22 +0000
From: @itchka
Updated both my installations on Sunday evening. Update included ldetect.so and some perl ldetect modules. Everything seems to be working now.
I'm marking this bug as resolved.
## Comment 44
Date: 2013-06-24 12:10:11 +0000
From: @benbullard79
Yeah, th e seg fault problem is fixed but XFdrake darn sure ain't working correctly on this computer. I'll file a new bug report when I have more time.
OT: Why ain't spell checking working here?
## Comment 45
Date: 2013-06-24 12:10:36 +0000
From: @benbullard79
Yeah, th e seg fault problem is fixed but XFdrake darn sure ain't working correctly on this computer. I'll file a new bug report when I have more time.
## Comment 46
Date: 2013-06-24 12:50:19 +0000
From: @itchka
Ben, I've tested XFdrake on hardware and on virtual machines and it works ok for me.
Must be something odd with your hardware. Are you trying to use proprietry drivers?
Probably best to file a new bug
## Comment 50
Date: 2013-06-24 13:26:29 +0000
From: @benbullard79
Yes Colin I believe you are correct. It appears to be hardware related. To begin with OM 2013 LX does not have a version of nvidia driver that will work with my hardware. And that is a bit odd as any other distro I use does so quite easily. So yes new bug report when I have time to do so properly and diligently.
## Comment 100
Date: 2013-06-25 22:26:43 +0000
From: @robxu9
[this is a platform change to ensure correctness. if the platform is not correct after this change, please fix it. possible valid values are armv7hl_tgr2, armv7nhl_omp3, i586, x86_64]
| 1.0 | mcc -> XFdrake error (Bugzilla Bug 9) - This issue was created automatically with bugzilla2github
# Bugzilla Bug 9
Date: 2013-06-22 22:06:04 +0000
From: luca <<agoiza@gmail.com>>
To: OpenMandriva QA <<bugs@openmandriva.org>>
CC: @benbullard79, @christann404, @itchka, @robxu9
Last updated: 2013-06-25 22:26:43 +0000
## Comment 24
Date: 2013-06-22 22:06:04 +0000
From: luca <<agoiza@gmail.com>>
imported vbox image(good job!) and updated
when i try to change X server config in mcc:
============================================================================
il programma "XFdrake" si è interrotto(SEGV) per questo errore:
SEGV
Glibc's trace:
4: /usr/lib/libperl.so.5.16(Perl_sighandler+0x1e2) [0xb7614e72]
5: /usr/lib/libperl.so.5.16(Perl_csighandler+0x99) [0xb76106b9]
6: [0xffffe400]
7: /lib/libstdc++.so.6(_ZNSsC1ERKSs+0x1a) [0xb6dfa96a]
8: /usr/lib/libldetect.so.0.13(_ZN7ldetect3dmi5probeEv+0x969) [0xb6e3bc9d]
9: /usr/lib/perl5/vendor_perl/5.16.3/i386-linux-thread-multi/auto/LDetect/LDetect.so(+0x1
c25) [0xb6e5cc25]
10: /usr/lib/libperl.so.5.16(Perl_pp_entersub+0x602) [0xb762eca2]
11: /usr/lib/libperl.so.5.16(Perl_runops_standard+0x18) [0xb7626b78]
12: /usr/lib/libperl.so.5.16(perl_run+0x2d9) [0xb75ba0a9]
13: /usr/bin/perl() [0x8048a03]
14: /lib/i686/libc.so.6(__libc_start_main+0xf5) [0xb73cfa85]
15: /usr/bin/perl() [0x8048a35]
Perl's trace:
standalone::bug_handler() called from /usr/lib/libDrakX/standalone.pm:225
standalone::__ANON__() called from /usr/lib/libDrakX/detect_devices.pm:971
(eval)() called from /usr/lib/libDrakX/detect_devices.pm:971
detect_devices::dmi_probe() called from /usr/lib/libDrakX/detect_devices.pm:981
detect_devices::probeall() called from /usr/lib/libDrakX/detect_devices.pm:442
detect_devices::probe_category() called from /usr/lib/libDrakX/mouse.pm:293
mouse::detect() called from /usr/lib/libDrakX/Xconfig/default.pm:18
Xconfig::default::configure() called from /usr/lib/libDrakX/Xconfig/main.pm:189
Xconfig::main::configure_everything_or_configure_chooser() called from /usr/sbin/XFdrake:
48
============================================================================
besides mcc try to connect with bugzilla Mandriva and not with Bugzilla OpenMandriva
Luca.
## Comment 25
Date: 2013-06-22 22:16:29 +0000
From: @robxu9
(In reply to comment #0)
> imported vbox image(good job!) and updated
>
> when i try to change X server config in mcc:
>
> ============================================================================
> il programma "XFdrake" si è interrotto(SEGV) per questo errore:
>
> SEGV
> Glibc's trace:
> 4: /usr/lib/libperl.so.5.16(Perl_sighandler+0x1e2) [0xb7614e72]
> 5: /usr/lib/libperl.so.5.16(Perl_csighandler+0x99) [0xb76106b9]
> 6: [0xffffe400]
> 7: /lib/libstdc++.so.6(_ZNSsC1ERKSs+0x1a) [0xb6dfa96a]
> 8: /usr/lib/libldetect.so.0.13(_ZN7ldetect3dmi5probeEv+0x969) [0xb6e3bc9d]
> 9:
> /usr/lib/perl5/vendor_perl/5.16.3/i386-linux-thread-multi/auto/LDetect/
> LDetect.so(+0x1
> c25) [0xb6e5cc25]
> 10: /usr/lib/libperl.so.5.16(Perl_pp_entersub+0x602) [0xb762eca2]
> 11: /usr/lib/libperl.so.5.16(Perl_runops_standard+0x18) [0xb7626b78]
> 12: /usr/lib/libperl.so.5.16(perl_run+0x2d9) [0xb75ba0a9]
> 13: /usr/bin/perl() [0x8048a03]
> 14: /lib/i686/libc.so.6(__libc_start_main+0xf5) [0xb73cfa85]
> 15: /usr/bin/perl() [0x8048a35]
>
> Perl's trace:
> standalone::bug_handler() called from /usr/lib/libDrakX/standalone.pm:225
> standalone::__ANON__() called from /usr/lib/libDrakX/detect_devices.pm:971
> (eval)() called from /usr/lib/libDrakX/detect_devices.pm:971
> detect_devices::dmi_probe() called from
> /usr/lib/libDrakX/detect_devices.pm:981
> detect_devices::probeall() called from
> /usr/lib/libDrakX/detect_devices.pm:442
> detect_devices::probe_category() called from /usr/lib/libDrakX/mouse.pm:293
> mouse::detect() called from /usr/lib/libDrakX/Xconfig/default.pm:18
> Xconfig::default::configure() called from
> /usr/lib/libDrakX/Xconfig/main.pm:189
> Xconfig::main::configure_everything_or_configure_chooser() called from
> /usr/sbin/XFdrake:
> 48
> ============================================================================
>
This might be due to the recent perl rebuild - can you try updating the system again and see if this happens?
>
> besides mcc try to connect with bugzilla Mandriva and not with Bugzilla
> OpenMandriva
This has been fixed and should be pushed soon.
## Comment 26
Date: 2013-06-22 22:40:33 +0000
From: luca <<agoiza@gmail.com>>
i had the error with latest packages.
## Comment 27
Date: 2013-06-23 07:09:25 +0000
From: @benbullard79
I'm seeing something similar in a fully updated OM LX 2013 x86_64 system:
The "XFdrake" program has segfaulted with the following error:
SEGV
Glibc's trace:
4: /usr/lib64/libperl.so.5.16(Perl_sighandler+0x1dd) [0x7fb40cb9a05d]
5: /lib64/libc.so.6(+0x35ea0) [0x7fb40c6f5ea0]
6: /lib64/libstdc++.so.6(_ZNSsC1ERKSs+0x8) [0x7fb40a50e908]
7: /usr/lib64/libldetect.so.0.13(_ZN7ldetect3dmi5probeEv+0x99a) [0x7fb40a5627d6]
8: /usr/lib/perl5/vendor_perl/5.16.3/x86_64-linux-thread-multi/auto/LDetect/LDetect.so(+0x232a) [0x7fb40caa232a]
9: /usr/lib64/libperl.so.5.16(Perl_pp_entersub+0x5ea) [0x7fb40cbb2d6a]
10: /usr/lib64/libperl.so.5.16(Perl_runops_standard+0x16) [0x7fb40cbab426]
11: /usr/lib64/libperl.so.5.16(Perl_call_sv+0x49b) [0x7fb40cb413ab]
12: /usr/lib/perl5/vendor_perl/5.16.3/x86_64-linux-thread-multi/auto/Glib/Glib.so(+0x23b6d) [0x7fb409ffeb6d]
13: /lib64/libgobject-2.0.so.0(g_closure_invoke+0x190) [0x7fb409f9c0b0]
14: /lib64/libgobject-2.0.so.0(+0x21990) [0x7fb409fad990]
15: /lib64/libgobject-2.0.so.0(g_signal_emit_valist+0xe81) [0x7fb409fb57a1]
16: /lib64/libgobject-2.0.so.0(g_signal_emit+0x82) [0x7fb409fb5a22]
17: /usr/lib64/libgtk-x11-2.0.so.0(+0x8e665) [0x7fb409186665]
18: /lib64/libgobject-2.0.so.0(+0x10377) [0x7fb409f9c377]
19: /lib64/libgobject-2.0.so.0(g_signal_emit_valist+0x44f) [0x7fb409fb4d6f]
Perl's trace:
standalone::bug_handler() called from /usr/lib/libDrakX/standalone.pm:225
standalone::__ANON__() called from /usr/lib/libDrakX/detect_devices.pm:971
(eval)() called from /usr/lib/libDrakX/detect_devices.pm:971
detect_devices::dmi_probe() called from /usr/lib/libDrakX/detect_devices.pm:981
detect_devices::probeall() called from /usr/lib/libDrakX/detect_devices.pm:993
detect_devices::matching_driver__regexp() called from /usr/lib/libDrakX/Xconfig/card.pm:93
Xconfig::card::probe() called from /usr/lib/libDrakX/Xconfig/card.pm:278
Xconfig::card::configure() called from /usr/lib/libDrakX/Xconfig/main.pm:131
Xconfig::main::__ANON__() called from /usr/lib/libDrakX/interactive.pm:431
interactive::__ANON__() called from /usr/lib/libDrakX/interactive/gtk.pm:725
interactive::gtk::__ANON__() called from /usr/lib/libDrakX/mygtk2.pm:1443
(eval)() called from /usr/lib/libDrakX/mygtk2.pm:1443
mygtk2::main() called from /usr/lib/libDrakX/ugtk2.pm:767
ugtk2::main() called from /usr/lib/libDrakX/interactive/gtk.pm:900
interactive::gtk::ask_fromW() called from /usr/lib/libDrakX/interactive.pm:534
interactive::ask_from_real() called from /usr/lib/libDrakX/interactive.pm:522
interactive::ask_from_() called from /usr/lib/libDrakX/Xconfig/main.pm:152
Xconfig::main::configure_chooser_raw() called from /usr/lib/libDrakX/Xconfig/main.pm:165
Xconfig::main::configure_chooser() called from /usr/lib/libDrakX/Xconfig/main.pm:188
Xconfig::main::configure_everything_or_configure_chooser() called from /usr/sbin/XFdrake:48
Used theme: elementary
To submit a bug report, click on the report button.
This will open a web browser window on Bugzilla where you'll find a form to fill in.
The information displayed above will be transferred to that server
It would be very useful to attach to your report the output of the following command: 'lspcidrake -v'.
Trying to run XFdrake to install driver for:
$ lspcidrake -v
...
nvidia_current,nvidia173,nvidia71xx,nvidia96xx,nouveau:
NVIDIA Corporation|C51PV [GeForce 6150] [DISPLAY_VGA]
(vendor:10de device:0240 subv:1043 subd:81cd) (rev: a2)
...
FWIW I don't think any of the listed drivers will work with
this hardware. 'nouveau' and 'nv' do not work on this box. It is going to need nvidia 304.88 which we don't have in openMandriva LX 2013. That is what is used and works in ROSA Fresh, Fedora 18, and openSuSE 12.3. Also that's what nVidia website says:
http://www.nvidia.com/object/linux-display-amd64-304.88-driver.html
Neither 319.23 nor 310.xx drivers support GeForce 6 products (including GeForce 6150). I don't see a 310.44 even in their archives.
The 304.88 is the latest driver I know of that works with this hardware.
The 304.88 driver from nVidia website will work in Cooker on this box.
## Comment 31
Date: 2013-06-23 08:42:23 +0000
From: @benbullard79
Luca could you post output of 'lspcidrake -v'? I'm wondering if this issue may be related to specific hardware?
Also when does this happen for you. For me XFdrake opens then seg faults when I select 'Graphic Card' hence my wondering if hardware related as this is the hardware detection step.
## Comment 32
Date: 2013-06-23 08:58:57 +0000
From: luca <<agoiza@gmail.com>>
Hi Ben,
i'm testing Oma in Vbox. i'm using a prepared vbox image.
=====================================================================
lspcidrake -v
unknown : Intel Corporation|82801HM/HEM (ICH8M/ICH8M-E) SATA Controller [AHCI mode] [STORAGE_SATA] (vendor:8086 device:2829) (rev: 02)
i2c_piix4 : Intel Corporation|82371AB/EB/MB PIIX4 ACPI [BRIDGE_OTHER] (vendor:8086 device:7113) (rev: 08)
ohci_hcd : Apple Inc.|KeyLargo/Intrepid USB [SERIAL_USB] (vendor:106b device:003f)
snd_intel8x0 : Intel Corporation|82801AA AC'97 Audio Controller [MULTIMEDIA_AUDIO] (vendor:8086 device:2415 subv:8086 subd:0000) (rev: 01)
vboxadd,vboxguest: InnoTek Systemberatung GmbH|VirtualBox Guest Service [SYSTEM_OTHER] (vendor:80ee device:cafe)
e1000 : Intel Corporation|82540EM Gigabit Ethernet Controller [NETWORK_ETHERNET] (vendor:8086 device:100e subv:8086 subd:001e) (rev: 02)
Card:VirtualBox virtual video card: InnoTek Systemberatung GmbH|VirtualBox Graphics Adapter [DISPLAY_VGA] (vendor:80ee device:beef)
ata_piix,pata_acpi,ata_generic,piix,ide_pci_generic: Intel Corporation|82371AB/EB/MB PIIX4 IDE [STORAGE_IDE] (vendor:8086 device:7111) (rev: 01)
unknown : Intel Corporation|82371SB PIIX3 ISA [Natoma/Triton II] [BRIDGE_ISA] (vendor:8086 device:7000)
unknown : Intel Corporation|440FX - 82441FX PMC [Natoma] [BRIDGE_HOST] (vendor:8086 device:1237) (rev: 02)
xpad,usbhid : VirtualBox|USB Tablet [Human Interface Device|No Subclass|None]
usbcore : Linux Foundation|1.1 root hub [Hub|Unused|Full speed (or root) hub]
Errore di segmentazione (core dumped)
==========================================================================
:-(
XFdrake doesn't open. i have the same problem if mcc -> harddrake2 or mcc -> draksound
## Comment 33
Date: 2013-06-23 09:33:14 +0000
From: @benbullard79
OK. As far as the graphics issue then this is over my head.
Regarding MCC do you mean you get seg fault when opening that or running from cli mcc or drakconf? Because I don't have issuses with those.
## Comment 34
Date: 2013-06-23 11:26:02 +0000
From: @christann404
Hello,
I am seeing the same behavior, but I am also running on a Virtual Box. Running drakconf, print config also crashes.
## Comment 43
Date: 2013-06-24 09:49:22 +0000
From: @itchka
Updated both my installations on Sunday evening. Update included ldetect.so and some perl ldetect modules. Everything seems to be working now.
I'm marking this bug as resolved.
## Comment 44
Date: 2013-06-24 12:10:11 +0000
From: @benbullard79
Yeah, th e seg fault problem is fixed but XFdrake darn sure ain't working correctly on this computer. I'll file a new bug report when I have more time.
OT: Why ain't spell checking working here?
## Comment 45
Date: 2013-06-24 12:10:36 +0000
From: @benbullard79
Yeah, th e seg fault problem is fixed but XFdrake darn sure ain't working correctly on this computer. I'll file a new bug report when I have more time.
## Comment 46
Date: 2013-06-24 12:50:19 +0000
From: @itchka
Ben, I've tested XFdrake on hardware and on virtual machines and it works ok for me.
Must be something odd with your hardware. Are you trying to use proprietry drivers?
Probably best to file a new bug
## Comment 50
Date: 2013-06-24 13:26:29 +0000
From: @benbullard79
Yes Colin I believe you are correct. It appears to be hardware related. To begin with OM 2013 LX does not have a version of nvidia driver that will work with my hardware. And that is a bit odd as any other distro I use does so quite easily. So yes new bug report when I have time to do so properly and diligently.
## Comment 100
Date: 2013-06-25 22:26:43 +0000
From: @robxu9
[this is a platform change to ensure correctness. if the platform is not correct after this change, please fix it. possible valid values are armv7hl_tgr2, armv7nhl_omp3, i586, x86_64]
| priority | mcc xfdrake error bugzilla bug this issue was created automatically with bugzilla bug date from luca lt gt to openmandriva qa lt gt cc itchka last updated comment date from luca lt gt imported vbox image good job and updated when i try to change x server config in mcc il programma xfdrake si è interrotto segv per questo errore segv glibc s trace usr lib libperl so perl sighandler usr lib libperl so perl csighandler lib libstdc so usr lib libldetect so usr lib vendor perl linux thread multi auto ldetect ldetect so usr lib libperl so perl pp entersub usr lib libperl so perl runops standard usr lib libperl so perl run usr bin perl lib libc so libc start main usr bin perl perl s trace standalone bug handler called from usr lib libdrakx standalone pm standalone anon called from usr lib libdrakx detect devices pm eval called from usr lib libdrakx detect devices pm detect devices dmi probe called from usr lib libdrakx detect devices pm detect devices probeall called from usr lib libdrakx detect devices pm detect devices probe category called from usr lib libdrakx mouse pm mouse detect called from usr lib libdrakx xconfig default pm xconfig default configure called from usr lib libdrakx xconfig main pm xconfig main configure everything or configure chooser called from usr sbin xfdrake besides mcc try to connect with bugzilla mandriva and not with bugzilla openmandriva luca comment date from in reply to comment imported vbox image good job and updated when i try to change x server config in mcc il programma xfdrake si è interrotto segv per questo errore segv glibc s trace usr lib libperl so perl sighandler usr lib libperl so perl csighandler lib libstdc so usr lib libldetect so usr lib vendor perl linux thread multi auto ldetect ldetect so usr lib libperl so perl pp entersub usr lib libperl so perl runops standard usr lib libperl so perl run usr bin perl lib libc so libc start main usr bin perl perl s trace standalone bug handler called from usr lib libdrakx standalone pm standalone anon called from usr lib libdrakx detect devices pm eval called from usr lib libdrakx detect devices pm detect devices dmi probe called from usr lib libdrakx detect devices pm detect devices probeall called from usr lib libdrakx detect devices pm detect devices probe category called from usr lib libdrakx mouse pm mouse detect called from usr lib libdrakx xconfig default pm xconfig default configure called from usr lib libdrakx xconfig main pm xconfig main configure everything or configure chooser called from usr sbin xfdrake this might be due to the recent perl rebuild can you try updating the system again and see if this happens besides mcc try to connect with bugzilla mandriva and not with bugzilla openmandriva this has been fixed and should be pushed soon comment date from luca lt gt i had the error with latest packages comment date from i m seeing something similar in a fully updated om lx system the xfdrake program has segfaulted with the following error segv glibc s trace usr libperl so perl sighandler libc so libstdc so usr libldetect so usr lib vendor perl linux thread multi auto ldetect ldetect so usr libperl so perl pp entersub usr libperl so perl runops standard usr libperl so perl call sv usr lib vendor perl linux thread multi auto glib glib so libgobject so g closure invoke libgobject so libgobject so g signal emit valist libgobject so g signal emit usr libgtk so libgobject so libgobject so g signal emit valist perl s trace standalone bug handler called from usr lib libdrakx standalone pm standalone anon called from usr lib libdrakx detect devices pm eval called from usr lib libdrakx detect devices pm detect devices dmi probe called from usr lib libdrakx detect devices pm detect devices probeall called from usr lib libdrakx detect devices pm detect devices matching driver regexp called from usr lib libdrakx xconfig card pm xconfig card probe called from usr lib libdrakx xconfig card pm xconfig card configure called from usr lib libdrakx xconfig main pm xconfig main anon called from usr lib libdrakx interactive pm interactive anon called from usr lib libdrakx interactive gtk pm interactive gtk anon called from usr lib libdrakx pm eval called from usr lib libdrakx pm main called from usr lib libdrakx pm main called from usr lib libdrakx interactive gtk pm interactive gtk ask fromw called from usr lib libdrakx interactive pm interactive ask from real called from usr lib libdrakx interactive pm interactive ask from called from usr lib libdrakx xconfig main pm xconfig main configure chooser raw called from usr lib libdrakx xconfig main pm xconfig main configure chooser called from usr lib libdrakx xconfig main pm xconfig main configure everything or configure chooser called from usr sbin xfdrake used theme elementary to submit a bug report click on the report button this will open a web browser window on bugzilla where you ll find a form to fill in the information displayed above will be transferred to that server it would be very useful to attach to your report the output of the following command lspcidrake v trying to run xfdrake to install driver for lspcidrake v nvidia current nouveau nvidia corporation vendor device subv subd rev fwiw i don t think any of the listed drivers will work with this hardware nouveau and nv do not work on this box it is going to need nvidia which we don t have in openmandriva lx that is what is used and works in rosa fresh fedora and opensuse also that s what nvidia website says neither nor xx drivers support geforce products including geforce i don t see a even in their archives the is the latest driver i know of that works with this hardware the driver from nvidia website will work in cooker on this box comment date from luca could you post output of lspcidrake v i m wondering if this issue may be related to specific hardware also when does this happen for you for me xfdrake opens then seg faults when i select graphic card hence my wondering if hardware related as this is the hardware detection step comment date from luca lt gt hi ben i m testing oma in vbox i m using a prepared vbox image lspcidrake v unknown intel corporation hem e sata controller vendor device rev intel corporation eb mb acpi vendor device rev ohci hcd apple inc keylargo intrepid usb vendor device snd intel corporation ac audio controller vendor device subv subd rev vboxadd vboxguest innotek systemberatung gmbh virtualbox guest service vendor device cafe intel corporation gigabit ethernet controller vendor device subv subd rev card virtualbox virtual video card innotek systemberatung gmbh virtualbox graphics adapter vendor device beef ata piix pata acpi ata generic piix ide pci generic intel corporation eb mb ide vendor device rev unknown intel corporation isa vendor device unknown intel corporation pmc vendor device rev xpad usbhid virtualbox usb tablet usbcore linux foundation root hub errore di segmentazione core dumped xfdrake doesn t open i have the same problem if mcc or mcc draksound comment date from ok as far as the graphics issue then this is over my head regarding mcc do you mean you get seg fault when opening that or running from cli mcc or drakconf because i don t have issuses with those comment date from hello i am seeing the same behavior but i am also running on a virtual box running drakconf print config also crashes comment date from itchka updated both my installations on sunday evening update included ldetect so and some perl ldetect modules everything seems to be working now i m marking this bug as resolved comment date from yeah th e seg fault problem is fixed but xfdrake darn sure ain t working correctly on this computer i ll file a new bug report when i have more time ot why ain t spell checking working here comment date from yeah th e seg fault problem is fixed but xfdrake darn sure ain t working correctly on this computer i ll file a new bug report when i have more time comment date from itchka ben i ve tested xfdrake on hardware and on virtual machines and it works ok for me must be something odd with your hardware are you trying to use proprietry drivers probably best to file a new bug comment date from yes colin i believe you are correct it appears to be hardware related to begin with om lx does not have a version of nvidia driver that will work with my hardware and that is a bit odd as any other distro i use does so quite easily so yes new bug report when i have time to do so properly and diligently comment date from | 1 |
216,776 | 7,311,502,214 | IssuesEvent | 2018-02-28 17:57:31 | IUNetSci/hoaxy-botometer | https://api.github.com/repos/IUNetSci/hoaxy-botometer | closed | Change styling of dashboard and search | High Priority | - [x] Make the column titles be: trending news, popular claims, popular fact checks
- [x] Order the search before the dashboard
- [x] (Doesn't seem necessary to do) Put more padding between the search and the dashboard
- [x] Make text in dashboard more compact: create zero spacing between title and source amongst other things
- [x] (Tried and looks very ugly/distracting) Use monospace url, and use more distinguishing and attractive font styles
- [x] Make the column title stand out from the dashboard table content
- [x] Left justify header title
- [x] Get rid of the blue row color to only contain white color
- [x] Remove border from columns | 1.0 | Change styling of dashboard and search - - [x] Make the column titles be: trending news, popular claims, popular fact checks
- [x] Order the search before the dashboard
- [x] (Doesn't seem necessary to do) Put more padding between the search and the dashboard
- [x] Make text in dashboard more compact: create zero spacing between title and source amongst other things
- [x] (Tried and looks very ugly/distracting) Use monospace url, and use more distinguishing and attractive font styles
- [x] Make the column title stand out from the dashboard table content
- [x] Left justify header title
- [x] Get rid of the blue row color to only contain white color
- [x] Remove border from columns | priority | change styling of dashboard and search make the column titles be trending news popular claims popular fact checks order the search before the dashboard doesn t seem necessary to do put more padding between the search and the dashboard make text in dashboard more compact create zero spacing between title and source amongst other things tried and looks very ugly distracting use monospace url and use more distinguishing and attractive font styles make the column title stand out from the dashboard table content left justify header title get rid of the blue row color to only contain white color remove border from columns | 1 |
227,946 | 7,544,620,921 | IssuesEvent | 2018-04-17 19:00:22 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | USER ISSUE: Crafting stations, storages, carts and furniture disappears | High Priority | **Version:** 0.7.3.0 beta
**Steps to Reproduce:**
My crafting stations, storages, stockpiles, wood carts and furniture disappears after login. It was during mid game and only in a certain area like a special section.
All new added objects like crafting stations, storages, stockpiles, wood carts and furnitures will disappear. Also quarried raw materials like coal, iron, copper, gold and stone will disappear after using a pick axe.
dirt, sand, tailings, wood or building materials will NOT disappear in this area.
Anyhow, the vanished objects will be shown on the map allready. But they are not focusable or able to interact with.
The crafting processes doestn't stop. Also the storages and the items saved in the storages will be available over an indirect way through a not vanished storage outside this area.
The game is pretty much not playable since this problem occurs.
**Expected behavior:**
Crafting stations, storages, carts and furniture are visible and interactable.
**Actual behavior:**
Object are invisible, vanished or not really reachable.
**Attachments:**


| 1.0 | USER ISSUE: Crafting stations, storages, carts and furniture disappears - **Version:** 0.7.3.0 beta
**Steps to Reproduce:**
My crafting stations, storages, stockpiles, wood carts and furniture disappears after login. It was during mid game and only in a certain area like a special section.
All new added objects like crafting stations, storages, stockpiles, wood carts and furnitures will disappear. Also quarried raw materials like coal, iron, copper, gold and stone will disappear after using a pick axe.
dirt, sand, tailings, wood or building materials will NOT disappear in this area.
Anyhow, the vanished objects will be shown on the map allready. But they are not focusable or able to interact with.
The crafting processes doestn't stop. Also the storages and the items saved in the storages will be available over an indirect way through a not vanished storage outside this area.
The game is pretty much not playable since this problem occurs.
**Expected behavior:**
Crafting stations, storages, carts and furniture are visible and interactable.
**Actual behavior:**
Object are invisible, vanished or not really reachable.
**Attachments:**


| priority | user issue crafting stations storages carts and furniture disappears version beta steps to reproduce my crafting stations storages stockpiles wood carts and furniture disappears after login it was during mid game and only in a certain area like a special section all new added objects like crafting stations storages stockpiles wood carts and furnitures will disappear also quarried raw materials like coal iron copper gold and stone will disappear after using a pick axe dirt sand tailings wood or building materials will not disappear in this area anyhow the vanished objects will be shown on the map allready but they are not focusable or able to interact with the crafting processes doestn t stop also the storages and the items saved in the storages will be available over an indirect way through a not vanished storage outside this area the game is pretty much not playable since this problem occurs expected behavior crafting stations storages carts and furniture are visible and interactable actual behavior object are invisible vanished or not really reachable attachments | 1 |
308,533 | 9,440,304,859 | IssuesEvent | 2019-04-14 16:54:53 | cs2113-ay1819s2-m11-3/main | https://api.github.com/repos/cs2113-ay1819s2-m11-3/main | closed | As a user, I want the system to remove unnecessary flash cards automatically. | priority.High type.Enhancement | - The system should detect the deadline assigned to the flash cards and remove it automatically. | 1.0 | As a user, I want the system to remove unnecessary flash cards automatically. - - The system should detect the deadline assigned to the flash cards and remove it automatically. | priority | as a user i want the system to remove unnecessary flash cards automatically the system should detect the deadline assigned to the flash cards and remove it automatically | 1 |
552,228 | 16,218,744,136 | IssuesEvent | 2021-05-06 01:06:02 | bookwyrm-social/bookwyrm | https://api.github.com/repos/bookwyrm-social/bookwyrm | opened | Date picker breaks start/finish reading modals on mobile | UI bug high priority | The modal is invisibly covering the submit buttons, and never displays | 1.0 | Date picker breaks start/finish reading modals on mobile - The modal is invisibly covering the submit buttons, and never displays | priority | date picker breaks start finish reading modals on mobile the modal is invisibly covering the submit buttons and never displays | 1 |
541,715 | 15,832,275,986 | IssuesEvent | 2021-04-06 14:27:49 | owncloud/web | https://api.github.com/repos/owncloud/web | closed | Runtime Theming | Category:Enhancement Estimation:L(5) Priority:p2-high | # Requirements
## Themable topics
- Logo on login page
- Logo on top bar
- Primary color
- Secondary color
- Background image / color on login page
- Icon colors? => requires SVG
- Favicon
- Strings
- Title
- Entity
- Base URL
- Slogan
- Logo claim
- Help URL
- Imprint URL
- Privacy Poliy URL
- Branded client download URL
- Help text on public link page
## Requirements
- should work during runtime dynamically
- can be switched on/off
- maybe switching between multiple available themes / theme sets within a theme? (to discuss; e.g., dark/bright mode switchable by users)
- defined set of themable parameters (see above)
- can be extended with additional parameters in future versions
- theme is configurable via WebUI => will need a Phoenix app for that https://github.com/owncloud/enterprise/issues/2187
- reuse branding parameters (e.g naming of parameters) across client platforms to make it easier for admins => e.g., same color schemes on all platform => semantic, platform-independent naming
- Mobile support
- SVG and PNG support for images
| 1.0 | Runtime Theming - # Requirements
## Themable topics
- Logo on login page
- Logo on top bar
- Primary color
- Secondary color
- Background image / color on login page
- Icon colors? => requires SVG
- Favicon
- Strings
- Title
- Entity
- Base URL
- Slogan
- Logo claim
- Help URL
- Imprint URL
- Privacy Poliy URL
- Branded client download URL
- Help text on public link page
## Requirements
- should work during runtime dynamically
- can be switched on/off
- maybe switching between multiple available themes / theme sets within a theme? (to discuss; e.g., dark/bright mode switchable by users)
- defined set of themable parameters (see above)
- can be extended with additional parameters in future versions
- theme is configurable via WebUI => will need a Phoenix app for that https://github.com/owncloud/enterprise/issues/2187
- reuse branding parameters (e.g naming of parameters) across client platforms to make it easier for admins => e.g., same color schemes on all platform => semantic, platform-independent naming
- Mobile support
- SVG and PNG support for images
| priority | runtime theming requirements themable topics logo on login page logo on top bar primary color secondary color background image color on login page icon colors requires svg favicon strings title entity base url slogan logo claim help url imprint url privacy poliy url branded client download url help text on public link page requirements should work during runtime dynamically can be switched on off maybe switching between multiple available themes theme sets within a theme to discuss e g dark bright mode switchable by users defined set of themable parameters see above can be extended with additional parameters in future versions theme is configurable via webui will need a phoenix app for that reuse branding parameters e g naming of parameters across client platforms to make it easier for admins e g same color schemes on all platform semantic platform independent naming mobile support svg and png support for images | 1 |
355,231 | 10,577,778,871 | IssuesEvent | 2019-10-07 20:53:12 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | closed | [jit] training attribute broken in recursive script | high priority jit triaged | ## 🐛 Bug
the model was scripted in eval mode and then saved with torch.jit.save.
torch.jit.load loads the model in training mode.
this is a problem, because in training mode dropouts are enabled, and also batchnorm needs to be adjusted. we expected torch.jit.load to load the model in the mode it was saved in. please fix this.
## To Reproduce
**Steps to reproduce the behavior:**
```
tamas@super-duper-compute-machine:~/JUPYTER_LAB$ ipython
Python 3.7.3 (default, Mar 27 2019, 22:11:17)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.6.1 -- An enhanced Interactive Python. Type '?' for help.
In [1]: import torch
In [2]: torch.__version__
Out[2]: '1.2.0'
In [3]: class MyCell(torch.nn.Module):
...: def __init__(self):
...: super(MyCell, self).__init__()
...: self.linear = torch.nn.Linear(4, 4)
...:
...: def forward(self, x, h):
...: new_h = torch.tanh(self.linear(x) + h)
...: return new_h, new_h
...:
...: model = MyCell()
...: x, h = torch.rand(3, 4), torch.rand(3, 4)
...: scripted_model = torch.jit.script(model, (x, h))
/home/tamas/anaconda3/envs/cuda/lib/python3.7/site-packages/torch/jit/__init__.py:1158: UserWarning: `optimize` is deprecated and has no effect. Use `with torch.jit.optimized_execution() instead
warnings.warn("`optimize` is deprecated and has no effect. Use `with torch.jit.optimized_execution() instead")
In [4]: scripted_model.code
Out[4]: 'def forward(self,\n x: Tensor,\n h: Tensor) -> Tuple[Tensor, Tensor]:\n _0 = self.linear\n _1 = _0.weight\n _2 = _0.bias\n if torch.eq(torch.dim(x), 2):\n _3 = torch.__isnot__(_2, None)\n else:\n _3 = False\n if _3:\n bias = ops.prim.unchecked_unwrap_optional(_2)\n ret = torch.addmm(bias, x, torch.t(_1), beta=1, alpha=1)\n else:\n output = torch.matmul(x, torch.t(_1))\n if torch.__isnot__(_2, None):\n bias0 = ops.prim.unchecked_unwrap_optional(_2)\n output0 = torch.add_(output, bias0, alpha=1)\n else:\n output0 = output\n ret = output0\n new_h = torch.tanh(torch.add(ret, h, alpha=1))\n return (new_h, new_h)\n'
In [5]: scripted_model.training
Out[5]: True
In [6]: scripted_model.eval()
Out[6]:
WeakScriptModuleProxy(
(linear): WeakScriptModuleProxy()
)
In [7]: scripted_model.training
Out[7]: False
In [8]: torch.jit.save(scripted_model, 'temp.pt')
In [9]: m = torch.jit.load('temp.pt')
In [10]: m.training
Out[10]: True
```
cc @ezyang @gchanan @zou3519 @suo | 1.0 | [jit] training attribute broken in recursive script - ## 🐛 Bug
the model was scripted in eval mode and then saved with torch.jit.save.
torch.jit.load loads the model in training mode.
this is a problem, because in training mode dropouts are enabled, and also batchnorm needs to be adjusted. we expected torch.jit.load to load the model in the mode it was saved in. please fix this.
## To Reproduce
**Steps to reproduce the behavior:**
```
tamas@super-duper-compute-machine:~/JUPYTER_LAB$ ipython
Python 3.7.3 (default, Mar 27 2019, 22:11:17)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.6.1 -- An enhanced Interactive Python. Type '?' for help.
In [1]: import torch
In [2]: torch.__version__
Out[2]: '1.2.0'
In [3]: class MyCell(torch.nn.Module):
...: def __init__(self):
...: super(MyCell, self).__init__()
...: self.linear = torch.nn.Linear(4, 4)
...:
...: def forward(self, x, h):
...: new_h = torch.tanh(self.linear(x) + h)
...: return new_h, new_h
...:
...: model = MyCell()
...: x, h = torch.rand(3, 4), torch.rand(3, 4)
...: scripted_model = torch.jit.script(model, (x, h))
/home/tamas/anaconda3/envs/cuda/lib/python3.7/site-packages/torch/jit/__init__.py:1158: UserWarning: `optimize` is deprecated and has no effect. Use `with torch.jit.optimized_execution() instead
warnings.warn("`optimize` is deprecated and has no effect. Use `with torch.jit.optimized_execution() instead")
In [4]: scripted_model.code
Out[4]: 'def forward(self,\n x: Tensor,\n h: Tensor) -> Tuple[Tensor, Tensor]:\n _0 = self.linear\n _1 = _0.weight\n _2 = _0.bias\n if torch.eq(torch.dim(x), 2):\n _3 = torch.__isnot__(_2, None)\n else:\n _3 = False\n if _3:\n bias = ops.prim.unchecked_unwrap_optional(_2)\n ret = torch.addmm(bias, x, torch.t(_1), beta=1, alpha=1)\n else:\n output = torch.matmul(x, torch.t(_1))\n if torch.__isnot__(_2, None):\n bias0 = ops.prim.unchecked_unwrap_optional(_2)\n output0 = torch.add_(output, bias0, alpha=1)\n else:\n output0 = output\n ret = output0\n new_h = torch.tanh(torch.add(ret, h, alpha=1))\n return (new_h, new_h)\n'
In [5]: scripted_model.training
Out[5]: True
In [6]: scripted_model.eval()
Out[6]:
WeakScriptModuleProxy(
(linear): WeakScriptModuleProxy()
)
In [7]: scripted_model.training
Out[7]: False
In [8]: torch.jit.save(scripted_model, 'temp.pt')
In [9]: m = torch.jit.load('temp.pt')
In [10]: m.training
Out[10]: True
```
cc @ezyang @gchanan @zou3519 @suo | priority | training attribute broken in recursive script 🐛 bug the model was scripted in eval mode and then saved with torch jit save torch jit load loads the model in training mode this is a problem because in training mode dropouts are enabled and also batchnorm needs to be adjusted we expected torch jit load to load the model in the mode it was saved in please fix this to reproduce steps to reproduce the behavior tamas super duper compute machine jupyter lab ipython python default mar type copyright credits or license for more information ipython an enhanced interactive python type for help in import torch in torch version out in class mycell torch nn module def init self super mycell self init self linear torch nn linear def forward self x h new h torch tanh self linear x h return new h new h model mycell x h torch rand torch rand scripted model torch jit script model x h home tamas envs cuda lib site packages torch jit init py userwarning optimize is deprecated and has no effect use with torch jit optimized execution instead warnings warn optimize is deprecated and has no effect use with torch jit optimized execution instead in scripted model code out def forward self n x tensor n h tensor tuple n self linear n weight n bias n if torch eq torch dim x n torch isnot none n else n false n if n bias ops prim unchecked unwrap optional n ret torch addmm bias x torch t beta alpha n else n output torch matmul x torch t n if torch isnot none n ops prim unchecked unwrap optional n torch add output alpha n else n output n ret n new h torch tanh torch add ret h alpha n return new h new h n in scripted model training out true in scripted model eval out weakscriptmoduleproxy linear weakscriptmoduleproxy in scripted model training out false in torch jit save scripted model temp pt in m torch jit load temp pt in m training out true cc ezyang gchanan suo | 1 |
823,787 | 31,033,704,465 | IssuesEvent | 2023-08-10 14:00:45 | freeorion/freeorion | https://api.github.com/repos/freeorion/freeorion | closed | Compile error | category:bug priority:high | Current master does not compile in my enviroment (g++ (Debian 10.2.1-6) 10.2.1 20210110)
```
[ 0%] Building CXX object CMakeFiles/freeorioncommon.dir/util/StringTable.cpp.o
(lots of warnings skipped)
...
util/StringTable.cpp:312:48: error: no match for ‘operator|’ (operand types are ‘boost::unordered::unordered_map<std::__cxx11::basic_string<char>, std::__cxx11::basic_string<char>, StringTable::hasher, StringTable::equalizer>’ and ‘const std::ranges::views::__adaptor::_RangeAdaptorClosure<<lambda(_Range&&)> >’)
312 | for (auto& user_read_entry : m_strings | range_values) {
| ~~~~~~~~~ ^ ~~~~~~~~~~~~
| | |
| | const std::ranges::views::__adaptor::_RangeAdaptorClosure<<lambda(_Range&&)> >
| boost::unordered::unordered_map<std::__cxx11::basic_string<char>, std::__cxx11::basic_string<char>, StringTable::hasher, StringTable::equalizer>
In file included from /usr/include/c++/10/bits/ranges_algobase.h:38,
from /usr/include/c++/10/bits/ranges_uninitialized.h:36,
from /usr/include/c++/10/memory:69,
from /usr/include/boost/container_hash/extensions.hpp:35,
from /usr/include/boost/container_hash/hash.hpp:761,
from /usr/include/boost/functional/hash.hpp:6,
from /usr/include/boost/unordered/unordered_map.hpp:18,
from /usr/include/boost/unordered_map.hpp:17,
from /home/muzz/freeorion-project/Grummel7/util/StringTable.h:7,
from /home/muzz/freeorion-project/Grummel7/util/StringTable.cpp:1:
``` | 1.0 | Compile error - Current master does not compile in my enviroment (g++ (Debian 10.2.1-6) 10.2.1 20210110)
```
[ 0%] Building CXX object CMakeFiles/freeorioncommon.dir/util/StringTable.cpp.o
(lots of warnings skipped)
...
util/StringTable.cpp:312:48: error: no match for ‘operator|’ (operand types are ‘boost::unordered::unordered_map<std::__cxx11::basic_string<char>, std::__cxx11::basic_string<char>, StringTable::hasher, StringTable::equalizer>’ and ‘const std::ranges::views::__adaptor::_RangeAdaptorClosure<<lambda(_Range&&)> >’)
312 | for (auto& user_read_entry : m_strings | range_values) {
| ~~~~~~~~~ ^ ~~~~~~~~~~~~
| | |
| | const std::ranges::views::__adaptor::_RangeAdaptorClosure<<lambda(_Range&&)> >
| boost::unordered::unordered_map<std::__cxx11::basic_string<char>, std::__cxx11::basic_string<char>, StringTable::hasher, StringTable::equalizer>
In file included from /usr/include/c++/10/bits/ranges_algobase.h:38,
from /usr/include/c++/10/bits/ranges_uninitialized.h:36,
from /usr/include/c++/10/memory:69,
from /usr/include/boost/container_hash/extensions.hpp:35,
from /usr/include/boost/container_hash/hash.hpp:761,
from /usr/include/boost/functional/hash.hpp:6,
from /usr/include/boost/unordered/unordered_map.hpp:18,
from /usr/include/boost/unordered_map.hpp:17,
from /home/muzz/freeorion-project/Grummel7/util/StringTable.h:7,
from /home/muzz/freeorion-project/Grummel7/util/StringTable.cpp:1:
``` | priority | compile error current master does not compile in my enviroment g debian building cxx object cmakefiles freeorioncommon dir util stringtable cpp o lots of warnings skipped util stringtable cpp error no match for ‘operator ’ operand types are ‘boost unordered unordered map std basic string stringtable hasher stringtable equalizer ’ and ‘const std ranges views adaptor rangeadaptorclosure ’ for auto user read entry m strings range values const std ranges views adaptor rangeadaptorclosure boost unordered unordered map std basic string stringtable hasher stringtable equalizer in file included from usr include c bits ranges algobase h from usr include c bits ranges uninitialized h from usr include c memory from usr include boost container hash extensions hpp from usr include boost container hash hash hpp from usr include boost functional hash hpp from usr include boost unordered unordered map hpp from usr include boost unordered map hpp from home muzz freeorion project util stringtable h from home muzz freeorion project util stringtable cpp | 1 |
563,611 | 16,701,908,536 | IssuesEvent | 2021-06-09 04:32:26 | bounswe/2021SpringGroup10 | https://api.github.com/repos/bounswe/2021SpringGroup10 | closed | I need some help to migrate my database from SQLAlchemy to MongoDB | Coding: Backend Coding: Frontend Platform: Web Priority: High | My api was working properly before migrating the database from SQLAlchemy to MongoDb. Now I have a couple of test cases that are problematic. I would appreciate your help with MongoDb. | 1.0 | I need some help to migrate my database from SQLAlchemy to MongoDB - My api was working properly before migrating the database from SQLAlchemy to MongoDb. Now I have a couple of test cases that are problematic. I would appreciate your help with MongoDb. | priority | i need some help to migrate my database from sqlalchemy to mongodb my api was working properly before migrating the database from sqlalchemy to mongodb now i have a couple of test cases that are problematic i would appreciate your help with mongodb | 1 |
240,580 | 7,803,145,975 | IssuesEvent | 2018-06-10 20:20:33 | usemoslinux/langx | https://api.github.com/repos/usemoslinux/langx | opened | showtext.php: underline forgotten words in red | Priority: high Type: improvement | It will show better which words need more attention. | 1.0 | showtext.php: underline forgotten words in red - It will show better which words need more attention. | priority | showtext php underline forgotten words in red it will show better which words need more attention | 1 |
730,315 | 25,167,893,462 | IssuesEvent | 2022-11-10 22:50:59 | robsavoye/osmcleaner | https://api.github.com/repos/robsavoye/osmcleaner | opened | Create code stubs | enhancement high priority | To give others a framework to work within, some base classes and minimal infrastructure should be setup. The code would just be classes and a few methods, and lacking any code that actually does anything. That's for the hackathon.
| 1.0 | Create code stubs - To give others a framework to work within, some base classes and minimal infrastructure should be setup. The code would just be classes and a few methods, and lacking any code that actually does anything. That's for the hackathon.
| priority | create code stubs to give others a framework to work within some base classes and minimal infrastructure should be setup the code would just be classes and a few methods and lacking any code that actually does anything that s for the hackathon | 1 |
824,417 | 31,154,924,932 | IssuesEvent | 2023-08-16 12:33:41 | DroidKaigi/conference-app-2023 | https://api.github.com/repos/DroidKaigi/conference-app-2023 | closed | Implement Floor Switch and Highlighting Gradient on FloorMapScreen | welcome contribute high priority | **Idea Description**
Add a switch to change the floor number on the FloorMapScreen, along with a gradient to make it stand out, as per the design on [Figma](https://www.figma.com/file/MbElhCEnjqnuodmvwabh9K/DroidKaigi-2023-App-UI). The task includes:
- Implementing a switch that allows users to change the floor number.
- Adding a gradient to highlight the floor switch.
The main goal is to replicate the design and functionality as shown in the provided design.
**Reference images and links**
https://www.figma.com/file/MbElhCEnjqnuodmvwabh9K/DroidKaigi-2023-App-UI?type=design&node-id=55876-19767&mode=dev

| 1.0 | Implement Floor Switch and Highlighting Gradient on FloorMapScreen - **Idea Description**
Add a switch to change the floor number on the FloorMapScreen, along with a gradient to make it stand out, as per the design on [Figma](https://www.figma.com/file/MbElhCEnjqnuodmvwabh9K/DroidKaigi-2023-App-UI). The task includes:
- Implementing a switch that allows users to change the floor number.
- Adding a gradient to highlight the floor switch.
The main goal is to replicate the design and functionality as shown in the provided design.
**Reference images and links**
https://www.figma.com/file/MbElhCEnjqnuodmvwabh9K/DroidKaigi-2023-App-UI?type=design&node-id=55876-19767&mode=dev

| priority | implement floor switch and highlighting gradient on floormapscreen idea description add a switch to change the floor number on the floormapscreen along with a gradient to make it stand out as per the design on the task includes implementing a switch that allows users to change the floor number adding a gradient to highlight the floor switch the main goal is to replicate the design and functionality as shown in the provided design reference images and links | 1 |
237,992 | 7,768,895,892 | IssuesEvent | 2018-06-03 23:07:52 | korayozgurbuz/SWE573TermProject | https://api.github.com/repos/korayozgurbuz/SWE573TermProject | closed | Static files are not loaded | bug high priority | When i look up Network on chrome console, i saw that the static files (CSS, Javascript, images etc.) are not loaded properly. It was only showing HTML's and i got an 404 "not found "error. | 1.0 | Static files are not loaded - When i look up Network on chrome console, i saw that the static files (CSS, Javascript, images etc.) are not loaded properly. It was only showing HTML's and i got an 404 "not found "error. | priority | static files are not loaded when i look up network on chrome console i saw that the static files css javascript images etc are not loaded properly it was only showing html s and i got an not found error | 1 |
633,387 | 20,253,613,903 | IssuesEvent | 2022-02-14 20:31:26 | PoProstuMieciek/wikipedia-scraper | https://api.github.com/repos/PoProstuMieciek/wikipedia-scraper | closed | feat/images-parser | priority: high type: feat | **AC**
- function
- [x] gets a `JsDom` instance of html document
- [x] looks for all images
- [x] returns a dictionary of links to images | 1.0 | feat/images-parser - **AC**
- function
- [x] gets a `JsDom` instance of html document
- [x] looks for all images
- [x] returns a dictionary of links to images | priority | feat images parser ac function gets a jsdom instance of html document looks for all images returns a dictionary of links to images | 1 |
691,865 | 23,714,509,202 | IssuesEvent | 2022-08-30 10:35:40 | status-im/status-mobile | https://api.github.com/repos/status-im/status-mobile | opened | Stickerpacks installed on 1.19 are not usable on 1.20 | bug high-priority | # Bug Report
## Problem
Installed sticker packs on 1.19 are shown, but there is no action on tapping on the sticker.
Therefore, installed sticker packs are not usable after the upgrade.
#### Expected behavior
Can send sticker
#### Actual behavior
https://user-images.githubusercontent.com/4557972/187415159-1e4c1179-5873-4550-b42e-4cf21a3931ee.mp4
### Reproduction
1) Install 1.19, join any public chat, install stickerpack (i.e. `StatusCat`)
2) Upgrade to 1.20
3) Try to send sticker from installed stickerpack
### Additional Information
- Status version: nightly
- Operating System: checked Android only (still no IOS build available for upgrade)
[comment]: # (Please, add logs/notes if necessary)
[Status-debug-logs.zip](https://github.com/status-im/status-mobile/files/9452070/Status-debug-logs.zip)
| 1.0 | Stickerpacks installed on 1.19 are not usable on 1.20 - # Bug Report
## Problem
Installed sticker packs on 1.19 are shown, but there is no action on tapping on the sticker.
Therefore, installed sticker packs are not usable after the upgrade.
#### Expected behavior
Can send sticker
#### Actual behavior
https://user-images.githubusercontent.com/4557972/187415159-1e4c1179-5873-4550-b42e-4cf21a3931ee.mp4
### Reproduction
1) Install 1.19, join any public chat, install stickerpack (i.e. `StatusCat`)
2) Upgrade to 1.20
3) Try to send sticker from installed stickerpack
### Additional Information
- Status version: nightly
- Operating System: checked Android only (still no IOS build available for upgrade)
[comment]: # (Please, add logs/notes if necessary)
[Status-debug-logs.zip](https://github.com/status-im/status-mobile/files/9452070/Status-debug-logs.zip)
| priority | stickerpacks installed on are not usable on bug report problem installed sticker packs on are shown but there is no action on tapping on the sticker therefore installed sticker packs are not usable after the upgrade expected behavior can send sticker actual behavior reproduction install join any public chat install stickerpack i e statuscat upgrade to try to send sticker from installed stickerpack additional information status version nightly operating system checked android only still no ios build available for upgrade please add logs notes if necessary | 1 |
338,349 | 10,227,819,165 | IssuesEvent | 2019-08-16 22:09:21 | USGS-Astrogeology/PyHAT_Point_Spectra_GUI | https://api.github.com/repos/USGS-Astrogeology/PyHAT_Point_Spectra_GUI | closed | GUI toolbar inactive on OSX | Priority: High bug help wanted | The toolbar does not accept any form of input on OSX High Sierra (version 10.13.2). | 1.0 | GUI toolbar inactive on OSX - The toolbar does not accept any form of input on OSX High Sierra (version 10.13.2). | priority | gui toolbar inactive on osx the toolbar does not accept any form of input on osx high sierra version | 1 |
255,083 | 8,108,389,096 | IssuesEvent | 2018-08-14 01:25:26 | SAUSy-Lab/itinerum-trip-breaker | https://api.github.com/repos/SAUSy-Lab/itinerum-trip-breaker | closed | Improve trip detection | high-priority improvement | Need to fiddle with the inputs to the HMM. We are in general currently underdetecting travel and overdetecting stationarity. | 1.0 | Improve trip detection - Need to fiddle with the inputs to the HMM. We are in general currently underdetecting travel and overdetecting stationarity. | priority | improve trip detection need to fiddle with the inputs to the hmm we are in general currently underdetecting travel and overdetecting stationarity | 1 |
540,585 | 15,814,063,255 | IssuesEvent | 2021-04-05 08:49:14 | AY2021S2-CS2103T-T12-4/tp | https://api.github.com/repos/AY2021S2-CS2103T-T12-4/tp | closed | [PE-D] Error in help message for run command | priority.High severity.Low type.Bug | 
When i copy paste the first example, i get the following error.

In addition, i believe the header is not meant to be repeated here `-h "key: value" -h "key: value"` in the first example although it technically works.
<!--session: 1617429943449-e56bf182-40ef-47bf-9213-d664c9ca18a6-->
-------------
Labels: `severity.Low` `type.DocumentationBug`
original: samuelfangjw/ped#12 | 1.0 | [PE-D] Error in help message for run command - 
When i copy paste the first example, i get the following error.

In addition, i believe the header is not meant to be repeated here `-h "key: value" -h "key: value"` in the first example although it technically works.
<!--session: 1617429943449-e56bf182-40ef-47bf-9213-d664c9ca18a6-->
-------------
Labels: `severity.Low` `type.DocumentationBug`
original: samuelfangjw/ped#12 | priority | error in help message for run command when i copy paste the first example i get the following error in addition i believe the header is not meant to be repeated here h key value h key value in the first example although it technically works labels severity low type documentationbug original samuelfangjw ped | 1 |
422,495 | 12,279,292,451 | IssuesEvent | 2020-05-08 11:53:06 | stts-se/pronlex | https://api.github.com/repos/stts-se/pronlex | closed | Calling mysql client with data source name (DSN) | priority:high | Find out how to call mysql client with db_location style param (DSN)
mysql -u <user> -h <host> etc, compared to db_location/data source name/DSN as in 'user:pw@protocol(host:port)'
Problematic for:
- [x] setup.sh
- [x] import.sh
- [x] importSql
- [x] import_fromlexfiles.sh
Currently, these scripts only work if mariadb is on localhost -- this needs to be fixed! | 1.0 | Calling mysql client with data source name (DSN) - Find out how to call mysql client with db_location style param (DSN)
mysql -u <user> -h <host> etc, compared to db_location/data source name/DSN as in 'user:pw@protocol(host:port)'
Problematic for:
- [x] setup.sh
- [x] import.sh
- [x] importSql
- [x] import_fromlexfiles.sh
Currently, these scripts only work if mariadb is on localhost -- this needs to be fixed! | priority | calling mysql client with data source name dsn find out how to call mysql client with db location style param dsn mysql u h etc compared to db location data source name dsn as in user pw protocol host port problematic for setup sh import sh importsql import fromlexfiles sh currently these scripts only work if mariadb is on localhost this needs to be fixed | 1 |
787,908 | 27,735,419,833 | IssuesEvent | 2023-03-15 10:52:59 | fractal-analytics-platform/fractal-server | https://api.github.com/repos/fractal-analytics-platform/fractal-server | closed | Review task-collection failure cases and logs | High Priority | We should make sure that the most comprehensive information about a task-collection failure is always stored in the same place. Notice that failure can happen in different places (examples: during manifest validation, during the venv creation, during `pip install`, while populating the DB, ..).
I think it's currently either in `info` or in `log`, let's review this and fix it as needed.
Ref:
* https://github.com/fractal-analytics-platform/fractal-web/issues/67 | 1.0 | Review task-collection failure cases and logs - We should make sure that the most comprehensive information about a task-collection failure is always stored in the same place. Notice that failure can happen in different places (examples: during manifest validation, during the venv creation, during `pip install`, while populating the DB, ..).
I think it's currently either in `info` or in `log`, let's review this and fix it as needed.
Ref:
* https://github.com/fractal-analytics-platform/fractal-web/issues/67 | priority | review task collection failure cases and logs we should make sure that the most comprehensive information about a task collection failure is always stored in the same place notice that failure can happen in different places examples during manifest validation during the venv creation during pip install while populating the db i think it s currently either in info or in log let s review this and fix it as needed ref | 1 |
556,000 | 16,472,879,885 | IssuesEvent | 2021-05-23 19:15:55 | AFlyingCar/HoI4-Map-Normalizer-Tool | https://api.github.com/repos/AFlyingCar/HoI4-Map-Normalizer-Tool | closed | Implement Version and Automated Version-bump Action | enhancement high priority | The build system should make a constant `MAPNORMALIZER_VERSION` which is set to the contents of a `.config/version.txt` file. Additionally, there should be a set of github actions which can bump this version number when a PR is merged into the master branch. | 1.0 | Implement Version and Automated Version-bump Action - The build system should make a constant `MAPNORMALIZER_VERSION` which is set to the contents of a `.config/version.txt` file. Additionally, there should be a set of github actions which can bump this version number when a PR is merged into the master branch. | priority | implement version and automated version bump action the build system should make a constant mapnormalizer version which is set to the contents of a config version txt file additionally there should be a set of github actions which can bump this version number when a pr is merged into the master branch | 1 |
395,090 | 11,671,487,345 | IssuesEvent | 2020-03-04 03:21:49 | aivivn/d2l-vn | https://api.github.com/repos/aivivn/d2l-vn | closed | Revise "linear-regression_vn" - Phần 1 | chapter:linear-networks priority:high status: phase 2 | Phần này được dịch bởi: **thanhvinhle26**.
Nếu bạn đã dịch phần này, vui lòng bỏ qua việc revise.
| 1.0 | Revise "linear-regression_vn" - Phần 1 - Phần này được dịch bởi: **thanhvinhle26**.
Nếu bạn đã dịch phần này, vui lòng bỏ qua việc revise.
| priority | revise linear regression vn phần phần này được dịch bởi nếu bạn đã dịch phần này vui lòng bỏ qua việc revise | 1 |
32,770 | 2,759,461,326 | IssuesEvent | 2015-04-28 04:07:12 | metapolator/metapolator | https://api.github.com/repos/metapolator/metapolator | closed | Implement UFO export | io / export Priority High UI | https://github.com/metapolator/metapolator/wiki/working-UI-demo#font-export-panel

- [x] design new demo font export panel
- [x] design export duration estimate
- [x] design export progress indication
Here's the images:


- - -
#### Engineering Development (@felipesanches)
- [x] using the zip io class, make it possible to export the selected instances as a downloadable zip
- [x] rework the instance zip filename to use the $instanceName.ufo.zip filename
- [x] rework the directory structure inside the instance zips to be `instanceName/metainfo.plist` etc
- [x] add a final 'wrapper' zip that wraps all the instances into a `metapolator-export-YYYYMMDD-HHMMSS.zip` file, and pass that to saveAs. (Clarification added later: Use this even if only 1 instance is exported.)
- [x] Then its progress bar time :) The specification of the behavior is at https://github.com/metapolator/metapolator/wiki/working-UI-demo#indicating-export-duration
- [x] ~~Place progress bar correctly~~ moved to https://github.com/metapolator/metapolator/issues/496
- [x] ~~Really disable the export button (currently it just dims when no instances exist, and when no instances are checked for export, but it can still be clicked and exports an empty zip)~~ Moved to https://github.com/metapolator/metapolator/issues/502
- [x] ~~create a new opentype io class to make it possible to export the selected instances as a downloadable OTF binary (using https://github.com/nodebox/opentype.js)~~ moved to #514
#### Front End Development(@jeroenbreen)
- [x] _In https://github.com/metapolator/metapolator/issues/347#issuecomment-75925783 the instance list is related to export, "the instance list needs to be synced (items, scroll) with the export column (or rather, the export column is an extension of the instances list)", thus:_ sync the items and scrolling of the instance list panel and the export list panel - the export column could even be part of the instances list panel, and merely becomes visible when the user pans to the 3rd page
- [x] Comment out the opentype items in the Fonts menu, so they do not appear at all to users | 1.0 | Implement UFO export - https://github.com/metapolator/metapolator/wiki/working-UI-demo#font-export-panel

- [x] design new demo font export panel
- [x] design export duration estimate
- [x] design export progress indication
Here's the images:


- - -
#### Engineering Development (@felipesanches)
- [x] using the zip io class, make it possible to export the selected instances as a downloadable zip
- [x] rework the instance zip filename to use the $instanceName.ufo.zip filename
- [x] rework the directory structure inside the instance zips to be `instanceName/metainfo.plist` etc
- [x] add a final 'wrapper' zip that wraps all the instances into a `metapolator-export-YYYYMMDD-HHMMSS.zip` file, and pass that to saveAs. (Clarification added later: Use this even if only 1 instance is exported.)
- [x] Then its progress bar time :) The specification of the behavior is at https://github.com/metapolator/metapolator/wiki/working-UI-demo#indicating-export-duration
- [x] ~~Place progress bar correctly~~ moved to https://github.com/metapolator/metapolator/issues/496
- [x] ~~Really disable the export button (currently it just dims when no instances exist, and when no instances are checked for export, but it can still be clicked and exports an empty zip)~~ Moved to https://github.com/metapolator/metapolator/issues/502
- [x] ~~create a new opentype io class to make it possible to export the selected instances as a downloadable OTF binary (using https://github.com/nodebox/opentype.js)~~ moved to #514
#### Front End Development(@jeroenbreen)
- [x] _In https://github.com/metapolator/metapolator/issues/347#issuecomment-75925783 the instance list is related to export, "the instance list needs to be synced (items, scroll) with the export column (or rather, the export column is an extension of the instances list)", thus:_ sync the items and scrolling of the instance list panel and the export list panel - the export column could even be part of the instances list panel, and merely becomes visible when the user pans to the 3rd page
- [x] Comment out the opentype items in the Fonts menu, so they do not appear at all to users | priority | implement ufo export design new demo font export panel design export duration estimate design export progress indication here s the images engineering development felipesanches using the zip io class make it possible to export the selected instances as a downloadable zip rework the instance zip filename to use the instancename ufo zip filename rework the directory structure inside the instance zips to be instancename metainfo plist etc add a final wrapper zip that wraps all the instances into a metapolator export yyyymmdd hhmmss zip file and pass that to saveas clarification added later use this even if only instance is exported then its progress bar time the specification of the behavior is at place progress bar correctly moved to really disable the export button currently it just dims when no instances exist and when no instances are checked for export but it can still be clicked and exports an empty zip moved to create a new opentype io class to make it possible to export the selected instances as a downloadable otf binary using moved to front end development jeroenbreen in the instance list is related to export the instance list needs to be synced items scroll with the export column or rather the export column is an extension of the instances list thus sync the items and scrolling of the instance list panel and the export list panel the export column could even be part of the instances list panel and merely becomes visible when the user pans to the page comment out the opentype items in the fonts menu so they do not appear at all to users | 1 |
451,416 | 13,035,233,530 | IssuesEvent | 2020-07-28 10:01:21 | mantidproject/mantid | https://api.github.com/repos/mantidproject/mantid | opened | import mslice.cli Hard Crash | High Priority ISIS Team: CoreTeam ISIS Team: Spectroscopy Workbench | **Original reporter:** Tatiana / Duc
### Expected behavior
Mslice functionality in a script should work seemlessly: https://mantidproject.github.io/mslice/cli.html
### Actual behavior
Crash when importing mslice
### Steps to reproduce the behavior
Open a workbench package on any OS
Run in script editor `import mslice.cli as mc`
### Platforms affected
Workbench (4.2,5.0,Nightly) on All OS I think [This does not affect Plot] | 1.0 | import mslice.cli Hard Crash - **Original reporter:** Tatiana / Duc
### Expected behavior
Mslice functionality in a script should work seemlessly: https://mantidproject.github.io/mslice/cli.html
### Actual behavior
Crash when importing mslice
### Steps to reproduce the behavior
Open a workbench package on any OS
Run in script editor `import mslice.cli as mc`
### Platforms affected
Workbench (4.2,5.0,Nightly) on All OS I think [This does not affect Plot] | priority | import mslice cli hard crash original reporter tatiana duc expected behavior mslice functionality in a script should work seemlessly actual behavior crash when importing mslice steps to reproduce the behavior open a workbench package on any os run in script editor import mslice cli as mc platforms affected workbench nightly on all os i think | 1 |
676,513 | 23,127,106,923 | IssuesEvent | 2022-07-28 07:01:00 | CaptnJohn/ucommon-event-channels | https://api.github.com/repos/CaptnJohn/ucommon-event-channels | closed | Add void event channel | type:feature priority:high | Create a void event channel that can be used for events that does not need any information passed with them. | 1.0 | Add void event channel - Create a void event channel that can be used for events that does not need any information passed with them. | priority | add void event channel create a void event channel that can be used for events that does not need any information passed with them | 1 |
74,379 | 3,439,176,731 | IssuesEvent | 2015-12-14 07:56:37 | ITDevLtd/MCVirt | https://api.github.com/repos/ITDevLtd/MCVirt | opened | Fix issue when permissions are set on git-backed config repositories | High Priority | Traceback (most recent call last):
File "/usr/bin/mcvirt", line 38, in <module>
parser_object.parse_arguments()
File "/usr/lib/mcvirt/parser.py", line 833, in parse_arguments
vm_object.setLockState(LockStates(LockStates.UNLOCKED))
File "/usr/lib/mcvirt/virtual_machine/virtual_machine.py", line 1278, in setLockState
(self.getName(), lock_status.name))
File "/usr/lib/mcvirt/config_file.py", line 52, in updateConfig
self.setConfigPermissions()
File "/usr/lib/mcvirt/config_file.py", line 99, in setConfigPermissions
setPermission(os.path.join(path, 'vm'), directory=True)
File "/usr/lib/mcvirt/config_file.py", line 89, in setPermission
os.chown(path, owner, 0)
OSError: [Errno 2] No such file or directory: '/var/lib/mcvirt/.git/vm | 1.0 | Fix issue when permissions are set on git-backed config repositories - Traceback (most recent call last):
File "/usr/bin/mcvirt", line 38, in <module>
parser_object.parse_arguments()
File "/usr/lib/mcvirt/parser.py", line 833, in parse_arguments
vm_object.setLockState(LockStates(LockStates.UNLOCKED))
File "/usr/lib/mcvirt/virtual_machine/virtual_machine.py", line 1278, in setLockState
(self.getName(), lock_status.name))
File "/usr/lib/mcvirt/config_file.py", line 52, in updateConfig
self.setConfigPermissions()
File "/usr/lib/mcvirt/config_file.py", line 99, in setConfigPermissions
setPermission(os.path.join(path, 'vm'), directory=True)
File "/usr/lib/mcvirt/config_file.py", line 89, in setPermission
os.chown(path, owner, 0)
OSError: [Errno 2] No such file or directory: '/var/lib/mcvirt/.git/vm | priority | fix issue when permissions are set on git backed config repositories traceback most recent call last file usr bin mcvirt line in parser object parse arguments file usr lib mcvirt parser py line in parse arguments vm object setlockstate lockstates lockstates unlocked file usr lib mcvirt virtual machine virtual machine py line in setlockstate self getname lock status name file usr lib mcvirt config file py line in updateconfig self setconfigpermissions file usr lib mcvirt config file py line in setconfigpermissions setpermission os path join path vm directory true file usr lib mcvirt config file py line in setpermission os chown path owner oserror no such file or directory var lib mcvirt git vm | 1 |
214,869 | 7,278,908,538 | IssuesEvent | 2018-02-22 01:15:38 | opencollective/opencollective | https://api.github.com/repos/opencollective/opencollective | closed | Fix cc dropdown on Subscriptions page when no cc | bug high priority | Load CC dropdown with no credit card (only "Add Credit Card"). It never triggers Stripe text box to enter cc. | 1.0 | Fix cc dropdown on Subscriptions page when no cc - Load CC dropdown with no credit card (only "Add Credit Card"). It never triggers Stripe text box to enter cc. | priority | fix cc dropdown on subscriptions page when no cc load cc dropdown with no credit card only add credit card it never triggers stripe text box to enter cc | 1 |
511,612 | 14,878,076,020 | IssuesEvent | 2021-01-20 04:50:56 | ud-cis-discord/SageV2 | https://api.github.com/repos/ud-cis-discord/SageV2 | closed | Role Handler | Priority High | - [x] Database storage
- [x] Event listeners
- [x] GuildMemberAdd
- [x] GuildMemberUpdate
- [ ] Duplicate check | 1.0 | Role Handler - - [x] Database storage
- [x] Event listeners
- [x] GuildMemberAdd
- [x] GuildMemberUpdate
- [ ] Duplicate check | priority | role handler database storage event listeners guildmemberadd guildmemberupdate duplicate check | 1 |
285,401 | 8,758,337,559 | IssuesEvent | 2018-12-15 02:38:02 | phetsims/aqua | https://api.github.com/repos/phetsims/aqua | closed | restart CT | priority:2-high | aqua needs to be pulled because of https://github.com/phetsims/tasks/issues/972#issuecomment-447521519. We added a package.json to aqua, but I don't think ct can pull aqua while it is running. Lint-everything will most likely fail until this is done. | 1.0 | restart CT - aqua needs to be pulled because of https://github.com/phetsims/tasks/issues/972#issuecomment-447521519. We added a package.json to aqua, but I don't think ct can pull aqua while it is running. Lint-everything will most likely fail until this is done. | priority | restart ct aqua needs to be pulled because of we added a package json to aqua but i don t think ct can pull aqua while it is running lint everything will most likely fail until this is done | 1 |
325,092 | 9,916,872,683 | IssuesEvent | 2019-06-28 21:26:39 | python/mypy | https://api.github.com/repos/python/mypy | opened | New analyzer: crash using type variable with forward referenced bound in annotation | crash new-semantic-analyzer priority-0-high | This was introduced by #6563.
Minimized from a crash in S:
```
from typing import Callable, TypeVar
FooT = TypeVar('FooT', bound='Foo')
class Foo: ...
asdf = lambda x: True # type: Callable[[FooT], bool]
```
References to `FooT` in method definitions is fine. If `class Foo` is moved above `FooT` the problem is resolved. | 1.0 | New analyzer: crash using type variable with forward referenced bound in annotation - This was introduced by #6563.
Minimized from a crash in S:
```
from typing import Callable, TypeVar
FooT = TypeVar('FooT', bound='Foo')
class Foo: ...
asdf = lambda x: True # type: Callable[[FooT], bool]
```
References to `FooT` in method definitions is fine. If `class Foo` is moved above `FooT` the problem is resolved. | priority | new analyzer crash using type variable with forward referenced bound in annotation this was introduced by minimized from a crash in s from typing import callable typevar foot typevar foot bound foo class foo asdf lambda x true type callable bool references to foot in method definitions is fine if class foo is moved above foot the problem is resolved | 1 |
447,266 | 12,887,257,514 | IssuesEvent | 2020-07-13 10:54:08 | useIceberg/iceberg-editor | https://api.github.com/repos/useIceberg/iceberg-editor | opened | ISBAT open the custom palette and custom typography controls with G8.5+ | [Priority] High [Priority] Low [Type] Bug | ### Describe the bug:
I should be able to open the custom editor theme/custom typography controls popover, when using Gutenberg 8.5+ and WordPress 5.5.
### Expected behavior:
Works as it did prior to Gutenberg 8.5. Clicking "Edit typography" should bring up the typographic controls without closing the popover. Custom color palette should do the same.
### Screenshots:

### Isolating the problem:
<!-- Mark completed items with an [x]. -->
- [x] This bug happens with no other plugins activated
- [x] This bug happens with the default Twenty Twenty WordPress theme active
- [ ] This bug happens **without** the Gutenberg plugin active
- [x] I can reproduce this bug consistently using the steps above
### WordPress version:
5.4
### Gutenberg version:
8.5.1
| 2.0 | ISBAT open the custom palette and custom typography controls with G8.5+ - ### Describe the bug:
I should be able to open the custom editor theme/custom typography controls popover, when using Gutenberg 8.5+ and WordPress 5.5.
### Expected behavior:
Works as it did prior to Gutenberg 8.5. Clicking "Edit typography" should bring up the typographic controls without closing the popover. Custom color palette should do the same.
### Screenshots:

### Isolating the problem:
<!-- Mark completed items with an [x]. -->
- [x] This bug happens with no other plugins activated
- [x] This bug happens with the default Twenty Twenty WordPress theme active
- [ ] This bug happens **without** the Gutenberg plugin active
- [x] I can reproduce this bug consistently using the steps above
### WordPress version:
5.4
### Gutenberg version:
8.5.1
| priority | isbat open the custom palette and custom typography controls with describe the bug i should be able to open the custom editor theme custom typography controls popover when using gutenberg and wordpress expected behavior works as it did prior to gutenberg clicking edit typography should bring up the typographic controls without closing the popover custom color palette should do the same screenshots isolating the problem this bug happens with no other plugins activated this bug happens with the default twenty twenty wordpress theme active this bug happens without the gutenberg plugin active i can reproduce this bug consistently using the steps above wordpress version gutenberg version | 1 |
322,842 | 9,829,274,350 | IssuesEvent | 2019-06-15 19:11:37 | red-hat-storage/ocs-ci | https://api.github.com/repos/red-hat-storage/ocs-ci | closed | Need a function to automatically determine ceph tool box pod name and send ceph commands | High Priority RFE | While writing test cases, We may have to run ceph commands to get to know something about the ceph cluster - health, size, pools, etc. So in our test framework, I expect a function I can call and this will internally go find the ceph toolbox pod's name, convert it into 'oc exec <tool-box> <cmd>' and give me the output in json format which can be parsed. | 1.0 | Need a function to automatically determine ceph tool box pod name and send ceph commands - While writing test cases, We may have to run ceph commands to get to know something about the ceph cluster - health, size, pools, etc. So in our test framework, I expect a function I can call and this will internally go find the ceph toolbox pod's name, convert it into 'oc exec <tool-box> <cmd>' and give me the output in json format which can be parsed. | priority | need a function to automatically determine ceph tool box pod name and send ceph commands while writing test cases we may have to run ceph commands to get to know something about the ceph cluster health size pools etc so in our test framework i expect a function i can call and this will internally go find the ceph toolbox pod s name convert it into oc exec and give me the output in json format which can be parsed | 1 |
344,421 | 10,344,427,859 | IssuesEvent | 2019-09-04 11:12:07 | apifytech/apify-js | https://api.github.com/repos/apifytech/apify-js | closed | Create requestLikeBrowser function | enhancement high priority | It will download HTML using the `request` package, but it will emulate HTTP headers of normal browser to reduce the chance of bot detection. Once done and tested, we should use this function in `CheerioCrawler` by default.
In the first version, let's just emulate Firefox with the latest user agent. In the future, we could support other browsers and user agents, so make the function the way that its functionality might be extended in the future, e.g. have there some `options` param.
Here's a code snippet that can be used for start.
```js
const gzip = Promise.promisify(zlib.gzip, { context: zlib });
const gunzip = Promise.promisify(zlib.gunzip, { context: zlib });
const deflate = Promise.promisify(zlib.deflate, { context: zlib });
const reqOpts = {
url,
// Emulate Firefox HTTP headers
// TODO: We should move this to apify-js or apify-shared-js
headers: {
Host: parsedUrlModified.host,
'User-Agent': useMobileVersion ? FIREFOX_MOBILE_USER_AGENT : FIREFOX_DESKTOP_USER_AGENT,
Accept: 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
'Accept-Language': languageCode ? `${languageCode}-${countryCode},${languageCode};q=0.5` : '*', // TODO: get this from country !
'Accept-Encoding': 'gzip, deflate, br',
DNT: '1',
Connection: 'keep-alive',
'Upgrade-Insecure-Requests': '1',
},
// Return response as raw Buffer
encoding: null,
};
const result = await utils.requestPromised(reqOpts, false);
let body;
try {
// eslint-disable-next-line prefer-destructuring
body = result.body;
// Decode response body
const contentEncoding = result.response.headers['content-encoding'];
switch (contentEncoding) {
case 'br':
body = await brotli.decompress(body);
break;
case 'gzip':
body = await gunzip(body);
break;
case 'deflate':
body = await deflate(body);
break;
case 'identity':
case null:
case undefined:
break;
default:
throw new Error(`Received unexpected Content-Encoding: ${contentEncoding}`);
}
body = body.toString('utf8');
const { statusCode } = result;
if (statusCode !== 200) {
throw new Error(`Received HTTP error response status ${statusCode}`);
}
const contentType = result.response.headers['content-type'];
if (contentType !== 'text/html; charset=UTF-8') {
throw new Error(`Received unexpected Content-Type: ${contentType}`);
}
if (!body) throw new Error('The response body is empty');
``` | 1.0 | Create requestLikeBrowser function - It will download HTML using the `request` package, but it will emulate HTTP headers of normal browser to reduce the chance of bot detection. Once done and tested, we should use this function in `CheerioCrawler` by default.
In the first version, let's just emulate Firefox with the latest user agent. In the future, we could support other browsers and user agents, so make the function the way that its functionality might be extended in the future, e.g. have there some `options` param.
Here's a code snippet that can be used for start.
```js
const gzip = Promise.promisify(zlib.gzip, { context: zlib });
const gunzip = Promise.promisify(zlib.gunzip, { context: zlib });
const deflate = Promise.promisify(zlib.deflate, { context: zlib });
const reqOpts = {
url,
// Emulate Firefox HTTP headers
// TODO: We should move this to apify-js or apify-shared-js
headers: {
Host: parsedUrlModified.host,
'User-Agent': useMobileVersion ? FIREFOX_MOBILE_USER_AGENT : FIREFOX_DESKTOP_USER_AGENT,
Accept: 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
'Accept-Language': languageCode ? `${languageCode}-${countryCode},${languageCode};q=0.5` : '*', // TODO: get this from country !
'Accept-Encoding': 'gzip, deflate, br',
DNT: '1',
Connection: 'keep-alive',
'Upgrade-Insecure-Requests': '1',
},
// Return response as raw Buffer
encoding: null,
};
const result = await utils.requestPromised(reqOpts, false);
let body;
try {
// eslint-disable-next-line prefer-destructuring
body = result.body;
// Decode response body
const contentEncoding = result.response.headers['content-encoding'];
switch (contentEncoding) {
case 'br':
body = await brotli.decompress(body);
break;
case 'gzip':
body = await gunzip(body);
break;
case 'deflate':
body = await deflate(body);
break;
case 'identity':
case null:
case undefined:
break;
default:
throw new Error(`Received unexpected Content-Encoding: ${contentEncoding}`);
}
body = body.toString('utf8');
const { statusCode } = result;
if (statusCode !== 200) {
throw new Error(`Received HTTP error response status ${statusCode}`);
}
const contentType = result.response.headers['content-type'];
if (contentType !== 'text/html; charset=UTF-8') {
throw new Error(`Received unexpected Content-Type: ${contentType}`);
}
if (!body) throw new Error('The response body is empty');
``` | priority | create requestlikebrowser function it will download html using the request package but it will emulate http headers of normal browser to reduce the chance of bot detection once done and tested we should use this function in cheeriocrawler by default in the first version let s just emulate firefox with the latest user agent in the future we could support other browsers and user agents so make the function the way that its functionality might be extended in the future e g have there some options param here s a code snippet that can be used for start js const gzip promise promisify zlib gzip context zlib const gunzip promise promisify zlib gunzip context zlib const deflate promise promisify zlib deflate context zlib const reqopts url emulate firefox http headers todo we should move this to apify js or apify shared js headers host parsedurlmodified host user agent usemobileversion firefox mobile user agent firefox desktop user agent accept text html application xhtml xml application xml q q accept language languagecode languagecode countrycode languagecode q todo get this from country accept encoding gzip deflate br dnt connection keep alive upgrade insecure requests return response as raw buffer encoding null const result await utils requestpromised reqopts false let body try eslint disable next line prefer destructuring body result body decode response body const contentencoding result response headers switch contentencoding case br body await brotli decompress body break case gzip body await gunzip body break case deflate body await deflate body break case identity case null case undefined break default throw new error received unexpected content encoding contentencoding body body tostring const statuscode result if statuscode throw new error received http error response status statuscode const contenttype result response headers if contenttype text html charset utf throw new error received unexpected content type contenttype if body throw new error the response body is empty | 1 |
806,188 | 29,805,160,931 | IssuesEvent | 2023-06-16 11:04:38 | proveuswrong/webapp-tp | https://api.github.com/repos/proveuswrong/webapp-tp | closed | Handle Bad Item Routes | priority: high type: enhancement | Refactor out claim component from claim route. Implement handling in the route component. | 1.0 | Handle Bad Item Routes - Refactor out claim component from claim route. Implement handling in the route component. | priority | handle bad item routes refactor out claim component from claim route implement handling in the route component | 1 |
86,294 | 3,710,049,264 | IssuesEvent | 2016-03-02 01:36:53 | cs2103jan2016-w09-2j/main | https://api.github.com/repos/cs2103jan2016-w09-2j/main | closed | The user can include more details to tasks | priority.high type.story | such as contacts, locations, relevant email and projects the task is linked to etc. | 1.0 | The user can include more details to tasks - such as contacts, locations, relevant email and projects the task is linked to etc. | priority | the user can include more details to tasks such as contacts locations relevant email and projects the task is linked to etc | 1 |
397,122 | 11,723,866,737 | IssuesEvent | 2020-03-10 09:54:27 | L-Acoustics/avdecc | https://api.github.com/repos/L-Acoustics/avdecc | closed | Prevent ACMP storm | Milan enhancement high priority | The library should limit the number of ACMP messages it sends at the same time:
- Have a limit of ACMP inflight message per entity (just like AECP)
- Have a limit of how fast it sends ACMP messages | 1.0 | Prevent ACMP storm - The library should limit the number of ACMP messages it sends at the same time:
- Have a limit of ACMP inflight message per entity (just like AECP)
- Have a limit of how fast it sends ACMP messages | priority | prevent acmp storm the library should limit the number of acmp messages it sends at the same time have a limit of acmp inflight message per entity just like aecp have a limit of how fast it sends acmp messages | 1 |
179,294 | 6,623,490,597 | IssuesEvent | 2017-09-22 07:28:25 | Screen-Art-Studios/Total-Response | https://api.github.com/repos/Screen-Art-Studios/Total-Response | closed | Design Standard | Front End High Priority | The completed design standard should be a pdf with a resolution of 300ppi.
it should also meet all of the following guidlines;
-quality logos and alternates
-all colors used in the project
-styling for buttons
-styling for divs and related boxes
-fonts
-font-sizing, line-height, and line length
-tone of the application
-quality mission statement
-styling for all input boxes
-animation design statement ie"draw the users eye towards the next step, only initiate on user action, etc.."
-animation descriptions
-must look good when printed in booklet format
-reference screen art brand in a meaningful fashion
-be organized in a logical ordering and solid flow. | 1.0 | Design Standard - The completed design standard should be a pdf with a resolution of 300ppi.
it should also meet all of the following guidlines;
-quality logos and alternates
-all colors used in the project
-styling for buttons
-styling for divs and related boxes
-fonts
-font-sizing, line-height, and line length
-tone of the application
-quality mission statement
-styling for all input boxes
-animation design statement ie"draw the users eye towards the next step, only initiate on user action, etc.."
-animation descriptions
-must look good when printed in booklet format
-reference screen art brand in a meaningful fashion
-be organized in a logical ordering and solid flow. | priority | design standard the completed design standard should be a pdf with a resolution of it should also meet all of the following guidlines quality logos and alternates all colors used in the project styling for buttons styling for divs and related boxes fonts font sizing line height and line length tone of the application quality mission statement styling for all input boxes animation design statement ie draw the users eye towards the next step only initiate on user action etc animation descriptions must look good when printed in booklet format reference screen art brand in a meaningful fashion be organized in a logical ordering and solid flow | 1 |
371,666 | 10,979,764,845 | IssuesEvent | 2019-11-30 08:39:42 | fossasia/open-event-frontend | https://api.github.com/repos/fossasia/open-event-frontend | closed | Wizard Step 4/Call for Speakers: Invalid Error Message "End date must be before start date" Speakers and Sessions form | Priority: High bug | Even though the dates are correct, the wizard step 4 (call for speakers) gives an error message as follows:
> Start date & time should be after End date and time
Expected: Only show error if end date is before start date.

| 1.0 | Wizard Step 4/Call for Speakers: Invalid Error Message "End date must be before start date" Speakers and Sessions form - Even though the dates are correct, the wizard step 4 (call for speakers) gives an error message as follows:
> Start date & time should be after End date and time
Expected: Only show error if end date is before start date.

| priority | wizard step call for speakers invalid error message end date must be before start date speakers and sessions form even though the dates are correct the wizard step call for speakers gives an error message as follows start date time should be after end date and time expected only show error if end date is before start date | 1 |
277,922 | 8,634,488,743 | IssuesEvent | 2018-11-22 17:01:20 | fgpv-vpgf/fgpv-vpgf | https://api.github.com/repos/fgpv-vpgf/fgpv-vpgf | closed | WFS Layers Fail to Enter Legend in AutoLegend Mode | bug-type: broken use case priority: high problem: bug | Upon loading a WFS layer via the config file in autolegend mode, the layer will load and draw on the map.
It will not appear in the legend.
In `layer-registry.service`, `synchronizeLayerOrder()`, it will error as `map.legendBlocks` is null. | 1.0 | WFS Layers Fail to Enter Legend in AutoLegend Mode - Upon loading a WFS layer via the config file in autolegend mode, the layer will load and draw on the map.
It will not appear in the legend.
In `layer-registry.service`, `synchronizeLayerOrder()`, it will error as `map.legendBlocks` is null. | priority | wfs layers fail to enter legend in autolegend mode upon loading a wfs layer via the config file in autolegend mode the layer will load and draw on the map it will not appear in the legend in layer registry service synchronizelayerorder it will error as map legendblocks is null | 1 |
234,353 | 7,720,040,125 | IssuesEvent | 2018-05-23 21:27:10 | Zicerite/Gavania-Project | https://api.github.com/repos/Zicerite/Gavania-Project | reopened | Add Level 1-17 Story Quests | High Priority enhancement | Add all the story quests before the first Team Gemome Base.
Part of both 0.5 and 0.6. | 1.0 | Add Level 1-17 Story Quests - Add all the story quests before the first Team Gemome Base.
Part of both 0.5 and 0.6. | priority | add level story quests add all the story quests before the first team gemome base part of both and | 1 |
813,918 | 30,479,235,598 | IssuesEvent | 2023-07-17 18:56:58 | Green-Party-of-Canada-Members/gpc-decidim | https://api.github.com/repos/Green-Party-of-Canada-Members/gpc-decidim | opened | Ability to leave comment as a "Group" disappears for some proposals | bug priority-high | We had difficulty getting a Process Admin (and Group Admin) to post as [a Group](https://wedecide.green.ca/profiles/Shepherds/members).
We eventually got it working for him, but the option is strangely appearing and disappearing in different proposals... I'm able to reproduce the problem.
Here is the normally correct mode of operation:

Here is an example of this specific group disappearing on some proposals:

In testing with another user in the [same Group](https://wedecide.green.ca/profiles/Shepherds/members), we can both not post as a "Shepherd" to these proposals:
https://wedecide.green.ca/processes/create-proposals/f/394/proposals/3777
https://wedecide.green.ca/processes/create-proposals/f/394/proposals/3775
We can both post as a "Shepherd" to these proposals:
https://wedecide.green.ca/processes/create-proposals/f/394/proposals/3791
https://wedecide.green.ca/processes/create-proposals/f/394/proposals/3790
There's one post where he can post as a "Shepherd" and I cannot:
https://wedecide.green.ca/processes/create-proposals/f/394/proposals/3774
This group is a moderation group, that is intended to allow moderators to guide proposal development in an official capacity.
Note that it is only the specific group we need that appears to be being affected. Other groups do not disappear as options. | 1.0 | Ability to leave comment as a "Group" disappears for some proposals - We had difficulty getting a Process Admin (and Group Admin) to post as [a Group](https://wedecide.green.ca/profiles/Shepherds/members).
We eventually got it working for him, but the option is strangely appearing and disappearing in different proposals... I'm able to reproduce the problem.
Here is the normally correct mode of operation:

Here is an example of this specific group disappearing on some proposals:

In testing with another user in the [same Group](https://wedecide.green.ca/profiles/Shepherds/members), we can both not post as a "Shepherd" to these proposals:
https://wedecide.green.ca/processes/create-proposals/f/394/proposals/3777
https://wedecide.green.ca/processes/create-proposals/f/394/proposals/3775
We can both post as a "Shepherd" to these proposals:
https://wedecide.green.ca/processes/create-proposals/f/394/proposals/3791
https://wedecide.green.ca/processes/create-proposals/f/394/proposals/3790
There's one post where he can post as a "Shepherd" and I cannot:
https://wedecide.green.ca/processes/create-proposals/f/394/proposals/3774
This group is a moderation group, that is intended to allow moderators to guide proposal development in an official capacity.
Note that it is only the specific group we need that appears to be being affected. Other groups do not disappear as options. | priority | ability to leave comment as a group disappears for some proposals we had difficulty getting a process admin and group admin to post as we eventually got it working for him but the option is strangely appearing and disappearing in different proposals i m able to reproduce the problem here is the normally correct mode of operation here is an example of this specific group disappearing on some proposals in testing with another user in the we can both not post as a shepherd to these proposals we can both post as a shepherd to these proposals there s one post where he can post as a shepherd and i cannot this group is a moderation group that is intended to allow moderators to guide proposal development in an official capacity note that it is only the specific group we need that appears to be being affected other groups do not disappear as options | 1 |
305,296 | 9,367,625,327 | IssuesEvent | 2019-04-03 06:22:18 | openbmc/openbmc-test-automation | https://api.github.com/repos/openbmc/openbmc-test-automation | closed | Redfish local user management - manual and automation test | Priority High | Automated below test cases:
====================
- [x] 7 ) Delete a redfish user with admin privilege role and verify its successfully deleted
A) Delete a redfish user with admin privilege role
Command to create user : Refer to Delete_User ( End of Document ) with RoleId as "Administrator"
B) Try to login after deleting the user
C) It should report an error
Status : PASS
Gerrit link : https://gerrit.openbmc-project.xyz/#/c/openbmc/openbmc-test-automation/+/19326
- [x] 8 ) Delete a redfish user with operator privilege role and verify its successfully deleted
A) Delete a redfish user with operator privilege role
Command to create user : Refer to Delete_User ( End of Document ) with RoleId as "Operator"
B) Try to login after deleting the user
C) It should report an error
Status : PASS
Gerrit link : https://gerrit.openbmc-project.xyz/#/c/openbmc/openbmc-test-automation/+/19326
- [x] 9 ) Delete a redfish user with user privilege role and verify its successfully deleted
A) Delete a redfish user with user privilege role
Command to create user : Refer to Delete_User ( End of Document ) with RoleId as "User"
B) Try to login after deleting the user
C) It should report an error
Status : PASS
Gerrit link : https://gerrit.openbmc-project.xyz/#/c/openbmc/openbmc-test-automation/+/19326
- [x] 10) Delete a redfish user with callback privilege role and verify its successfully deleted
A) Delete a redfish user with callback privilege role
Command to create user : Refer to Delete_User ( End of Document ) with RoleId as "Callback"
B) Try to login after deleting the user
C) It should report an error
Status : PASS
Gerrit link : https://gerrit.openbmc-project.xyz/#/c/openbmc/openbmc-test-automation/+/19326
| 1.0 | Redfish local user management - manual and automation test - Automated below test cases:
====================
- [x] 7 ) Delete a redfish user with admin privilege role and verify its successfully deleted
A) Delete a redfish user with admin privilege role
Command to create user : Refer to Delete_User ( End of Document ) with RoleId as "Administrator"
B) Try to login after deleting the user
C) It should report an error
Status : PASS
Gerrit link : https://gerrit.openbmc-project.xyz/#/c/openbmc/openbmc-test-automation/+/19326
- [x] 8 ) Delete a redfish user with operator privilege role and verify its successfully deleted
A) Delete a redfish user with operator privilege role
Command to create user : Refer to Delete_User ( End of Document ) with RoleId as "Operator"
B) Try to login after deleting the user
C) It should report an error
Status : PASS
Gerrit link : https://gerrit.openbmc-project.xyz/#/c/openbmc/openbmc-test-automation/+/19326
- [x] 9 ) Delete a redfish user with user privilege role and verify its successfully deleted
A) Delete a redfish user with user privilege role
Command to create user : Refer to Delete_User ( End of Document ) with RoleId as "User"
B) Try to login after deleting the user
C) It should report an error
Status : PASS
Gerrit link : https://gerrit.openbmc-project.xyz/#/c/openbmc/openbmc-test-automation/+/19326
- [x] 10) Delete a redfish user with callback privilege role and verify its successfully deleted
A) Delete a redfish user with callback privilege role
Command to create user : Refer to Delete_User ( End of Document ) with RoleId as "Callback"
B) Try to login after deleting the user
C) It should report an error
Status : PASS
Gerrit link : https://gerrit.openbmc-project.xyz/#/c/openbmc/openbmc-test-automation/+/19326
| priority | redfish local user management manual and automation test automated below test cases delete a redfish user with admin privilege role and verify its successfully deleted a delete a redfish user with admin privilege role command to create user refer to delete user end of document with roleid as administrator b try to login after deleting the user c it should report an error status pass gerrit link delete a redfish user with operator privilege role and verify its successfully deleted a delete a redfish user with operator privilege role command to create user refer to delete user end of document with roleid as operator b try to login after deleting the user c it should report an error status pass gerrit link delete a redfish user with user privilege role and verify its successfully deleted a delete a redfish user with user privilege role command to create user refer to delete user end of document with roleid as user b try to login after deleting the user c it should report an error status pass gerrit link delete a redfish user with callback privilege role and verify its successfully deleted a delete a redfish user with callback privilege role command to create user refer to delete user end of document with roleid as callback b try to login after deleting the user c it should report an error status pass gerrit link | 1 |
249,905 | 7,965,256,706 | IssuesEvent | 2018-07-14 05:33:58 | RoboJackets/robocup-software | https://api.github.com/repos/RoboJackets/robocup-software | opened | Path Planner doesn't move in the correct direction at the beginning of the path | area / planning-motion exp / master (4) priority / high status / researching type / bug | When a path planner repeatedly replans the path while the robot is require to do some sort turn early in on the path, it will never actually converge to a solution for some time. Even a small turn such as this will cause problems.

In some of the worst case scenarios, we will just move the complete opposite way slowly while trying to move to a target location behind us.
Right now we are only replaning occasionally so it's not a obvious problem, but this solution isn't perfect since you may get multiple replans in a row causing the problem anyways.
I think the cause is the first waypoint isn't actually far enough away from our current velocity vector so when do the linear interpolation for the first frame, it's basically a the same speed and direction we are currently going. I think just getting some of the turn happening in the first waypoint will solve this problem.
@ashaw596 Thoughts? | 1.0 | Path Planner doesn't move in the correct direction at the beginning of the path - When a path planner repeatedly replans the path while the robot is require to do some sort turn early in on the path, it will never actually converge to a solution for some time. Even a small turn such as this will cause problems.

In some of the worst case scenarios, we will just move the complete opposite way slowly while trying to move to a target location behind us.
Right now we are only replaning occasionally so it's not a obvious problem, but this solution isn't perfect since you may get multiple replans in a row causing the problem anyways.
I think the cause is the first waypoint isn't actually far enough away from our current velocity vector so when do the linear interpolation for the first frame, it's basically a the same speed and direction we are currently going. I think just getting some of the turn happening in the first waypoint will solve this problem.
@ashaw596 Thoughts? | priority | path planner doesn t move in the correct direction at the beginning of the path when a path planner repeatedly replans the path while the robot is require to do some sort turn early in on the path it will never actually converge to a solution for some time even a small turn such as this will cause problems in some of the worst case scenarios we will just move the complete opposite way slowly while trying to move to a target location behind us right now we are only replaning occasionally so it s not a obvious problem but this solution isn t perfect since you may get multiple replans in a row causing the problem anyways i think the cause is the first waypoint isn t actually far enough away from our current velocity vector so when do the linear interpolation for the first frame it s basically a the same speed and direction we are currently going i think just getting some of the turn happening in the first waypoint will solve this problem thoughts | 1 |
258,534 | 8,177,102,444 | IssuesEvent | 2018-08-28 09:39:00 | smartdevicelink/sdl_core | https://api.github.com/repos/smartdevicelink/sdl_core | closed | Need change OnButtonEventNotification logic for OK button | Bug Contributor priority 1: High | ###Bug Report
Need change OnButtonEventNotification logic for OK button
#### Description:
If app_id parameter is present - send OnButtonEventNotification
for OK button to specified application in LIMITED and FULL hmi level
If app_id parameter is not present - send OnButtonEventNotification
for OK button to to all applications in FULL hmi level
#### OS & Version Information
OS/Version:
SDL Core Version:
Testing Against:
| 1.0 | Need change OnButtonEventNotification logic for OK button - ###Bug Report
Need change OnButtonEventNotification logic for OK button
#### Description:
If app_id parameter is present - send OnButtonEventNotification
for OK button to specified application in LIMITED and FULL hmi level
If app_id parameter is not present - send OnButtonEventNotification
for OK button to to all applications in FULL hmi level
#### OS & Version Information
OS/Version:
SDL Core Version:
Testing Against:
| priority | need change onbuttoneventnotification logic for ok button bug report need change onbuttoneventnotification logic for ok button description if app id parameter is present send onbuttoneventnotification for ok button to specified application in limited and full hmi level if app id parameter is not present send onbuttoneventnotification for ok button to to all applications in full hmi level os version information os version sdl core version testing against | 1 |
362,609 | 10,729,886,051 | IssuesEvent | 2019-10-28 16:22:21 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [studio-ui] ICE group is not working with embedded components | bug priority: high | ## Describe the bug
When an ICE tag is set with `iceGroup` for an embedded component the forms still shows all fields in the component
```
<@studio.iceAttr component=component iceGroup="myGroup"/>>
```
## To Reproduce
Steps to reproduce the behavior:
1. Create an embedded component with an `iceGroup`
2. Add ice tags using the `iceGroup`
3. Click on the pencil in preview
## Expected behavior
The form should only show the fields that belong to the given `iceGroup`
## Specs
### Version
3.1.4-SNAPSHOT
| 1.0 | [studio-ui] ICE group is not working with embedded components - ## Describe the bug
When an ICE tag is set with `iceGroup` for an embedded component the forms still shows all fields in the component
```
<@studio.iceAttr component=component iceGroup="myGroup"/>>
```
## To Reproduce
Steps to reproduce the behavior:
1. Create an embedded component with an `iceGroup`
2. Add ice tags using the `iceGroup`
3. Click on the pencil in preview
## Expected behavior
The form should only show the fields that belong to the given `iceGroup`
## Specs
### Version
3.1.4-SNAPSHOT
| priority | ice group is not working with embedded components describe the bug when an ice tag is set with icegroup for an embedded component the forms still shows all fields in the component to reproduce steps to reproduce the behavior create an embedded component with an icegroup add ice tags using the icegroup click on the pencil in preview expected behavior the form should only show the fields that belong to the given icegroup specs version snapshot | 1 |
257,842 | 8,142,764,714 | IssuesEvent | 2018-08-21 08:43:09 | omni-compiler/omni-compiler | https://api.github.com/repos/omni-compiler/omni-compiler | closed | parameter文で型変換があるとき誤変換 | Kind: Bug Module: F_Front Module: F_Trans Priority: High | 報告者:岩下
倍精度実数のparameter変数に対して、単精度定数で定義しているとき、
誤動作または結果異常が起こる。
以下の例に対し、
program main
real(8),parameter :: sft=20.
call foo(res, sft)
write(*,*) res
end program main
トランスレータ出力は、call文を次のように変換している。
(F_Frontの出力の段階でこうしている?)
call foo( res , 20. )
これでは、呼出し側は real*4 へのポインタを渡し、呼ばれ側は real*8 への
ポインタだと思ってアクセスするため、システムによっては結果異常になる。
値(の字面)に変換する必要があるのか?
変換するのなら、castが必要。
call foo( res , real( 20. , 8 ) )
一般の数式中に現れる場合も、castしないと精度誤差が出る恐れがある。
実際、NASParallel CG のCAF版が、HA-PACSで VERIFICATION ERRORになる
理由はこれだった。
NASParallel では以下のようなパラメタ定義が自動生成されていて、
parameter( na=75000,
> nonzer=13,
> niter=75,
> shift=60.,
> rcond=1.0d-1 )
変数 shift は double precision である。
このパラメタは初期値作成ルーチンに引数渡しされている。
> shift=60.d0,
のようにソース変更すると、VERIFICATION SUCCESSFUL になった。
| 1.0 | parameter文で型変換があるとき誤変換 - 報告者:岩下
倍精度実数のparameter変数に対して、単精度定数で定義しているとき、
誤動作または結果異常が起こる。
以下の例に対し、
program main
real(8),parameter :: sft=20.
call foo(res, sft)
write(*,*) res
end program main
トランスレータ出力は、call文を次のように変換している。
(F_Frontの出力の段階でこうしている?)
call foo( res , 20. )
これでは、呼出し側は real*4 へのポインタを渡し、呼ばれ側は real*8 への
ポインタだと思ってアクセスするため、システムによっては結果異常になる。
値(の字面)に変換する必要があるのか?
変換するのなら、castが必要。
call foo( res , real( 20. , 8 ) )
一般の数式中に現れる場合も、castしないと精度誤差が出る恐れがある。
実際、NASParallel CG のCAF版が、HA-PACSで VERIFICATION ERRORになる
理由はこれだった。
NASParallel では以下のようなパラメタ定義が自動生成されていて、
parameter( na=75000,
> nonzer=13,
> niter=75,
> shift=60.,
> rcond=1.0d-1 )
変数 shift は double precision である。
このパラメタは初期値作成ルーチンに引数渡しされている。
> shift=60.d0,
のようにソース変更すると、VERIFICATION SUCCESSFUL になった。
| priority | parameter文で型変換があるとき誤変換 報告者:岩下 倍精度実数のparameter変数に対して、単精度定数で定義しているとき、 誤動作または結果異常が起こる。 以下の例に対し、 program main real parameter sft call foo res sft write res end program main トランスレータ出力は、call文を次のように変換している。 (f frontの出力の段階でこうしている?) call foo res これでは、呼出し側は real へのポインタを渡し、呼ばれ側は real への ポインタだと思ってアクセスするため、システムによっては結果異常になる。 値(の字面)に変換する必要があるのか? 変換するのなら、castが必要。 call foo res real 一般の数式中に現れる場合も、castしないと精度誤差が出る恐れがある。 実際、nasparallel cg のcaf版が、ha pacsで verification errorになる 理由はこれだった。 nasparallel では以下のようなパラメタ定義が自動生成されていて、 parameter na nonzer niter shift rcond 変数 shift は double precision である。 このパラメタは初期値作成ルーチンに引数渡しされている。 shift のようにソース変更すると、verification successful になった。 | 1 |
160,633 | 6,101,218,169 | IssuesEvent | 2017-06-20 14:13:30 | kuzzleio/kuzzle | https://api.github.com/repos/kuzzleio/kuzzle | closed | Add a graceful shutdown command line | changelog:enhancements priority-high | We need a command asking Kuzzle to:
1. send an instruction to the proxy, asking it to stop sending new requests
2. process the remaining pending requests
3. shut itself down when the request queue is empty
This command would allow us to patch & restart Kuzzle nodes one by one, cleanly, without any downtime client-wise. | 1.0 | Add a graceful shutdown command line - We need a command asking Kuzzle to:
1. send an instruction to the proxy, asking it to stop sending new requests
2. process the remaining pending requests
3. shut itself down when the request queue is empty
This command would allow us to patch & restart Kuzzle nodes one by one, cleanly, without any downtime client-wise. | priority | add a graceful shutdown command line we need a command asking kuzzle to send an instruction to the proxy asking it to stop sending new requests process the remaining pending requests shut itself down when the request queue is empty this command would allow us to patch restart kuzzle nodes one by one cleanly without any downtime client wise | 1 |
333,494 | 10,127,175,767 | IssuesEvent | 2019-08-01 09:37:10 | prisma/prisma2 | https://api.github.com/repos/prisma/prisma2 | closed | prisma2 commands do not recognize environment variables. | bug/2-confirmed kind/bug priority/high release/preview4 | Hi,
I have a `project.prisma` file beginning with:
```
datasource db {
provider = "postgres"
url = env("POSTGRES_URL")
}
```
However, when I run either:
`POSTGRES_URL=hello_world prisma2 generate`
or
`export POSTGRES_URL=hello_world; prisma2 generate`
I get the following errors:
<img width="714" alt="Screen Shot 2019-07-02 at 3 44 27 PM" src="https://user-images.githubusercontent.com/7659261/60504952-4b36ca00-9ce0-11e9-8e75-9c8a241a05fa.png">
---
Additionally, running the commands:
`export POSTGRES_URL=hello_world prisma2 generate`
OR
`export POSTGRES_URL=hello_world prisma2 dev`
Both return with no console output. | 1.0 | prisma2 commands do not recognize environment variables. - Hi,
I have a `project.prisma` file beginning with:
```
datasource db {
provider = "postgres"
url = env("POSTGRES_URL")
}
```
However, when I run either:
`POSTGRES_URL=hello_world prisma2 generate`
or
`export POSTGRES_URL=hello_world; prisma2 generate`
I get the following errors:
<img width="714" alt="Screen Shot 2019-07-02 at 3 44 27 PM" src="https://user-images.githubusercontent.com/7659261/60504952-4b36ca00-9ce0-11e9-8e75-9c8a241a05fa.png">
---
Additionally, running the commands:
`export POSTGRES_URL=hello_world prisma2 generate`
OR
`export POSTGRES_URL=hello_world prisma2 dev`
Both return with no console output. | priority | commands do not recognize environment variables hi i have a project prisma file beginning with datasource db provider postgres url env postgres url however when i run either postgres url hello world generate or export postgres url hello world generate i get the following errors img width alt screen shot at pm src additionally running the commands export postgres url hello world generate or export postgres url hello world dev both return with no console output | 1 |
115,912 | 4,689,558,030 | IssuesEvent | 2016-10-11 01:05:10 | CS2103AUG2016-T10-C2/main | https://api.github.com/repos/CS2103AUG2016-T10-C2/main | closed | Rewrite error messages for add, edit, list, delete commands | priority.high type.task | Refer to tasks/task manager instead of person/addressbook | 1.0 | Rewrite error messages for add, edit, list, delete commands - Refer to tasks/task manager instead of person/addressbook | priority | rewrite error messages for add edit list delete commands refer to tasks task manager instead of person addressbook | 1 |
694,775 | 23,829,385,503 | IssuesEvent | 2022-09-05 18:24:27 | Tautulli/Tautulli | https://api.github.com/repos/Tautulli/Tautulli | closed | LIBRARY STATISTICS No stats to show | type:bug-regression priority:high topic:server fixed | ### Describe the Bug
I have checked all the options for Library Statistics in the home page settings, but the home page still does not show the Library Statistics entry, and it says No stats to show.
I don't have any problems with my All Libraries, they are ready to be refreshed and have the correct number of entries.
The problem is only with Library Statistics



### Steps to Reproduce
_No response_
### Expected Behavior
I hope this issues will solve my problem of LIBRARY STATISTICS not showing data.
### Screenshots
_No response_
### Relevant Settings
_No response_
### Tautulli Version
2.10.3
### Git Branch
master
### Git Commit Hash
444b138e97a272e110fcb4364e8864348eee71c3
### Platform and Version
DSM6.2 Docker Linux 4.4.59+
### Python Version
3.9.13
### Browser and Version
Microsoft EDGE 104.0.1293.54
### Link to Logs
| 1.0 | LIBRARY STATISTICS No stats to show - ### Describe the Bug
I have checked all the options for Library Statistics in the home page settings, but the home page still does not show the Library Statistics entry, and it says No stats to show.
I don't have any problems with my All Libraries, they are ready to be refreshed and have the correct number of entries.
The problem is only with Library Statistics



### Steps to Reproduce
_No response_
### Expected Behavior
I hope this issues will solve my problem of LIBRARY STATISTICS not showing data.
### Screenshots
_No response_
### Relevant Settings
_No response_
### Tautulli Version
2.10.3
### Git Branch
master
### Git Commit Hash
444b138e97a272e110fcb4364e8864348eee71c3
### Platform and Version
DSM6.2 Docker Linux 4.4.59+
### Python Version
3.9.13
### Browser and Version
Microsoft EDGE 104.0.1293.54
### Link to Logs
| priority | library statistics no stats to show describe the bug i have checked all the options for library statistics in the home page settings but the home page still does not show the library statistics entry and it says no stats to show i don t have any problems with my all libraries they are ready to be refreshed and have the correct number of entries the problem is only with library statistics steps to reproduce no response expected behavior i hope this issues will solve my problem of library statistics not showing data screenshots no response relevant settings no response tautulli version git branch master git commit hash platform and version docker linux python version browser and version microsoft edge link to logs | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.