Unnamed: 0
int64 0
832k
| id
float64 2.49B
32.1B
| type
stringclasses 1
value | created_at
stringlengths 19
19
| repo
stringlengths 5
112
| repo_url
stringlengths 34
141
| action
stringclasses 3
values | title
stringlengths 1
757
| labels
stringlengths 4
664
| body
stringlengths 3
261k
| index
stringclasses 10
values | text_combine
stringlengths 96
261k
| label
stringclasses 2
values | text
stringlengths 96
232k
| binary_label
int64 0
1
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
658,961
| 21,913,663,577
|
IssuesEvent
|
2022-05-21 13:13:19
|
DIT113-V22/group-01
|
https://api.github.com/repos/DIT113-V22/group-01
|
closed
|
Set the cars spawn position in the emulator
|
wontfix 8 points Low priority
|
As a user, I want to be able to spawn my car at different locations in the emulator, so that I can start driving at different locations
**Acceptance Criteria:**
- [ ] Research is done whether it is remotely easy to change the cars’ position with an emulator extension
- [ ] If setting the cars’ position is feasible, allow the app to communicate to the emulator, maybe via a mqtt topic?
- [ ] When opening the practice mode show a popup with predefined spawn locations
- [ ] When choosing one of them close the popup and set the cars position
- [ ] Add a way to open the spawn point popup manually for if the user changes their mind
|
1.0
|
Set the cars spawn position in the emulator - As a user, I want to be able to spawn my car at different locations in the emulator, so that I can start driving at different locations
**Acceptance Criteria:**
- [ ] Research is done whether it is remotely easy to change the cars’ position with an emulator extension
- [ ] If setting the cars’ position is feasible, allow the app to communicate to the emulator, maybe via a mqtt topic?
- [ ] When opening the practice mode show a popup with predefined spawn locations
- [ ] When choosing one of them close the popup and set the cars position
- [ ] Add a way to open the spawn point popup manually for if the user changes their mind
|
non_defect
|
set the cars spawn position in the emulator as a user i want to be able to spawn my car at different locations in the emulator so that i can start driving at different locations acceptance criteria research is done whether it is remotely easy to change the cars’ position with an emulator extension if setting the cars’ position is feasible allow the app to communicate to the emulator maybe via a mqtt topic when opening the practice mode show a popup with predefined spawn locations when choosing one of them close the popup and set the cars position add a way to open the spawn point popup manually for if the user changes their mind
| 0
|
32,092
| 6,713,949,305
|
IssuesEvent
|
2017-10-13 15:07:54
|
zotonic/zotonic
|
https://api.github.com/repos/zotonic/zotonic
|
closed
|
If default admin user password is admin you can login with any password
|
defect
|
If the password of the default admin password is admin you can login into the admin page with any password.
I am using the 1.0-dev version running on docker.
If any further information is required, please let me known.
|
1.0
|
If default admin user password is admin you can login with any password - If the password of the default admin password is admin you can login into the admin page with any password.
I am using the 1.0-dev version running on docker.
If any further information is required, please let me known.
|
defect
|
if default admin user password is admin you can login with any password if the password of the default admin password is admin you can login into the admin page with any password i am using the dev version running on docker if any further information is required please let me known
| 1
|
49,299
| 13,186,596,173
|
IssuesEvent
|
2020-08-13 00:41:15
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
opened
|
neutrino-generator index.dox should be converted to index.rst (Trac #1146)
|
Incomplete Migration Migrated from Trac combo simulation defect
|
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1146">https://code.icecube.wisc.edu/ticket/1146</a>, reported by jtatar and owned by Kotoyo Hoshina</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:13:35",
"description": "Convert the Doxygen created index.dox file to a Sphinx documentation file index.rst. Only Sphinx (.rst) files get build as part of the online documentation. This is why currently there is no documentation for neutrino-generator to be found at http://software.icecube.wisc.edu/simulation_trunk/",
"reporter": "jtatar",
"cc": "",
"resolution": "fixed",
"_ts": "1550067215093672",
"component": "combo simulation",
"summary": "neutrino-generator index.dox should be converted to index.rst",
"priority": "blocker",
"keywords": "",
"time": "2015-08-18T00:13:26",
"milestone": "",
"owner": "Kotoyo Hoshina",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
neutrino-generator index.dox should be converted to index.rst (Trac #1146) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1146">https://code.icecube.wisc.edu/ticket/1146</a>, reported by jtatar and owned by Kotoyo Hoshina</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:13:35",
"description": "Convert the Doxygen created index.dox file to a Sphinx documentation file index.rst. Only Sphinx (.rst) files get build as part of the online documentation. This is why currently there is no documentation for neutrino-generator to be found at http://software.icecube.wisc.edu/simulation_trunk/",
"reporter": "jtatar",
"cc": "",
"resolution": "fixed",
"_ts": "1550067215093672",
"component": "combo simulation",
"summary": "neutrino-generator index.dox should be converted to index.rst",
"priority": "blocker",
"keywords": "",
"time": "2015-08-18T00:13:26",
"milestone": "",
"owner": "Kotoyo Hoshina",
"type": "defect"
}
```
</p>
</details>
|
defect
|
neutrino generator index dox should be converted to index rst trac migrated from json status closed changetime description convert the doxygen created index dox file to a sphinx documentation file index rst only sphinx rst files get build as part of the online documentation this is why currently there is no documentation for neutrino generator to be found at reporter jtatar cc resolution fixed ts component combo simulation summary neutrino generator index dox should be converted to index rst priority blocker keywords time milestone owner kotoyo hoshina type defect
| 1
|
45,054
| 12,530,643,043
|
IssuesEvent
|
2020-06-04 13:24:13
|
google/pywebsocket
|
https://api.github.com/repos/google/pywebsocket
|
closed
|
How to deploy standalone.py on shared webservers?
|
Priority-Medium Type-Defect auto-migrated
|
```
Can someone elaborate on how to deploy standalone.py on shared webservers?
```
Original issue reported on code.google.com by `wlati...@gmail.com` on 29 Nov 2014 at 9:41
|
1.0
|
How to deploy standalone.py on shared webservers? - ```
Can someone elaborate on how to deploy standalone.py on shared webservers?
```
Original issue reported on code.google.com by `wlati...@gmail.com` on 29 Nov 2014 at 9:41
|
defect
|
how to deploy standalone py on shared webservers can someone elaborate on how to deploy standalone py on shared webservers original issue reported on code google com by wlati gmail com on nov at
| 1
|
31,631
| 6,562,484,186
|
IssuesEvent
|
2017-09-07 16:45:33
|
idaholab/moose
|
https://api.github.com/repos/idaholab/moose
|
closed
|
MOOSE's camelcase to underscore logic can fail
|
C: MOOSE Low Difficulty P: minor T: defect
|
### Description of the enhancement or error report
We ran into a problem where the underscores could be placed in an unexpected place when multiple capital letters are used in a row - See #9415. This same RegEx exists in MooseUtils and needs to be fixed as well.
### Rationale for the enhancement or information for reproducing the error
You'll need to use the dynamic load capability to make sure this is working correctly. You could either dynamic load an object or an app. See existing tests as a guide:
https://github.com/idaholab/moose/tree/devel/modules/misc/tests/dynamic_loading
### Identified impact
(i.e. Internal object changes, limited interface changes, public API change, or a list of specific applications impacted)
Minor - dynamic loading is not used routinely but this will fail with at least one internal application.
@rwcarlsen - you fixed the last one, this one should be very easy as well. You'll need to make a new test though.
|
1.0
|
MOOSE's camelcase to underscore logic can fail - ### Description of the enhancement or error report
We ran into a problem where the underscores could be placed in an unexpected place when multiple capital letters are used in a row - See #9415. This same RegEx exists in MooseUtils and needs to be fixed as well.
### Rationale for the enhancement or information for reproducing the error
You'll need to use the dynamic load capability to make sure this is working correctly. You could either dynamic load an object or an app. See existing tests as a guide:
https://github.com/idaholab/moose/tree/devel/modules/misc/tests/dynamic_loading
### Identified impact
(i.e. Internal object changes, limited interface changes, public API change, or a list of specific applications impacted)
Minor - dynamic loading is not used routinely but this will fail with at least one internal application.
@rwcarlsen - you fixed the last one, this one should be very easy as well. You'll need to make a new test though.
|
defect
|
moose s camelcase to underscore logic can fail description of the enhancement or error report we ran into a problem where the underscores could be placed in an unexpected place when multiple capital letters are used in a row see this same regex exists in mooseutils and needs to be fixed as well rationale for the enhancement or information for reproducing the error you ll need to use the dynamic load capability to make sure this is working correctly you could either dynamic load an object or an app see existing tests as a guide identified impact i e internal object changes limited interface changes public api change or a list of specific applications impacted minor dynamic loading is not used routinely but this will fail with at least one internal application rwcarlsen you fixed the last one this one should be very easy as well you ll need to make a new test though
| 1
|
58,587
| 24,495,723,507
|
IssuesEvent
|
2022-10-10 08:30:12
|
Azure/azure-cli
|
https://api.github.com/repos/Azure/azure-cli
|
closed
|
az servicebus namespace list fails in subscription with no namespaces
|
bug Service Bus customer-reported Client needs-team-attention CXP Attention Auto-Assign
|
### **This is autogenerated. Please review and update as needed.**
## Describe the bug
When calling `az servicebus namespace list` in a subscription that does not contain any service bus namespaces, an error is returned instead of an empty array.
**Command Name**
`az servicebus namespace list`
**Errors:**
```
The command failed with an unexpected error. Here is the traceback:
'NoneType' object is not iterable
Traceback (most recent call last):
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\knack/cli.py", line 233, in invoke
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 663, in execute
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 726, in _run_jobs_serially
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 710, in _run_job
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/core/paging.py", line 129, in __next__
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/core/paging.py", line 84, in __next__
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/mgmt/servicebus/v2022_01_01_preview/operations/_namespaces_operations.py", line 747, in extract_data
TypeError: 'NoneType' object is not iterable
```
## To Reproduce:
Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information.
- Select subscription that does not contain any Service Bus namespaces
- `az servicebus namespace list`
## Expected Behavior
An empty json array should be returned
## Environment Summary
```
Windows-10-10.0.19042-SP0
Python 3.10.5
Installer: MSI
azure-cli 2.40.0
Extensions:
application-insights 0.1.16
azure-devops 0.25.0
logic 0.1.6
virtual-wan 0.2.13
Dependencies:
msal 1.18.0b1
azure-mgmt-resource 21.1.0b1
```
## Additional Context
<!--Please don't remove this:-->
<!--auto-generated-->
|
1.0
|
az servicebus namespace list fails in subscription with no namespaces - ### **This is autogenerated. Please review and update as needed.**
## Describe the bug
When calling `az servicebus namespace list` in a subscription that does not contain any service bus namespaces, an error is returned instead of an empty array.
**Command Name**
`az servicebus namespace list`
**Errors:**
```
The command failed with an unexpected error. Here is the traceback:
'NoneType' object is not iterable
Traceback (most recent call last):
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\knack/cli.py", line 233, in invoke
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 663, in execute
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 726, in _run_jobs_serially
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 710, in _run_job
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/core/paging.py", line 129, in __next__
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/core/paging.py", line 84, in __next__
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/mgmt/servicebus/v2022_01_01_preview/operations/_namespaces_operations.py", line 747, in extract_data
TypeError: 'NoneType' object is not iterable
```
## To Reproduce:
Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information.
- Select subscription that does not contain any Service Bus namespaces
- `az servicebus namespace list`
## Expected Behavior
An empty json array should be returned
## Environment Summary
```
Windows-10-10.0.19042-SP0
Python 3.10.5
Installer: MSI
azure-cli 2.40.0
Extensions:
application-insights 0.1.16
azure-devops 0.25.0
logic 0.1.6
virtual-wan 0.2.13
Dependencies:
msal 1.18.0b1
azure-mgmt-resource 21.1.0b1
```
## Additional Context
<!--Please don't remove this:-->
<!--auto-generated-->
|
non_defect
|
az servicebus namespace list fails in subscription with no namespaces this is autogenerated please review and update as needed describe the bug when calling az servicebus namespace list in a subscription that does not contain any service bus namespaces an error is returned instead of an empty array command name az servicebus namespace list errors the command failed with an unexpected error here is the traceback nonetype object is not iterable traceback most recent call last file d a s build scripts windows artifacts cli lib site packages knack cli py line in invoke file d a s build scripts windows artifacts cli lib site packages azure cli core commands init py line in execute file d a s build scripts windows artifacts cli lib site packages azure cli core commands init py line in run jobs serially file d a s build scripts windows artifacts cli lib site packages azure cli core commands init py line in run job file d a s build scripts windows artifacts cli lib site packages azure core paging py line in next file d a s build scripts windows artifacts cli lib site packages azure core paging py line in next file d a s build scripts windows artifacts cli lib site packages azure mgmt servicebus preview operations namespaces operations py line in extract data typeerror nonetype object is not iterable to reproduce steps to reproduce the behavior note that argument values have been redacted as they may contain sensitive information select subscription that does not contain any service bus namespaces az servicebus namespace list expected behavior an empty json array should be returned environment summary windows python installer msi azure cli extensions application insights azure devops logic virtual wan dependencies msal azure mgmt resource additional context
| 0
|
86,230
| 24,795,896,273
|
IssuesEvent
|
2022-10-24 17:13:30
|
openmc-dev/openmc
|
https://api.github.com/repos/openmc-dev/openmc
|
opened
|
`openmc.deplete` incompatible with Apple M1
|
Build System
|
When trying to import `openmc.deplete` on a Mac computer with the Apple M1 chip, the following error occurs:
```
Traceback (most recent call last):
File "/Users/kkiesling/software/opt/depletion-comparison/pwr/openmc/run_depletion.py", line 5, in <module>
import openmc.deplete
File "/Users/kkiesling/software/opt/openmc/openmc/deplete/__init__.py", line 11, in <module>
from .coupled_operator import *
File "/Users/kkiesling/software/opt/openmc/openmc/deplete/coupled_operator.py", line 21, in <module>
import openmc.lib
File "/Users/kkiesling/software/opt/openmc/openmc/lib/__init__.py", line 32, in <module>
_dll = CDLL(_filename)
File "/Users/kkiesling/opt/anaconda3/envs/openmc-dev/lib/python3.10/ctypes/__init__.py", line 374, in __init__
self._handle = _dlopen(self._name, mode)
OSError: dlopen(/Users/kkiesling/software/opt/openmc/openmc/lib/libopenmc.dylib, 0x0006): tried: '/Users/kkiesling/software/opt/openmc/openmc/lib/libopenmc.dylib' (mach-o file, but is an incompatible architecture (have (arm64), need (x86_64)))
```
I have no issue importing other standard openmc modules to build a model and run an eigenvalue problem. It is just this one module (could be others but I haven't encountered the error elsewhere). OpenMC compiles fine, this error occurs at runtime.
Other info/things I have tried:
* I am compiling from source with the latest develop branch.
* I have played around with explicitly setting different compilers (tried gcc/g++ and clang/clang++, both from various locations on my computer).
* I have tried explicitly setting the `CMAKE_OSX_ARCHITECTURES` to "arm64" (what it is using) and "x86_64" per some google solutions but trying to set the architecture to the latter causes other errors and fails during build.
I am out of ideas for how to fix this on my end and I am not even sure if this is something that can be addressed by OpenMC (maybe there is something to set in CMake but my searching has led to no solutions on this front).
Has anyone else experienced this error with M1 and know how to get around it?
|
1.0
|
`openmc.deplete` incompatible with Apple M1 - When trying to import `openmc.deplete` on a Mac computer with the Apple M1 chip, the following error occurs:
```
Traceback (most recent call last):
File "/Users/kkiesling/software/opt/depletion-comparison/pwr/openmc/run_depletion.py", line 5, in <module>
import openmc.deplete
File "/Users/kkiesling/software/opt/openmc/openmc/deplete/__init__.py", line 11, in <module>
from .coupled_operator import *
File "/Users/kkiesling/software/opt/openmc/openmc/deplete/coupled_operator.py", line 21, in <module>
import openmc.lib
File "/Users/kkiesling/software/opt/openmc/openmc/lib/__init__.py", line 32, in <module>
_dll = CDLL(_filename)
File "/Users/kkiesling/opt/anaconda3/envs/openmc-dev/lib/python3.10/ctypes/__init__.py", line 374, in __init__
self._handle = _dlopen(self._name, mode)
OSError: dlopen(/Users/kkiesling/software/opt/openmc/openmc/lib/libopenmc.dylib, 0x0006): tried: '/Users/kkiesling/software/opt/openmc/openmc/lib/libopenmc.dylib' (mach-o file, but is an incompatible architecture (have (arm64), need (x86_64)))
```
I have no issue importing other standard openmc modules to build a model and run an eigenvalue problem. It is just this one module (could be others but I haven't encountered the error elsewhere). OpenMC compiles fine, this error occurs at runtime.
Other info/things I have tried:
* I am compiling from source with the latest develop branch.
* I have played around with explicitly setting different compilers (tried gcc/g++ and clang/clang++, both from various locations on my computer).
* I have tried explicitly setting the `CMAKE_OSX_ARCHITECTURES` to "arm64" (what it is using) and "x86_64" per some google solutions but trying to set the architecture to the latter causes other errors and fails during build.
I am out of ideas for how to fix this on my end and I am not even sure if this is something that can be addressed by OpenMC (maybe there is something to set in CMake but my searching has led to no solutions on this front).
Has anyone else experienced this error with M1 and know how to get around it?
|
non_defect
|
openmc deplete incompatible with apple when trying to import openmc deplete on a mac computer with the apple chip the following error occurs traceback most recent call last file users kkiesling software opt depletion comparison pwr openmc run depletion py line in import openmc deplete file users kkiesling software opt openmc openmc deplete init py line in from coupled operator import file users kkiesling software opt openmc openmc deplete coupled operator py line in import openmc lib file users kkiesling software opt openmc openmc lib init py line in dll cdll filename file users kkiesling opt envs openmc dev lib ctypes init py line in init self handle dlopen self name mode oserror dlopen users kkiesling software opt openmc openmc lib libopenmc dylib tried users kkiesling software opt openmc openmc lib libopenmc dylib mach o file but is an incompatible architecture have need i have no issue importing other standard openmc modules to build a model and run an eigenvalue problem it is just this one module could be others but i haven t encountered the error elsewhere openmc compiles fine this error occurs at runtime other info things i have tried i am compiling from source with the latest develop branch i have played around with explicitly setting different compilers tried gcc g and clang clang both from various locations on my computer i have tried explicitly setting the cmake osx architectures to what it is using and per some google solutions but trying to set the architecture to the latter causes other errors and fails during build i am out of ideas for how to fix this on my end and i am not even sure if this is something that can be addressed by openmc maybe there is something to set in cmake but my searching has led to no solutions on this front has anyone else experienced this error with and know how to get around it
| 0
|
101,947
| 31,771,836,414
|
IssuesEvent
|
2023-09-12 12:18:05
|
tensorflow/tensorflow
|
https://api.github.com/repos/tensorflow/tensorflow
|
closed
|
Building tf-opt steps with prerequisites
|
awaiting review stat:awaiting tensorflower type:feature type:build/install comp:lite TF 2.12
|
<details><summary>Click to expand!</summary>
### Issue Type
Documentation Feature Request
### Have you reproduced the bug with TF nightly?
Yes
### Source
source
### Tensorflow Version
2.12
### Custom Code
Yes
### OS Platform and Distribution
Linux Ubunto 18.04
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
6.1.2
### GCC/Compiler version
9.2.0
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current Behaviour?
I am trying to build tf-opt binary on branch v2.12 without any changes and gets different compilation errors. The command for compilation I use:
`bazel build -c opt tensorflow/compiler/mlir:tf-opt`
Can you share some prerequites for building and debugging `tf-opt` binary (for debug/release mode). I would appriciate if there is docker builder I can use to it instead of changing my envrioment.
Thanks,
Aviad
### Standalone code to reproduce the issue
```shell
ERROR: /localdrive/users/aviadco/community/tensorflow/tensorflow/lite/experimental/acceleration/configuration/BUILD:36:8: Executing genrule //tensorflow/lite/experimental/acceleration/configuration:configuration_schema failed: (Exit 1): bash failed: error executing command (from target //tensorflow/lite/experimental/acceleration/configuration:configuration_schema) /bin/bash -c ... (remaining 1 argument skipped)
bazel-out/k8-opt-exec-50AE0418/bin/external/flatbuffers/flatc: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.26' not found (required by bazel-out/k8-opt-exec-50AE0418/bin/external/flatbuffers/flatc)
ERROR: /localdrive/users/aviadco/community/tensorflow/tensorflow/lite/schema/BUILD:184:22: Generating flatbuffer files for conversion_metadata_fbs_srcs: //tensorflow/lite/schema:conversion_metadata_fbs_srcs failed: (Exit 1): bash failed: error executing command (from target //tensorflow/lite/schema:conversion_metadata_fbs_srcs) /bin/bash -c ... (remaining 1 argument skipped)
bazel-out/k8-opt-exec-50AE0418/bin/external/flatbuffers/flatc: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.26' not found (required by bazel-out/k8-opt-exec-50AE0418/bin/external/flatbuffers/flatc)
Target //tensorflow/compiler/mlir:tf-opt failed to build
```
### Relevant log output
_No response_</details>
|
1.0
|
Building tf-opt steps with prerequisites - <details><summary>Click to expand!</summary>
### Issue Type
Documentation Feature Request
### Have you reproduced the bug with TF nightly?
Yes
### Source
source
### Tensorflow Version
2.12
### Custom Code
Yes
### OS Platform and Distribution
Linux Ubunto 18.04
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
6.1.2
### GCC/Compiler version
9.2.0
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current Behaviour?
I am trying to build tf-opt binary on branch v2.12 without any changes and gets different compilation errors. The command for compilation I use:
`bazel build -c opt tensorflow/compiler/mlir:tf-opt`
Can you share some prerequites for building and debugging `tf-opt` binary (for debug/release mode). I would appriciate if there is docker builder I can use to it instead of changing my envrioment.
Thanks,
Aviad
### Standalone code to reproduce the issue
```shell
ERROR: /localdrive/users/aviadco/community/tensorflow/tensorflow/lite/experimental/acceleration/configuration/BUILD:36:8: Executing genrule //tensorflow/lite/experimental/acceleration/configuration:configuration_schema failed: (Exit 1): bash failed: error executing command (from target //tensorflow/lite/experimental/acceleration/configuration:configuration_schema) /bin/bash -c ... (remaining 1 argument skipped)
bazel-out/k8-opt-exec-50AE0418/bin/external/flatbuffers/flatc: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.26' not found (required by bazel-out/k8-opt-exec-50AE0418/bin/external/flatbuffers/flatc)
ERROR: /localdrive/users/aviadco/community/tensorflow/tensorflow/lite/schema/BUILD:184:22: Generating flatbuffer files for conversion_metadata_fbs_srcs: //tensorflow/lite/schema:conversion_metadata_fbs_srcs failed: (Exit 1): bash failed: error executing command (from target //tensorflow/lite/schema:conversion_metadata_fbs_srcs) /bin/bash -c ... (remaining 1 argument skipped)
bazel-out/k8-opt-exec-50AE0418/bin/external/flatbuffers/flatc: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.26' not found (required by bazel-out/k8-opt-exec-50AE0418/bin/external/flatbuffers/flatc)
Target //tensorflow/compiler/mlir:tf-opt failed to build
```
### Relevant log output
_No response_</details>
|
non_defect
|
building tf opt steps with prerequisites click to expand issue type documentation feature request have you reproduced the bug with tf nightly yes source source tensorflow version custom code yes os platform and distribution linux ubunto mobile device no response python version no response bazel version gcc compiler version cuda cudnn version no response gpu model and memory no response current behaviour i am trying to build tf opt binary on branch without any changes and gets different compilation errors the command for compilation i use bazel build c opt tensorflow compiler mlir tf opt can you share some prerequites for building and debugging tf opt binary for debug release mode i would appriciate if there is docker builder i can use to it instead of changing my envrioment thanks aviad standalone code to reproduce the issue shell error localdrive users aviadco community tensorflow tensorflow lite experimental acceleration configuration build executing genrule tensorflow lite experimental acceleration configuration configuration schema failed exit bash failed error executing command from target tensorflow lite experimental acceleration configuration configuration schema bin bash c remaining argument skipped bazel out opt exec bin external flatbuffers flatc usr lib linux gnu libstdc so version glibcxx not found required by bazel out opt exec bin external flatbuffers flatc error localdrive users aviadco community tensorflow tensorflow lite schema build generating flatbuffer files for conversion metadata fbs srcs tensorflow lite schema conversion metadata fbs srcs failed exit bash failed error executing command from target tensorflow lite schema conversion metadata fbs srcs bin bash c remaining argument skipped bazel out opt exec bin external flatbuffers flatc usr lib linux gnu libstdc so version glibcxx not found required by bazel out opt exec bin external flatbuffers flatc target tensorflow compiler mlir tf opt failed to build relevant log output no response
| 0
|
124,550
| 4,927,119,618
|
IssuesEvent
|
2016-11-26 15:12:22
|
keep-the-lights-on/keep-the-lights-on
|
https://api.github.com/repos/keep-the-lights-on/keep-the-lights-on
|
opened
|
Contribution Effects
|
priority-5 user-story
|
_From @ivanmauricio on November 10, 2016 10:47_
I want to be shown what FAC does with my contributions.
_Copied from original issue: foundersandcoders/keep-the-lights-on#3_
|
1.0
|
Contribution Effects - _From @ivanmauricio on November 10, 2016 10:47_
I want to be shown what FAC does with my contributions.
_Copied from original issue: foundersandcoders/keep-the-lights-on#3_
|
non_defect
|
contribution effects from ivanmauricio on november i want to be shown what fac does with my contributions copied from original issue foundersandcoders keep the lights on
| 0
|
17,376
| 3,002,400,414
|
IssuesEvent
|
2015-07-24 16:59:49
|
GoldenSoftwareLtd/gedemin
|
https://api.github.com/repos/GoldenSoftwareLtd/gedemin
|
closed
|
Наследование форм для TgdcValue
|
GedeminExe Inheritance Priority-Medium Type-Defect
|
Originally reported on Google Code with ID 3599
```
Делаю наследника от единиц измерения GD_VALUE. Локализованное имя - ОКЕИ. В Исследователе
он отображается правильно, с именем ОКЕИ. Формы просмотра и редактирования наз. Единицы
измерения (почему не ОКЕИ тоже?). В наследнике только одно поле - Код, но почему-то
в форму редактирования оно не попало (компонент для работы с ним не создан).
```
Reported by `alexandra.gsoftware` on 2015-06-08 18:54:19
|
1.0
|
Наследование форм для TgdcValue - Originally reported on Google Code with ID 3599
```
Делаю наследника от единиц измерения GD_VALUE. Локализованное имя - ОКЕИ. В Исследователе
он отображается правильно, с именем ОКЕИ. Формы просмотра и редактирования наз. Единицы
измерения (почему не ОКЕИ тоже?). В наследнике только одно поле - Код, но почему-то
в форму редактирования оно не попало (компонент для работы с ним не создан).
```
Reported by `alexandra.gsoftware` on 2015-06-08 18:54:19
|
defect
|
наследование форм для tgdcvalue originally reported on google code with id делаю наследника от единиц измерения gd value локализованное имя океи в исследователе он отображается правильно с именем океи формы просмотра и редактирования наз единицы измерения почему не океи тоже в наследнике только одно поле код но почему то в форму редактирования оно не попало компонент для работы с ним не создан reported by alexandra gsoftware on
| 1
|
57,177
| 15,725,770,499
|
IssuesEvent
|
2021-03-29 10:24:04
|
danmar/testissues
|
https://api.github.com/repos/danmar/testissues
|
opened
|
False positive with --style: Uninitialized member variable (when stream is used) (Trac #32)
|
False positive Incomplete Migration Migrated from Trac defect noone
|
Migrated from https://trac.cppcheck.net/ticket/32
```json
{
"status": "closed",
"changetime": "2009-01-17T20:19:12",
"description": "$ ./cppcheck test.cpp -s\nChecking test.cpp: ...\n[test.cpp:6]: Uninitialized member variable 'Foo::foo'\n\n{{{\n#include <fstream>\n\nclass Foo {\n int foo;\n public:\n Foo(std::istream &in) { if(!(in >> foo)) throw 0; }\n};\n}}}",
"reporter": "aggro80",
"cc": "",
"resolution": "fixed",
"_ts": "1232223552000000",
"component": "False positive",
"summary": "False positive with --style: Uninitialized member variable (when stream is used)",
"priority": "",
"keywords": "",
"time": "2009-01-17T19:16:48",
"milestone": "1.28",
"owner": "noone",
"type": "defect"
}
```
|
1.0
|
False positive with --style: Uninitialized member variable (when stream is used) (Trac #32) - Migrated from https://trac.cppcheck.net/ticket/32
```json
{
"status": "closed",
"changetime": "2009-01-17T20:19:12",
"description": "$ ./cppcheck test.cpp -s\nChecking test.cpp: ...\n[test.cpp:6]: Uninitialized member variable 'Foo::foo'\n\n{{{\n#include <fstream>\n\nclass Foo {\n int foo;\n public:\n Foo(std::istream &in) { if(!(in >> foo)) throw 0; }\n};\n}}}",
"reporter": "aggro80",
"cc": "",
"resolution": "fixed",
"_ts": "1232223552000000",
"component": "False positive",
"summary": "False positive with --style: Uninitialized member variable (when stream is used)",
"priority": "",
"keywords": "",
"time": "2009-01-17T19:16:48",
"milestone": "1.28",
"owner": "noone",
"type": "defect"
}
```
|
defect
|
false positive with style uninitialized member variable when stream is used trac migrated from json status closed changetime description cppcheck test cpp s nchecking test cpp n uninitialized member variable foo foo n n n include n nclass foo n int foo n public n foo std istream in if in foo throw n n reporter cc resolution fixed ts component false positive summary false positive with style uninitialized member variable when stream is used priority keywords time milestone owner noone type defect
| 1
|
59,900
| 17,023,284,162
|
IssuesEvent
|
2021-07-03 01:13:32
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
client dies silently and does not put back job
|
Component: tilesathome Priority: minor Resolution: wontfix Type: defect
|
**[Submitted to the original trac issue database at 2.09pm, Monday, 11th August 2008]**
When running a t@h-client I got the following messages during download (as I had problems with the internet connection):
Aug 9 23:09:52 vorlon tilesGen[1019]: [#2 0% Preproc] Downloading: Map data to /home/tiles/tilesAtHome3/tmp/data-1017-1-6.osm (slice 6 of 10)...
Aug 9 23:31:06 vorlon tilesGen[1019]: Transfer truncated: only 152881 out of 511379 bytes received
The client dies silently and does not put the job back to the server.
It would be better to put the job back and try the next one.
|
1.0
|
client dies silently and does not put back job - **[Submitted to the original trac issue database at 2.09pm, Monday, 11th August 2008]**
When running a t@h-client I got the following messages during download (as I had problems with the internet connection):
Aug 9 23:09:52 vorlon tilesGen[1019]: [#2 0% Preproc] Downloading: Map data to /home/tiles/tilesAtHome3/tmp/data-1017-1-6.osm (slice 6 of 10)...
Aug 9 23:31:06 vorlon tilesGen[1019]: Transfer truncated: only 152881 out of 511379 bytes received
The client dies silently and does not put the job back to the server.
It would be better to put the job back and try the next one.
|
defect
|
client dies silently and does not put back job when running a t h client i got the following messages during download as i had problems with the internet connection aug vorlon tilesgen downloading map data to home tiles tmp data osm slice of aug vorlon tilesgen transfer truncated only out of bytes received the client dies silently and does not put the job back to the server it would be better to put the job back and try the next one
| 1
|
140,857
| 11,364,114,980
|
IssuesEvent
|
2020-01-27 07:15:34
|
terraform-providers/terraform-provider-google
|
https://api.github.com/repos/terraform-providers/terraform-provider-google
|
opened
|
Fix `_updateBigquerySink` tests
|
test failure
|
`TestAccLoggingBillingAccountSink_updateBigquerySink`
`TestAccLoggingFolderSink_updateBigquerySink`
`TestAccLoggingOrganizationSink_updateBigquerySink`
|
1.0
|
Fix `_updateBigquerySink` tests - `TestAccLoggingBillingAccountSink_updateBigquerySink`
`TestAccLoggingFolderSink_updateBigquerySink`
`TestAccLoggingOrganizationSink_updateBigquerySink`
|
non_defect
|
fix updatebigquerysink tests testaccloggingbillingaccountsink updatebigquerysink testaccloggingfoldersink updatebigquerysink testaccloggingorganizationsink updatebigquerysink
| 0
|
80,261
| 30,201,191,091
|
IssuesEvent
|
2023-07-05 05:56:29
|
vector-im/element-ios
|
https://api.github.com/repos/vector-im/element-ios
|
closed
|
Keyboard blocks text input on chats after pressing send
|
T-Defect A-Composer S-Major O-Uncommon os:iOS17
|
### Steps to reproduce
Steps to reproduce
1. Open element
2. open conversation
3. send a message
### Outcome
#### What did you expect?
Text input box to stay on top of keyboard to allow subsequent messages.
#### What happened instead?
Text input box drops behind keyboard and does not allow input without leaving the conversation and returning in. (Swiping down on chat doesn’t dismiss the keyboard)
### Your phone model
iPhone 13
### Operating system version
IOS 17 Beta 2
### Application version
Element 1.10.14
### Homeserver
Element Web 1.11.36
### Will you send logs?
No
|
1.0
|
Keyboard blocks text input on chats after pressing send - ### Steps to reproduce
Steps to reproduce
1. Open element
2. open conversation
3. send a message
### Outcome
#### What did you expect?
Text input box to stay on top of keyboard to allow subsequent messages.
#### What happened instead?
Text input box drops behind keyboard and does not allow input without leaving the conversation and returning in. (Swiping down on chat doesn’t dismiss the keyboard)
### Your phone model
iPhone 13
### Operating system version
IOS 17 Beta 2
### Application version
Element 1.10.14
### Homeserver
Element Web 1.11.36
### Will you send logs?
No
|
defect
|
keyboard blocks text input on chats after pressing send steps to reproduce steps to reproduce open element open conversation send a message outcome what did you expect text input box to stay on top of keyboard to allow subsequent messages what happened instead text input box drops behind keyboard and does not allow input without leaving the conversation and returning in swiping down on chat doesn’t dismiss the keyboard your phone model iphone operating system version ios beta application version element homeserver element web will you send logs no
| 1
|
27,089
| 5,313,424,325
|
IssuesEvent
|
2017-02-13 12:09:31
|
berlinonline/converjon
|
https://api.github.com/repos/berlinonline/converjon
|
closed
|
default.yml overwrites options from urls-defined-yml
|
documentation question
|
If the special yml (e.g. akzente.yml) with proper defined urls-section comes first and the default yml (without onw urls-section) (e.g. bo.yml) comes later:
Then all options from special yml are overwritten.
Test it:
* akzente.yml with constraints -> width -> max 1440
* bo.yml with constraints -> width -> max 1000
Then request image fpr special akzente with width 1440: you will get error "violated constraint width:1000"
|
1.0
|
default.yml overwrites options from urls-defined-yml - If the special yml (e.g. akzente.yml) with proper defined urls-section comes first and the default yml (without onw urls-section) (e.g. bo.yml) comes later:
Then all options from special yml are overwritten.
Test it:
* akzente.yml with constraints -> width -> max 1440
* bo.yml with constraints -> width -> max 1000
Then request image fpr special akzente with width 1440: you will get error "violated constraint width:1000"
|
non_defect
|
default yml overwrites options from urls defined yml if the special yml e g akzente yml with proper defined urls section comes first and the default yml without onw urls section e g bo yml comes later then all options from special yml are overwritten test it akzente yml with constraints width max bo yml with constraints width max then request image fpr special akzente with width you will get error violated constraint width
| 0
|
279,030
| 30,702,437,939
|
IssuesEvent
|
2023-07-27 01:30:12
|
panasalap/linux-4.1.15
|
https://api.github.com/repos/panasalap/linux-4.1.15
|
closed
|
CVE-2023-1074 (Medium) detected in linux179e72b561d3d331c850e1a5779688d7a7de5246 - autoclosed
|
Mend: dependency security vulnerability
|
## CVE-2023-1074 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux179e72b561d3d331c850e1a5779688d7a7de5246</b></p></summary>
<p>
<p>Linux kernel stable tree mirror</p>
<p>Library home page: <a href=https://github.com/gregkh/linux.git>https://github.com/gregkh/linux.git</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/sctp/bind_addr.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/sctp/bind_addr.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A memory leak flaw was found in the Linux kernel's Stream Control Transmission Protocol. This issue may occur when a user starts a malicious networking service and someone connects to this service. This could allow a local user to starve resources, causing a denial of service.
<p>Publish Date: 2023-03-27
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-1074>CVE-2023-1074</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2023-1074">https://www.linuxkernelcves.com/cves/CVE-2023-1074</a></p>
<p>Release Date: 2023-02-28</p>
<p>Fix Resolution: v4.14.305,v4.19.272,v5.4.231,v5.10.166,v5.15.91,v6.1.9,v6.2-rc6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2023-1074 (Medium) detected in linux179e72b561d3d331c850e1a5779688d7a7de5246 - autoclosed - ## CVE-2023-1074 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux179e72b561d3d331c850e1a5779688d7a7de5246</b></p></summary>
<p>
<p>Linux kernel stable tree mirror</p>
<p>Library home page: <a href=https://github.com/gregkh/linux.git>https://github.com/gregkh/linux.git</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/sctp/bind_addr.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/sctp/bind_addr.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A memory leak flaw was found in the Linux kernel's Stream Control Transmission Protocol. This issue may occur when a user starts a malicious networking service and someone connects to this service. This could allow a local user to starve resources, causing a denial of service.
<p>Publish Date: 2023-03-27
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-1074>CVE-2023-1074</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2023-1074">https://www.linuxkernelcves.com/cves/CVE-2023-1074</a></p>
<p>Release Date: 2023-02-28</p>
<p>Fix Resolution: v4.14.305,v4.19.272,v5.4.231,v5.10.166,v5.15.91,v6.1.9,v6.2-rc6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve medium detected in autoclosed cve medium severity vulnerability vulnerable library linux kernel stable tree mirror library home page a href found in base branch master vulnerable source files net sctp bind addr c net sctp bind addr c vulnerability details a memory leak flaw was found in the linux kernel s stream control transmission protocol this issue may occur when a user starts a malicious networking service and someone connects to this service this could allow a local user to starve resources causing a denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
286,939
| 8,796,302,750
|
IssuesEvent
|
2018-12-23 04:30:28
|
kubernetes/website
|
https://api.github.com/repos/kubernetes/website
|
closed
|
Issue with k8s.io/docs/setup/independent/high-availability/
|
lifecycle/rotten priority/awaiting-more-evidence sig/cluster-lifecycle
|
<!-- Thanks for filing an issue! Before submitting, please fill in the following information. -->
<!--Required Information-->
**This is a...**
<!-- choose one by changing [ ] to [x] -->
- [ ] Feature Request
- [x] Bug Report
**Problem:**
Getting in to issues while following the kubernetes HA cluster creation documentation. Creating the cluster in CentOS 7.5.1804.
kube* versions. v1.11.1
The cluster initialization command is running and its giving the following error.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
- No internet connection is available so the kubelet cannot pull or find the following control plane images:
- k8s.gcr.io/kube-apiserver-amd64:v1.11.0
- k8s.gcr.io/kube-controller-manager-amd64:v1.11.0
- k8s.gcr.io/kube-scheduler-amd64:v1.11.0
- k8s.gcr.io/etcd-amd64:3.2.18
- You can check or miligate this in beforehand with "kubeadm config images pull" to make sure the images
are downloaded locally and cached.
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster
The "kube-apiserver" container logs has the error "F0725 23:41:25.373090 1 storage_decorator.go:57] Unable to create storage backend: config (&{ /registry [https://127.0.0.1:2379] /etc/kubernetes/pki/apiserver-etcd-client.key /etc/kubernetes/pki/apiserver-etcd-client.crt /etc/kubernetes/pki/etcd/ca.crt true false 1000 0xc420244f00 <nil> 5m0s 1m0s}), err (dial tcp 127.0.0.1:2379: connect: connection refused)"
The "kube-controller" container log has this information "failed to create listener: failed to listen on 127.0.0.1:10252: listen tcp 127.0.0.1:10252: bind: address already in use"
**Proposed Solution:**
**Page to Update:**
https://kubernetes.io/...
<!--Optional Information (remove the comment tags around information you would like to include)-->
<!--Kubernetes Version:-->
<!--Additional Information:-->
|
1.0
|
Issue with k8s.io/docs/setup/independent/high-availability/ - <!-- Thanks for filing an issue! Before submitting, please fill in the following information. -->
<!--Required Information-->
**This is a...**
<!-- choose one by changing [ ] to [x] -->
- [ ] Feature Request
- [x] Bug Report
**Problem:**
Getting in to issues while following the kubernetes HA cluster creation documentation. Creating the cluster in CentOS 7.5.1804.
kube* versions. v1.11.1
The cluster initialization command is running and its giving the following error.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
- No internet connection is available so the kubelet cannot pull or find the following control plane images:
- k8s.gcr.io/kube-apiserver-amd64:v1.11.0
- k8s.gcr.io/kube-controller-manager-amd64:v1.11.0
- k8s.gcr.io/kube-scheduler-amd64:v1.11.0
- k8s.gcr.io/etcd-amd64:3.2.18
- You can check or miligate this in beforehand with "kubeadm config images pull" to make sure the images
are downloaded locally and cached.
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster
The "kube-apiserver" container logs has the error "F0725 23:41:25.373090 1 storage_decorator.go:57] Unable to create storage backend: config (&{ /registry [https://127.0.0.1:2379] /etc/kubernetes/pki/apiserver-etcd-client.key /etc/kubernetes/pki/apiserver-etcd-client.crt /etc/kubernetes/pki/etcd/ca.crt true false 1000 0xc420244f00 <nil> 5m0s 1m0s}), err (dial tcp 127.0.0.1:2379: connect: connection refused)"
The "kube-controller" container log has this information "failed to create listener: failed to listen on 127.0.0.1:10252: listen tcp 127.0.0.1:10252: bind: address already in use"
**Proposed Solution:**
**Page to Update:**
https://kubernetes.io/...
<!--Optional Information (remove the comment tags around information you would like to include)-->
<!--Kubernetes Version:-->
<!--Additional Information:-->
|
non_defect
|
issue with io docs setup independent high availability this is a feature request bug report problem getting in to issues while following the kubernetes ha cluster creation documentation creating the cluster in centos kube versions the cluster initialization command is running and its giving the following error the http call equal to curl ssl failed with error get dial tcp connect connection refused unfortunately an error has occurred timed out waiting for the condition this error is likely caused by the kubelet is not running the kubelet is unhealthy due to a misconfiguration of the node in some way required cgroups disabled no internet connection is available so the kubelet cannot pull or find the following control plane images gcr io kube apiserver gcr io kube controller manager gcr io kube scheduler gcr io etcd you can check or miligate this in beforehand with kubeadm config images pull to make sure the images are downloaded locally and cached if you are on a systemd powered system you can try to troubleshoot the error with the following commands systemctl status kubelet journalctl xeu kubelet additionally a control plane component may have crashed or exited when started by the container runtime to troubleshoot list all containers using your preferred container runtimes cli e g docker here is one example how you may list all kubernetes containers running in docker docker ps a grep kube grep v pause once you have found the failing container you can inspect its logs with docker logs containerid couldn t initialize a kubernetes cluster the kube apiserver container logs has the error storage decorator go unable to create storage backend config registry etc kubernetes pki apiserver etcd client key etc kubernetes pki apiserver etcd client crt etc kubernetes pki etcd ca crt true false err dial tcp connect connection refused the kube controller container log has this information failed to create listener failed to listen on listen tcp bind address already in use proposed solution page to update
| 0
|
78,518
| 27,567,216,642
|
IssuesEvent
|
2023-03-08 05:32:15
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
opened
|
Words of messages in some languages on search result not highlighted
|
T-Defect
|
### Steps to reproduce
Run the test below on `timeline.spec.ts`.
````
it("should highlight search result words of various languages", () => {
// "Test" in Arabic, Hebrew, and Hindi
const stringAr = "اِمْتِحَان";
const stringHe = "מִבְחָן";
const stringHi = "आज़माइश";
cy.visit("/#/room/" + roomId);
// Wait until configuration is finished
cy.contains(
".mx_RoomView_body .mx_GenericEventListSummary .mx_GenericEventListSummary_summary",
"created and configured the room.",
).should("exist");
// Arabic
cy.sendEvent(roomId, null, "m.room.message" as EventType, {
msgtype: "m.text" as MsgType,
body: stringAr,
});
// Hebrew
cy.sendEvent(roomId, null, "m.room.message" as EventType, {
msgtype: "m.text" as MsgType,
body: stringHe,
});
// Hindi
cy.sendEvent(roomId, null, "m.room.message" as EventType, {
msgtype: "m.text" as MsgType,
body: stringHi,
});
// Ensure the last message was sent
cy.get(".mx_EventTile_last .mx_EventTile_receiptSent").should("be.visible");
cy.get(".mx_RoomHeader_searchButton").click();
// Check stringAr is highlighted
cy.get(".mx_SearchBar_input input").clear().invoke("val", stringAr).trigger("input");
cy.get(".mx_SearchBar_input input").type("{enter}");
cy.get(".mx_EventTile:not(.mx_EventTile_contextual) .mx_EventTile_searchHighlight").should("exist");
// Check stringHe is highlighted
cy.get(".mx_SearchBar_input input").clear().invoke("val", stringHe).trigger("input");
cy.get(".mx_SearchBar_input input").type("{enter}");
cy.get(".mx_EventTile:not(.mx_EventTile_contextual) .mx_EventTile_searchHighlight").should("exist");
// Check stringHi is highlighted
cy.get(".mx_SearchBar_input input").clear().invoke("val", stringHi).trigger("input");
cy.get(".mx_SearchBar_input input").type("{enter}");
cy.get(".mx_EventTile:not(.mx_EventTile_contextual) .mx_EventTile_searchHighlight").should("exist");
});
````
### Outcome
#### What did you expect?
The test should run successfully.
#### What happened instead?
It fails due to no results being found.
Please note that the similar test for other non-European languages like Chinese, Japanese, and Korean passes. Also the test for the Hebrew string without symbols (מבחן) also passes as well.
### Operating system
Debian
### Browser information
Electron (cypress)
### URL for webapp
_No response_
### Application version
develop branch
### Homeserver
_No response_
### Will you send logs?
No
|
1.0
|
Words of messages in some languages on search result not highlighted - ### Steps to reproduce
Run the test below on `timeline.spec.ts`.
````
it("should highlight search result words of various languages", () => {
// "Test" in Arabic, Hebrew, and Hindi
const stringAr = "اِمْتِحَان";
const stringHe = "מִבְחָן";
const stringHi = "आज़माइश";
cy.visit("/#/room/" + roomId);
// Wait until configuration is finished
cy.contains(
".mx_RoomView_body .mx_GenericEventListSummary .mx_GenericEventListSummary_summary",
"created and configured the room.",
).should("exist");
// Arabic
cy.sendEvent(roomId, null, "m.room.message" as EventType, {
msgtype: "m.text" as MsgType,
body: stringAr,
});
// Hebrew
cy.sendEvent(roomId, null, "m.room.message" as EventType, {
msgtype: "m.text" as MsgType,
body: stringHe,
});
// Hindi
cy.sendEvent(roomId, null, "m.room.message" as EventType, {
msgtype: "m.text" as MsgType,
body: stringHi,
});
// Ensure the last message was sent
cy.get(".mx_EventTile_last .mx_EventTile_receiptSent").should("be.visible");
cy.get(".mx_RoomHeader_searchButton").click();
// Check stringAr is highlighted
cy.get(".mx_SearchBar_input input").clear().invoke("val", stringAr).trigger("input");
cy.get(".mx_SearchBar_input input").type("{enter}");
cy.get(".mx_EventTile:not(.mx_EventTile_contextual) .mx_EventTile_searchHighlight").should("exist");
// Check stringHe is highlighted
cy.get(".mx_SearchBar_input input").clear().invoke("val", stringHe).trigger("input");
cy.get(".mx_SearchBar_input input").type("{enter}");
cy.get(".mx_EventTile:not(.mx_EventTile_contextual) .mx_EventTile_searchHighlight").should("exist");
// Check stringHi is highlighted
cy.get(".mx_SearchBar_input input").clear().invoke("val", stringHi).trigger("input");
cy.get(".mx_SearchBar_input input").type("{enter}");
cy.get(".mx_EventTile:not(.mx_EventTile_contextual) .mx_EventTile_searchHighlight").should("exist");
});
````
### Outcome
#### What did you expect?
The test should run successfully.
#### What happened instead?
It fails due to no results being found.
Please note that the similar test for other non-European languages like Chinese, Japanese, and Korean passes. Also the test for the Hebrew string without symbols (מבחן) also passes as well.
### Operating system
Debian
### Browser information
Electron (cypress)
### URL for webapp
_No response_
### Application version
develop branch
### Homeserver
_No response_
### Will you send logs?
No
|
defect
|
words of messages in some languages on search result not highlighted steps to reproduce run the test below on timeline spec ts it should highlight search result words of various languages test in arabic hebrew and hindi const stringar اِمْتِحَان const stringhe מִבְחָן const stringhi आज़माइश cy visit room roomid wait until configuration is finished cy contains mx roomview body mx genericeventlistsummary mx genericeventlistsummary summary created and configured the room should exist arabic cy sendevent roomid null m room message as eventtype msgtype m text as msgtype body stringar hebrew cy sendevent roomid null m room message as eventtype msgtype m text as msgtype body stringhe hindi cy sendevent roomid null m room message as eventtype msgtype m text as msgtype body stringhi ensure the last message was sent cy get mx eventtile last mx eventtile receiptsent should be visible cy get mx roomheader searchbutton click check stringar is highlighted cy get mx searchbar input input clear invoke val stringar trigger input cy get mx searchbar input input type enter cy get mx eventtile not mx eventtile contextual mx eventtile searchhighlight should exist check stringhe is highlighted cy get mx searchbar input input clear invoke val stringhe trigger input cy get mx searchbar input input type enter cy get mx eventtile not mx eventtile contextual mx eventtile searchhighlight should exist check stringhi is highlighted cy get mx searchbar input input clear invoke val stringhi trigger input cy get mx searchbar input input type enter cy get mx eventtile not mx eventtile contextual mx eventtile searchhighlight should exist outcome what did you expect the test should run successfully what happened instead it fails due to no results being found please note that the similar test for other non european languages like chinese japanese and korean passes also the test for the hebrew string without symbols מבחן also passes as well operating system debian browser information electron cypress url for webapp no response application version develop branch homeserver no response will you send logs no
| 1
|
121,157
| 10,151,919,871
|
IssuesEvent
|
2019-08-05 21:42:11
|
longhorn/longhorn
|
https://api.github.com/repos/longhorn/longhorn
|
closed
|
Disable base image feature for now
|
area/manager area/test area/ui impact upgrade
|
We need to disable base image for manager and manager integration test since the new deployment model doesn't support it currently.
We will reintroduce it later.
|
1.0
|
Disable base image feature for now - We need to disable base image for manager and manager integration test since the new deployment model doesn't support it currently.
We will reintroduce it later.
|
non_defect
|
disable base image feature for now we need to disable base image for manager and manager integration test since the new deployment model doesn t support it currently we will reintroduce it later
| 0
|
357,258
| 10,604,213,738
|
IssuesEvent
|
2019-10-10 17:41:19
|
mozilla-lockwise/lockwise-android
|
https://api.github.com/repos/mozilla-lockwise/lockwise-android
|
closed
|
Disable edit host name
|
effort: XS priority: P1 type: task
|
For the next release, let's disable editing the host name and preventing users from creating duplicates.
We'll want to revisit this functionality in #948
|
1.0
|
Disable edit host name - For the next release, let's disable editing the host name and preventing users from creating duplicates.
We'll want to revisit this functionality in #948
|
non_defect
|
disable edit host name for the next release let s disable editing the host name and preventing users from creating duplicates we ll want to revisit this functionality in
| 0
|
584,979
| 17,468,012,600
|
IssuesEvent
|
2021-08-06 20:03:29
|
WebDevJBR/volunteer-points-log
|
https://api.github.com/repos/WebDevJBR/volunteer-points-log
|
closed
|
Swap search bar parameters
|
High Priority
|
User Portal >> Search Page
Currently, the search bar will only pull up items that have fields that begin with the text entered into the search bar. This needs to be swapped to include items that have fields that contain the text that is entered.
Example:
"Short" is entered into the search bar.
"Malakye Short" exists in the DB but isn't pulled since it doesn't begin with "Short"
|
1.0
|
Swap search bar parameters - User Portal >> Search Page
Currently, the search bar will only pull up items that have fields that begin with the text entered into the search bar. This needs to be swapped to include items that have fields that contain the text that is entered.
Example:
"Short" is entered into the search bar.
"Malakye Short" exists in the DB but isn't pulled since it doesn't begin with "Short"
|
non_defect
|
swap search bar parameters user portal search page currently the search bar will only pull up items that have fields that begin with the text entered into the search bar this needs to be swapped to include items that have fields that contain the text that is entered example short is entered into the search bar malakye short exists in the db but isn t pulled since it doesn t begin with short
| 0
|
238,467
| 26,115,980,918
|
IssuesEvent
|
2022-12-28 06:10:18
|
pingcap/docs
|
https://api.github.com/repos/pingcap/docs
|
closed
|
encryption-and-compression-functions: list of supported functions is incorrect
|
area/security
|
## Error Report
Page: https://docs.pingcap.com/tidb/stable/encryption-and-compression-functions
Related: https://github.com/pingcap/tidb/issues/2632
Some functions that are only available in MySQL Enterprise have not yet been implemented in TiDB. However many of these are listed in the _Supported functions_ section.
Possible solutions:
- Rename the section and add a column to list a function as supported, unsupported or deprecated.
- Remove the asymmetric encryption functions from the list of supported functions
- Implement the functions in TiDB
```
sql> SELECT tidb_version()\G
*************************** 1. row ***************************
tidb_version(): Release Version: v6.4.0
Edition: Community
Git Commit Hash: cf36a9ce2fe1039db3cf3444d51930b887df18a1
Git Branch: heads/refs/tags/v6.4.0
UTC Build Time: 2022-11-13 05:25:30
GoVersion: go1.19.2
Race Enabled: false
TiKV Min Version: 6.2.0-alpha
Check Table Before Drop: false
Store: tikv
1 row in set (0.0012 sec)
sql> SELECT create_dh_parameters(1024);
ERROR: 1305 (42000): FUNCTION test.create_dh_parameters does not exist
```
|
True
|
encryption-and-compression-functions: list of supported functions is incorrect - ## Error Report
Page: https://docs.pingcap.com/tidb/stable/encryption-and-compression-functions
Related: https://github.com/pingcap/tidb/issues/2632
Some functions that are only available in MySQL Enterprise have not yet been implemented in TiDB. However many of these are listed in the _Supported functions_ section.
Possible solutions:
- Rename the section and add a column to list a function as supported, unsupported or deprecated.
- Remove the asymmetric encryption functions from the list of supported functions
- Implement the functions in TiDB
```
sql> SELECT tidb_version()\G
*************************** 1. row ***************************
tidb_version(): Release Version: v6.4.0
Edition: Community
Git Commit Hash: cf36a9ce2fe1039db3cf3444d51930b887df18a1
Git Branch: heads/refs/tags/v6.4.0
UTC Build Time: 2022-11-13 05:25:30
GoVersion: go1.19.2
Race Enabled: false
TiKV Min Version: 6.2.0-alpha
Check Table Before Drop: false
Store: tikv
1 row in set (0.0012 sec)
sql> SELECT create_dh_parameters(1024);
ERROR: 1305 (42000): FUNCTION test.create_dh_parameters does not exist
```
|
non_defect
|
encryption and compression functions list of supported functions is incorrect error report page related some functions that are only available in mysql enterprise have not yet been implemented in tidb however many of these are listed in the supported functions section possible solutions rename the section and add a column to list a function as supported unsupported or deprecated remove the asymmetric encryption functions from the list of supported functions implement the functions in tidb sql select tidb version g row tidb version release version edition community git commit hash git branch heads refs tags utc build time goversion race enabled false tikv min version alpha check table before drop false store tikv row in set sec sql select create dh parameters error function test create dh parameters does not exist
| 0
|
129,945
| 12,421,131,716
|
IssuesEvent
|
2020-05-23 15:25:03
|
MrMic/mon_site_web
|
https://api.github.com/repos/MrMic/mon_site_web
|
closed
|
Il manque le fichier readme
|
documentation
|
Pas de Readme dans le projet. On ne comprend pas ce que fait le projet.
|
1.0
|
Il manque le fichier readme - Pas de Readme dans le projet. On ne comprend pas ce que fait le projet.
|
non_defect
|
il manque le fichier readme pas de readme dans le projet on ne comprend pas ce que fait le projet
| 0
|
350,185
| 24,972,036,148
|
IssuesEvent
|
2022-11-02 02:42:09
|
WordPress/Documentation-Issue-Tracker
|
https://api.github.com/repos/WordPress/Documentation-Issue-Tracker
|
closed
|
Twenty Twenty-Three: User Support Guide
|
user documentation new document high priority 6.1
|
## What is the new page you are requesting?
A user guide for the Twenty Twenty-Three theme and its style variations.
## How will this new page help you?
With a new default theme coming with WordPress 6.1, users will find a getting-started guide helpful. It could include information like:
- How Twenty Twenty-Three works with Full Site Editing, including what templates and patterns are included
- How style variations work and what the included style variations look like
- How to make common customizations
- An overview of some of the features in TT3 that take advantage of WP 6.1 improvements, such as fluid typography
In terms of the format, we can draw inspiration from this existing guide for TT2:
https://wordpress.org/support/article/twenty-twenty-two/
## General
- [x] Make sure all screenshots are relevant to the latest version
- [x] Make sure videos are up to date, if any
- [x] Add ALT tags for the images
- [x] Make sure the headings are in sentence case
- [x] Convert all reusable blocks to a ‘regular block’.
|
1.0
|
Twenty Twenty-Three: User Support Guide - ## What is the new page you are requesting?
A user guide for the Twenty Twenty-Three theme and its style variations.
## How will this new page help you?
With a new default theme coming with WordPress 6.1, users will find a getting-started guide helpful. It could include information like:
- How Twenty Twenty-Three works with Full Site Editing, including what templates and patterns are included
- How style variations work and what the included style variations look like
- How to make common customizations
- An overview of some of the features in TT3 that take advantage of WP 6.1 improvements, such as fluid typography
In terms of the format, we can draw inspiration from this existing guide for TT2:
https://wordpress.org/support/article/twenty-twenty-two/
## General
- [x] Make sure all screenshots are relevant to the latest version
- [x] Make sure videos are up to date, if any
- [x] Add ALT tags for the images
- [x] Make sure the headings are in sentence case
- [x] Convert all reusable blocks to a ‘regular block’.
|
non_defect
|
twenty twenty three user support guide what is the new page you are requesting a user guide for the twenty twenty three theme and its style variations how will this new page help you with a new default theme coming with wordpress users will find a getting started guide helpful it could include information like how twenty twenty three works with full site editing including what templates and patterns are included how style variations work and what the included style variations look like how to make common customizations an overview of some of the features in that take advantage of wp improvements such as fluid typography in terms of the format we can draw inspiration from this existing guide for general make sure all screenshots are relevant to the latest version make sure videos are up to date if any add alt tags for the images make sure the headings are in sentence case convert all reusable blocks to a ‘regular block’
| 0
|
20,678
| 3,398,371,761
|
IssuesEvent
|
2015-12-02 03:12:20
|
cakephp/cakephp
|
https://api.github.com/repos/cakephp/cakephp
|
closed
|
re-execute query
|
Defect
|
Not sure if this is the desired behavior but I didn't find something in the docs about the issue.
The following example will end in an endless loop:
```php
$this->loadModel('Variations');
$query = $this->Variations->find()
->contain(['Products', 'Crawlers'])
->limit(100);
$page = 1;
while($variations = $query->page($page)->toArray()) {
$page++;
}
```
|
1.0
|
re-execute query - Not sure if this is the desired behavior but I didn't find something in the docs about the issue.
The following example will end in an endless loop:
```php
$this->loadModel('Variations');
$query = $this->Variations->find()
->contain(['Products', 'Crawlers'])
->limit(100);
$page = 1;
while($variations = $query->page($page)->toArray()) {
$page++;
}
```
|
defect
|
re execute query not sure if this is the desired behavior but i didn t find something in the docs about the issue the following example will end in an endless loop php this loadmodel variations query this variations find contain limit page while variations query page page toarray page
| 1
|
23,198
| 3,776,050,705
|
IssuesEvent
|
2016-03-17 15:30:43
|
buildo/github-prettifier
|
https://api.github.com/repos/buildo/github-prettifier
|
opened
|
WIP/InReview and SyncIssueLabels tasks may get in conflict and cause an infinite loop
|
defect
|
## description
If, for some reason, `WIP/InReview` and `SyncIssueLabels` have systematically different outcomes (ex: one adds "WIP" and removes "InReview" while the other does the opposite) the prettifier enters an infinite loop.
This shouldn't happen with a correct usage of our flow, but we should avoid it by design nevertheless
## how to reproduce
- open a PR associated to an issue with `hophop gh pr`
- merge the PR on master using the terminal
- in the merge commit **don't** put "(closes #{ISSUE_ID})
## specs
`SyncIssueLabels` should ignore "WIP" and "InReview" labels as they should be edited only by `WIP/InReview` task
## misc
{optional: other useful info}
|
1.0
|
WIP/InReview and SyncIssueLabels tasks may get in conflict and cause an infinite loop - ## description
If, for some reason, `WIP/InReview` and `SyncIssueLabels` have systematically different outcomes (ex: one adds "WIP" and removes "InReview" while the other does the opposite) the prettifier enters an infinite loop.
This shouldn't happen with a correct usage of our flow, but we should avoid it by design nevertheless
## how to reproduce
- open a PR associated to an issue with `hophop gh pr`
- merge the PR on master using the terminal
- in the merge commit **don't** put "(closes #{ISSUE_ID})
## specs
`SyncIssueLabels` should ignore "WIP" and "InReview" labels as they should be edited only by `WIP/InReview` task
## misc
{optional: other useful info}
|
defect
|
wip inreview and syncissuelabels tasks may get in conflict and cause an infinite loop description if for some reason wip inreview and syncissuelabels have systematically different outcomes ex one adds wip and removes inreview while the other does the opposite the prettifier enters an infinite loop this shouldn t happen with a correct usage of our flow but we should avoid it by design nevertheless how to reproduce open a pr associated to an issue with hophop gh pr merge the pr on master using the terminal in the merge commit don t put closes issue id specs syncissuelabels should ignore wip and inreview labels as they should be edited only by wip inreview task misc optional other useful info
| 1
|
1,948
| 2,603,973,903
|
IssuesEvent
|
2015-02-24 19:00:58
|
chrsmith/nishazi6
|
https://api.github.com/repos/chrsmith/nishazi6
|
opened
|
沈阳沈阳怎样治疱疹
|
auto-migrated Priority-Medium Type-Defect
|
```
沈阳沈阳怎样治疱疹〓沈陽軍區政治部醫院性病〓TEL:024-3102
3308〓成立于1946年,68年專注于性傳播疾病的研究和治療。位�
��沈陽市沈河區二緯路32號。是一所與新中國同建立共輝煌的�
��史悠久、設備精良、技術權威、專家云集,是預防、保健、
醫療、科研康復為一體的綜合性醫院。是國家首批公立甲等��
�隊醫院、全國首批醫療規范定點單位,是第四軍醫大學、東�
��大學等知名高等院校的教學醫院。曾被中國人民解放軍空軍
后勤部衛生部評為衛生工作先進單位,先后兩次榮立集體二��
�功。
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:08
|
1.0
|
沈阳沈阳怎样治疱疹 - ```
沈阳沈阳怎样治疱疹〓沈陽軍區政治部醫院性病〓TEL:024-3102
3308〓成立于1946年,68年專注于性傳播疾病的研究和治療。位�
��沈陽市沈河區二緯路32號。是一所與新中國同建立共輝煌的�
��史悠久、設備精良、技術權威、專家云集,是預防、保健、
醫療、科研康復為一體的綜合性醫院。是國家首批公立甲等��
�隊醫院、全國首批醫療規范定點單位,是第四軍醫大學、東�
��大學等知名高等院校的教學醫院。曾被中國人民解放軍空軍
后勤部衛生部評為衛生工作先進單位,先后兩次榮立集體二��
�功。
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:08
|
defect
|
沈阳沈阳怎样治疱疹 沈阳沈阳怎样治疱疹〓沈陽軍區政治部醫院性病〓tel: 〓 , 。位� �� 。是一所與新中國同建立共輝煌的� ��史悠久、設備精良、技術權威、專家云集,是預防、保健、 醫療、科研康復為一體的綜合性醫院。是國家首批公立甲等�� �隊醫院、全國首批醫療規范定點單位,是第四軍醫大學、東� ��大學等知名高等院校的教學醫院。曾被中國人民解放軍空軍 后勤部衛生部評為衛生工作先進單位,先后兩次榮立集體二�� �功。 original issue reported on code google com by gmail com on jun at
| 1
|
81,772
| 31,563,254,698
|
IssuesEvent
|
2023-09-03 14:04:07
|
nats-io/nats-server
|
https://api.github.com/repos/nats-io/nats-server
|
opened
|
Frequent leader election
|
defect
|
### What version were you using?
Nats v2.9.21 deployed with k8s helm chart 1.0.2
### What environment was the server running in?
3 nodes cluster on kubenetes on gke
### Is this defect reproducible?
Currently, I'm experiencing this issue in a nats cluster, but I was not able to reproduce it in a controlled way.
The nats cluster is subject to very frequent leader election on multiple stream/consumers, this is currenly happening about every 10 minuts and seems to be releated to messages published on a specific stream.
Here is some logs of our cluster
```
nats-1 nats [7] 2023/09/03 13:33:13.255560 [WRN] 10.60.5.156:33246 - cid:78 - Readloop processing time: 2.163297052s
nats-1 nats [7] 2023/09/03 13:33:14.993498 [INF] JetStream cluster new metadata leader: nats-2/nats
nats-1 nats [7] 2023/09/03 13:33:15.036558 [INF] JetStream cluster new consumer leader for '$G > ingestion-incidents > ingestion-ingestor'
nats-1 nats [7] 2023/09/03 13:33:16.239724 [INF] JetStream cluster new consumer leader for '$G > ingestion-raw > ingestion-raw-connector'
nats-1 nats [7] 2023/09/03 13:33:16.286293 [INF] JetStream cluster new consumer leader for '$G > ingestion-raw > ingestion-raw-connector'
nats-1 nats [7] 2023/09/03 13:33:16.286508 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dnsfuzz-sub-ingestion'
nats-1 nats [7] 2023/09/03 13:33:16.286966 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dns-fuzz-notification'
nats-1 nats [7] 2023/09/03 13:33:16.287024 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dns-fuzz-notification'
nats-1 nats [7] 2023/09/03 13:33:16.287062 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dnsfuzz-sub-ingestion'
nats-1 nats [7] 2023/09/03 13:33:16.287353 [INF] JetStream cluster new consumer leader for '$G > events > otrs-connector-events'
nats-1 nats [7] 2023/09/03 13:33:16.287422 [INF] JetStream cluster new consumer leader for '$G > events > webhooks-creator'
nats-1 nats [7] 2023/09/03 13:33:16.287470 [INF] JetStream cluster new consumer leader for '$G > events > alerts-reader-for-ticket-closed'
nats-1 nats [7] 2023/09/03 13:33:16.287516 [INF] JetStream cluster new consumer leader for '$G > events > telegram-alerts-tickets-created'
nats-1 nats [7] 2023/09/03 13:33:16.287561 [INF] JetStream cluster new consumer leader for '$G > events > webhooks-creator'
nats-1 nats [7] 2023/09/03 13:33:16.287603 [INF] JetStream cluster new consumer leader for '$G > events > alerts-reader-for-ticket-closed'
nats-1 nats [7] 2023/09/03 13:33:16.287649 [INF] JetStream cluster new consumer leader for '$G > events > telegram-alerts-tickets-created'
nats-1 nats [7] 2023/09/03 13:33:16.287691 [INF] JetStream cluster new consumer leader for '$G > events > otrs-connector-events'
nats-1 nats [7] 2023/09/03 13:33:16.287739 [INF] JetStream cluster new consumer leader for '$G > alerts-telegram > telegram-sender'
nats-1 nats [7] 2023/09/03 13:33:16.287795 [INF] JetStream cluster new consumer leader for '$G > alerts-telegram > telegram-sender'
nats-1 nats [7] 2023/09/03 13:33:16.287950 [INF] JetStream cluster new consumer leader for '$G > alerts-digest > digests-sender'
nats-1 nats [7] 2023/09/03 13:33:16.288006 [INF] JetStream cluster new consumer leader for '$G > alerts-digest > digests-sender'
nats-1 nats [7] 2023/09/03 13:36:13.739598 [INF] JetStream cluster new metadata leader: nats-0/nats
nats-1 nats [7] 2023/09/03 13:36:13.818959 [INF] JetStream cluster new consumer leader for '$G > alerts-digest > digests-sender'
nats-1 nats [7] 2023/09/03 13:36:13.819163 [INF] JetStream cluster new consumer leader for '$G > alerts-digest > digests-sender'
nats-1 nats [7] 2023/09/03 13:36:13.828987 [INF] JetStream cluster new consumer leader for '$G > ingestion-raw > ingestion-raw-connector'
nats-1 nats [7] 2023/09/03 13:36:13.829572 [INF] JetStream cluster new consumer leader for '$G > ingestion-raw > ingestion-raw-connector'
nats-1 nats [7] 2023/09/03 13:36:13.829938 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dnsfuzz-sub-ingestion'
nats-1 nats [7] 2023/09/03 13:36:13.830436 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dns-fuzz-notification'
nats-1 nats [7] 2023/09/03 13:36:13.830705 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dns-fuzz-notification'
nats-1 nats [7] 2023/09/03 13:36:13.830966 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dnsfuzz-sub-ingestion'
nats-1 nats [7] 2023/09/03 13:36:13.831390 [INF] JetStream cluster new consumer leader for '$G > events > alerts-reader-for-ticket-closed'
nats-1 nats [7] 2023/09/03 13:36:13.831631 [INF] JetStream cluster new consumer leader for '$G > events > telegram-alerts-tickets-created'
nats-1 nats [7] 2023/09/03 13:36:13.831840 [INF] JetStream cluster new consumer leader for '$G > events > otrs-connector-events'
nats-1 nats [7] 2023/09/03 13:36:13.832012 [INF] JetStream cluster new consumer leader for '$G > events > webhooks-creator'
nats-1 nats [7] 2023/09/03 13:36:13.832211 [INF] JetStream cluster new consumer leader for '$G > events > otrs-connector-events'
nats-1 nats [7] 2023/09/03 13:36:13.834015 [INF] JetStream cluster new consumer leader for '$G > events > webhooks-creator'
nats-1 nats [7] 2023/09/03 13:36:13.834296 [INF] JetStream cluster new consumer leader for '$G > events > alerts-reader-for-ticket-closed'
nats-1 nats [7] 2023/09/03 13:36:13.834480 [INF] JetStream cluster new consumer leader for '$G > events > telegram-alerts-tickets-created'
nats-1 nats [7] 2023/09/03 13:36:13.834863 [INF] JetStream cluster new consumer leader for '$G > alerts-telegram > telegram-sender'
nats-1 nats [7] 2023/09/03 13:36:13.837032 [INF] JetStream cluster new consumer leader for '$G > alerts-telegram > telegram-sender'
nats-1 nats [7] 2023/09/03 13:39:10.193660 [WRN] 10.60.5.156:33246 - cid:78 - Readloop processing time: 2.04271573s
nats-1 nats [7] 2023/09/03 13:39:14.964867 [INF] JetStream cluster new stream leader for '$G > ingestion-incidents'
nats-1 nats [7] 2023/09/03 13:39:14.972078 [INF] JetStream cluster new stream leader for '$G > ingestor-new-incidents'
nats-1 nats [7] 2023/09/03 13:39:14.975284 [INF] JetStream cluster new stream leader for '$G > simple-ingestion'
nats-1 nats [7] 2023/09/03 13:39:18.828326 [INF] JetStream cluster new consumer leader for '$G > ingestion-raw > ingestion-raw-connector'
nats-1 nats [7] 2023/09/03 13:39:18.828669 [INF] JetStream cluster new consumer leader for '$G > ingestion-raw > ingestion-raw-connector'
nats-1 nats [7] 2023/09/03 13:39:18.829380 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dns-fuzz-notification'
nats-1 nats [7] 2023/09/03 13:39:18.829631 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dnsfuzz-sub-ingestion'
nats-1 nats [7] 2023/09/03 13:39:18.830150 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dns-fuzz-notification'
nats-1 nats [7] 2023/09/03 13:39:18.830350 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dnsfuzz-sub-ingestion'
nats-1 nats [7] 2023/09/03 13:39:18.831230 [INF] JetStream cluster new consumer leader for '$G > events > webhooks-creator'
nats-1 nats [7] 2023/09/03 13:39:18.831663 [INF] JetStream cluster new consumer leader for '$G > events > alerts-reader-for-ticket-closed'
nats-1 nats [7] 2023/09/03 13:39:18.831935 [INF] JetStream cluster new consumer leader for '$G > events > telegram-alerts-tickets-created'
nats-1 nats [7] 2023/09/03 13:39:18.832217 [INF] JetStream cluster new consumer leader for '$G > events > otrs-connector-events'
nats-1 nats [7] 2023/09/03 13:39:18.832480 [INF] JetStream cluster new consumer leader for '$G > events > telegram-alerts-tickets-created'
nats-1 nats [7] 2023/09/03 13:39:18.832675 [INF] JetStream cluster new consumer leader for '$G > events > otrs-connector-events'
nats-1 nats [7] 2023/09/03 13:39:18.832899 [INF] JetStream cluster new consumer leader for '$G > events > webhooks-creator'
nats-1 nats [7] 2023/09/03 13:39:18.833964 [INF] JetStream cluster new consumer leader for '$G > events > alerts-reader-for-ticket-closed'
nats-1 nats [7] 2023/09/03 13:39:18.834085 [INF] JetStream cluster new consumer leader for '$G > alerts-telegram > telegram-sender'
nats-1 nats [7] 2023/09/03 13:39:18.834145 [INF] JetStream cluster new consumer leader for '$G > alerts-telegram > telegram-sender'
nats-1 nats [7] 2023/09/03 13:39:18.834196 [INF] JetStream cluster new consumer leader for '$G > alerts-digest > digests-sender'
nats-1 nats [7] 2023/09/03 13:39:18.834244 [INF] JetStream cluster new consumer leader for '$G > alerts-digest > digests-sender'
nats-1 nats [7] 2023/09/03 13:45:06.079420 [WRN] 10.60.0.164:43324 - cid:79 - Readloop processing time: 4.45332426s
nats-1 nats [7] 2023/09/03 13:45:09.536323 [WRN] 10.60.5.129:6222 - rid:62 - Readloop processing time: 6.020101401s
nats-1 nats [7] 2023/09/03 13:45:09.595332 [WRN] Healthcheck failed: "JetStream has not established contact with a meta leader"
nats-1 nats [7] 2023/09/03 13:45:14.495095 [INF] JetStream cluster new consumer leader for '$G > ingestion-incidents > ingestion-ingestor'
nats-1 nats [7] 2023/09/03 13:45:14.595723 [INF] Self is new JetStream cluster metadata leader
nats-1 nats [7] 2023/09/03 13:45:14.866566 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dns-fuzz-notification'
nats-1 nats [7] 2023/09/03 13:45:14.866691 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dnsfuzz-sub-ingestion'
nats-1 nats [7] 2023/09/03 13:45:14.867379 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dns-fuzz-notification'
nats-1 nats [7] 2023/09/03 13:45:14.867454 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dnsfuzz-sub-ingestion'
nats-1 nats [7] 2023/09/03 13:45:14.868072 [INF] JetStream cluster new consumer leader for '$G > events > otrs-connector-events'
nats-1 nats [7] 2023/09/03 13:45:14.868218 [INF] JetStream cluster new consumer leader for '$G > events > webhooks-creator'
nats-1 nats [7] 2023/09/03 13:45:14.868286 [INF] JetStream cluster new consumer leader for '$G > events > alerts-reader-for-ticket-closed'
nats-1 nats [7] 2023/09/03 13:45:14.868345 [INF] JetStream cluster new consumer leader for '$G > events > telegram-alerts-tickets-created'
nats-1 nats [7] 2023/09/03 13:45:14.868402 [INF] JetStream cluster new consumer leader for '$G > events > alerts-reader-for-ticket-closed'
nats-1 nats [7] 2023/09/03 13:45:14.868452 [INF] JetStream cluster new consumer leader for '$G > events > telegram-alerts-tickets-created'
nats-1 nats [7] 2023/09/03 13:45:14.868508 [INF] JetStream cluster new consumer leader for '$G > events > otrs-connector-events'
nats-1 nats [7] 2023/09/03 13:45:14.868566 [INF] JetStream cluster new consumer leader for '$G > events > webhooks-creator'
nats-1 nats [7] 2023/09/03 13:45:14.868644 [INF] JetStream cluster new consumer leader for '$G > ingestion-raw > ingestion-raw-connector'
nats-1 nats [7] 2023/09/03 13:45:14.868782 [INF] JetStream cluster new consumer leader for '$G > ingestion-raw > ingestion-raw-connector'
nats-1 nats [7] 2023/09/03 13:45:14.868933 [INF] JetStream cluster new consumer leader for '$G > alerts-telegram > telegram-sender'
nats-1 nats [7] 2023/09/03 13:45:14.869002 [INF] JetStream cluster new consumer leader for '$G > alerts-telegram > telegram-sender'
nats-1 nats [7] 2023/09/03 13:45:14.869152 [INF] JetStream cluster new consumer leader for '$G > alerts-digest > digests-sender'
nats-1 nats [7] 2023/09/03 13:45:14.869217 [INF] JetStream cluster new consumer leader for '$G > alerts-digest > digests-sender'
nats-1 nats [7] 2023/09/03 13:48:10.078187 [INF] JetStream cluster new metadata leader: nats-0/nats
nats-1 nats [7] 2023/09/03 13:48:11.748344 [INF] JetStream cluster new consumer leader for '$G > alerts-telegram > telegram-sender'
nats-1 nats [7] 2023/09/03 13:48:11.749563 [INF] JetStream cluster new consumer leader for '$G > alerts-telegram > telegram-sender'
nats-1 nats [7] 2023/09/03 13:48:11.750148 [INF] JetStream cluster new consumer leader for '$G > alerts-digest > digests-sender'
nats-1 nats [7] 2023/09/03 13:48:11.750851 [INF] JetStream cluster new consumer leader for '$G > alerts-digest > digests-sender'
nats-1 nats [7] 2023/09/03 13:48:11.815645 [INF] JetStream cluster new consumer leader for '$G > ingestion-raw > ingestion-raw-connector'
nats-1 nats [7] 2023/09/03 13:48:11.870749 [INF] JetStream cluster new consumer leader for '$G > ingestion-raw > ingestion-raw-connector'
nats-1 nats [7] 2023/09/03 13:48:12.216503 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dnsfuzz-sub-ingestion'
nats-1 nats [7] 2023/09/03 13:48:12.225348 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dns-fuzz-notification'
nats-1 nats [7] 2023/09/03 13:48:12.236443 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dns-fuzz-notification'
nats-1 nats [7] 2023/09/03 13:48:12.236518 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dnsfuzz-sub-ingestion'
nats-1 nats [7] 2023/09/03 13:48:12.242682 [INF] JetStream cluster new consumer leader for '$G > events > otrs-connector-events'
nats-1 nats [7] 2023/09/03 13:48:12.242779 [INF] JetStream cluster new consumer leader for '$G > events > webhooks-creator'
nats-1 nats [7] 2023/09/03 13:48:12.242846 [INF] JetStream cluster new consumer leader for '$G > events > alerts-reader-for-ticket-closed'
nats-1 nats [7] 2023/09/03 13:48:12.242902 [INF] JetStream cluster new consumer leader for '$G > events > telegram-alerts-tickets-created'
nats-1 nats [7] 2023/09/03 13:48:12.242958 [INF] JetStream cluster new consumer leader for '$G > events > alerts-reader-for-ticket-closed'
nats-1 nats [7] 2023/09/03 13:48:12.243021 [INF] JetStream cluster new consumer leader for '$G > events > telegram-alerts-tickets-created'
nats-1 nats [7] 2023/09/03 13:48:12.243059 [INF] JetStream cluster new consumer leader for '$G > events > otrs-connector-events'
nats-1 nats [7] 2023/09/03 13:48:12.243116 [INF] JetStream cluster new consumer leader for '$G > events > webhooks-creator'
nats-1 nats [7] 2023/09/03 13:48:13.661320 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dnsfuzz-sub-ingestion'
nats-1 nats [7] 2023/09/03 13:48:13.662079 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dns-fuzz-notification'
nats-1 nats [7] 2023/09/03 13:48:13.662350 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dns-fuzz-notification'
nats-1 nats [7] 2023/09/03 13:48:13.662563 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dnsfuzz-sub-ingestion'
nats-1 nats [7] 2023/09/03 13:48:13.663249 [INF] JetStream cluster new consumer leader for '$G > events > otrs-connector-events'
nats-1 nats [7] 2023/09/03 13:48:13.663849 [INF] JetStream cluster new consumer leader for '$G > events > webhooks-creator'
nats-1 nats [7] 2023/09/03 13:48:13.664109 [INF] JetStream cluster new consumer leader for '$G > events > alerts-reader-for-ticket-closed'
nats-1 nats [7] 2023/09/03 13:48:13.664307 [INF] JetStream cluster new consumer leader for '$G > events > telegram-alerts-tickets-created'
nats-1 nats [7] 2023/09/03 13:48:13.664513 [INF] JetStream cluster new consumer leader for '$G > events > otrs-connector-events'
nats-1 nats [7] 2023/09/03 13:48:13.664713 [INF] JetStream cluster new consumer leader for '$G > events > webhooks-creator'
nats-1 nats [7] 2023/09/03 13:48:13.665674 [INF] JetStream cluster new consumer leader for '$G > events > alerts-reader-for-ticket-closed'
nats-1 nats [7] 2023/09/03 13:48:13.666155 [INF] JetStream cluster new consumer leader for '$G > events > telegram-alerts-tickets-created'
nats-1 nats [7] 2023/09/03 13:48:13.672054 [WRN] RAFT [yrzKKRBu - _meta_] Falling behind in health check, commit 295 != applied 294
nats-1 nats [7] 2023/09/03 13:48:13.674184 [WRN] Healthcheck failed: "JetStream is not current with the meta leader"
nats-1 nats [7] 2023/09/03 13:48:13.678049 [INF] JetStream cluster new consumer leader for '$G > ingestion-raw > ingestion-raw-connector'
nats-1 nats [7] 2023/09/03 13:48:13.678242 [INF] JetStream cluster new consumer leader for '$G > ingestion-raw > ingestion-raw-connector'
nats-1 nats [7] 2023/09/03 13:48:13.678393 [INF] JetStream cluster new consumer leader for '$G > alerts-telegram > telegram-sender'
nats-1 nats [7] 2023/09/03 13:48:13.678452 [INF] JetStream cluster new consumer leader for '$G > alerts-telegram > telegram-sender'
nats-1 nats [7] 2023/09/03 13:48:13.678527 [INF] JetStream cluster new consumer leader for '$G > alerts-digest > digests-sender'
nats-1 nats [7] 2023/09/03 13:48:13.678577 [INF] JetStream cluster new consumer leader for '$G > alerts-digest > digests-sender'
nats-1 nats [7] 2023/09/03 13:48:14.793950 [INF] JetStream cluster new stream leader for '$G > ingestor-new-incidents'
nats-1 nats [7] 2023/09/03 13:50:05.684298 [WRN] 10.60.2.183:43288 - cid:76 - Readloop processing time: 2.848792952s
nats-1 nats [7] 2023/09/03 13:50:07.863299 [WRN] Internal subscription on "$JS.API.CONSUMER.MSG.NEXT.ingestion-raw.ingestion-raw-connector" took too long: 2.389255354s
nats-1 nats [7] 2023/09/03 13:50:07.866545 [WRN] 10.60.5.156:33246 - cid:78 - Readloop processing time: 3.395648936s
nats-1 nats [7] 2023/09/03 13:50:10.416347 [WRN] 10.60.5.156:33246 - cid:78 - Readloop processing time: 2.544198947s
nats-1 nats [7] 2023/09/03 13:50:12.281150 [WRN] Internal subscription on "$JS.API.CONSUMER.MSG.NEXT.ingestion-raw.ingestion-raw-connector" took too long: 6.591225067s
nats-1 nats [7] 2023/09/03 13:50:12.723268 [WRN] Internal subscription on "$JS.API.CONSUMER.MSG.NEXT.ingestion-raw.ingestion-raw-connector" took too long: 2.298149517s
nats-1 nats [7] 2023/09/03 13:50:12.724072 [WRN] 10.60.5.156:33246 - cid:78 - Readloop processing time: 2.299897448s
nats-1 nats [7] 2023/09/03 13:50:12.731959 [WRN] 10.60.2.183:43288 - cid:76 - Readloop processing time: 7.047022009s
nats-1 nats [7] 2023/09/03 13:50:14.757321 [WRN] 10.60.2.183:43288 - cid:76 - Readloop processing time: 2.004911999s
nats-1 nats [7] 2023/09/03 13:51:16.412840 [INF] JetStream cluster new consumer leader for '$G > alerts-telegram > telegram-sender'
nats-1 nats [7] 2023/09/03 13:51:16.421652 [INF] JetStream cluster new consumer leader for '$G > alerts-telegram > telegram-sender'
nats-1 nats [7] 2023/09/03 13:51:16.423369 [INF] JetStream cluster new consumer leader for '$G > alerts-digest > digests-sender'
nats-1 nats [7] 2023/09/03 13:51:16.423694 [INF] JetStream cluster new consumer leader for '$G > alerts-digest > digests-sender'
nats-1 nats [7] 2023/09/03 13:51:16.429093 [INF] JetStream cluster new consumer leader for '$G > ingestion-raw > ingestion-raw-connector'
nats-1 nats [7] 2023/09/03 13:51:16.429635 [INF] JetStream cluster new consumer leader for '$G > ingestion-raw > ingestion-raw-connector'
nats-1 nats [7] 2023/09/03 13:51:16.430092 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dnsfuzz-sub-ingestion'
nats-1 nats [7] 2023/09/03 13:51:16.431036 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dns-fuzz-notification'
nats-1 nats [7] 2023/09/03 13:51:16.431386 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dns-fuzz-notification'
nats-1 nats [7] 2023/09/03 13:51:16.431687 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dnsfuzz-sub-ingestion'
nats-1 nats [7] 2023/09/03 13:51:16.432114 [INF] JetStream cluster new consumer leader for '$G > events > otrs-connector-events'
nats-1 nats [7] 2023/09/03 13:51:16.432329 [INF] JetStream cluster new consumer leader for '$G > events > webhooks-creator'
nats-1 nats [7] 2023/09/03 13:51:16.432500 [INF] JetStream cluster new consumer leader for '$G > events > alerts-reader-for-ticket-closed'
nats-1 nats [7] 2023/09/03 13:51:16.432667 [INF] JetStream cluster new consumer leader for '$G > events > telegram-alerts-tickets-created'
nats-1 nats [7] 2023/09/03 13:51:16.432825 [INF] JetStream cluster new consumer leader for '$G > events > telegram-alerts-tickets-created'
nats-1 nats [7] 2023/09/03 13:51:16.433026 [INF] JetStream cluster new consumer leader for '$G > events > otrs-connector-events'
nats-1 nats [7] 2023/09/03 13:51:16.433247 [INF] JetStream cluster new consumer leader for '$G > events > webhooks-creator'
nats-1 nats [7] 2023/09/03 13:51:16.433426 [INF] JetStream cluster new consumer leader for '$G > events > alerts-reader-for-ticket-closed'
nats-1 nats [7] 2023/09/03 13:54:07.607685 [WRN] Internal subscription on "$JS.API.CONSUMER.MSG.NEXT.ingestion-raw.ingestion-raw-connector" took too long: 2.296400356s
nats-1 nats [7] 2023/09/03 13:54:07.610321 [WRN] 10.60.5.156:33246 - cid:78 - Readloop processing time: 3.776439638s
```
The problem seems related to this stream
```
Information for Stream ingestion-incidents created 2023-04-24 09:13:49
Description: Incident coming from ingestion
Subjects: ingestion.incidents.*.*
Replicas: 3
Storage: File
Options:
Retention: Limits
Acknowledgements: true
Discard Policy: Old
Duplicate Window: 2m0s
Allows Msg Delete: true
Allows Purge: true
Allows Rollups: false
Limits:
Maximum Messages: unlimited
Maximum Per Subject: unlimited
Maximum Bytes: unlimited
Maximum Age: 1d0h0m0s
Maximum Message Size: unlimited
Maximum Consumers: unlimited
Cluster Information:
Name: nats
Leader: nats-0
Replica: nats-1, current, seen 1.29s ago
Replica: nats-2, current, seen 0.36s ago
State:
Messages: 9,952
Bytes: 776 MiB
FirstSeq: 1,629,389 @ 2023-09-02T14:00:24 UTC
LastSeq: 1,639,340 @ 2023-09-03T13:51:28 UTC
Active Consumers: 1
Number of Subjects: 42
```
And this consumer
```
Information for Consumer ingestion-incidents > ingestion-ingestor created 2023-04-24T13:57:43Z
Configuration:
Name: ingestion-ingestor
Pull Mode: true
Deliver Policy: All
Ack Policy: Explicit
Ack Wait: 30s
Replay Policy: Instant
Maximum Deliveries: 10
Max Ack Pending: 1,000
Max Waiting Pulls: 512
Cluster Information:
Name: nats
Leader: nats-0
Replica: nats-1, current, seen 0.54s ago
Replica: nats-2, current, seen 0.55s ago
State:
Last Delivered Message: Consumer sequence: 1,639,697 Stream sequence: 1,639,340 Last delivery: 8m21s ago
Acknowledgment floor: Consumer sequence: 1,639,697 Stream sequence: 1,639,340 Last Ack: 8m21s ago
Outstanding Acks: 0 out of maximum 1,000
Redelivered Messages: 0
Unprocessed Messages: 0
Waiting Pulls: 1 of maximum 512
```
Moreover, the consumer client is currently often getting the `no heartbeat received` error, not sure if this is a consequence of the leader change or the cause.
### Given the capability you are leveraging, describe your expectation?
I expect the overall system to be more robust, leader election should not happen very freqeuntly while the system is running.
### Given the expectation, what is the defect you are observing?
Leader election is triggered too frequently, causing the system to be not very reliable.
|
1.0
|
Frequent leader election - ### What version were you using?
Nats v2.9.21 deployed with k8s helm chart 1.0.2
### What environment was the server running in?
3 nodes cluster on kubenetes on gke
### Is this defect reproducible?
Currently, I'm experiencing this issue in a nats cluster, but I was not able to reproduce it in a controlled way.
The nats cluster is subject to very frequent leader election on multiple stream/consumers, this is currenly happening about every 10 minuts and seems to be releated to messages published on a specific stream.
Here is some logs of our cluster
```
nats-1 nats [7] 2023/09/03 13:33:13.255560 [WRN] 10.60.5.156:33246 - cid:78 - Readloop processing time: 2.163297052s
nats-1 nats [7] 2023/09/03 13:33:14.993498 [INF] JetStream cluster new metadata leader: nats-2/nats
nats-1 nats [7] 2023/09/03 13:33:15.036558 [INF] JetStream cluster new consumer leader for '$G > ingestion-incidents > ingestion-ingestor'
nats-1 nats [7] 2023/09/03 13:33:16.239724 [INF] JetStream cluster new consumer leader for '$G > ingestion-raw > ingestion-raw-connector'
nats-1 nats [7] 2023/09/03 13:33:16.286293 [INF] JetStream cluster new consumer leader for '$G > ingestion-raw > ingestion-raw-connector'
nats-1 nats [7] 2023/09/03 13:33:16.286508 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dnsfuzz-sub-ingestion'
nats-1 nats [7] 2023/09/03 13:33:16.286966 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dns-fuzz-notification'
nats-1 nats [7] 2023/09/03 13:33:16.287024 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dns-fuzz-notification'
nats-1 nats [7] 2023/09/03 13:33:16.287062 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dnsfuzz-sub-ingestion'
nats-1 nats [7] 2023/09/03 13:33:16.287353 [INF] JetStream cluster new consumer leader for '$G > events > otrs-connector-events'
nats-1 nats [7] 2023/09/03 13:33:16.287422 [INF] JetStream cluster new consumer leader for '$G > events > webhooks-creator'
nats-1 nats [7] 2023/09/03 13:33:16.287470 [INF] JetStream cluster new consumer leader for '$G > events > alerts-reader-for-ticket-closed'
nats-1 nats [7] 2023/09/03 13:33:16.287516 [INF] JetStream cluster new consumer leader for '$G > events > telegram-alerts-tickets-created'
nats-1 nats [7] 2023/09/03 13:33:16.287561 [INF] JetStream cluster new consumer leader for '$G > events > webhooks-creator'
nats-1 nats [7] 2023/09/03 13:33:16.287603 [INF] JetStream cluster new consumer leader for '$G > events > alerts-reader-for-ticket-closed'
nats-1 nats [7] 2023/09/03 13:33:16.287649 [INF] JetStream cluster new consumer leader for '$G > events > telegram-alerts-tickets-created'
nats-1 nats [7] 2023/09/03 13:33:16.287691 [INF] JetStream cluster new consumer leader for '$G > events > otrs-connector-events'
nats-1 nats [7] 2023/09/03 13:33:16.287739 [INF] JetStream cluster new consumer leader for '$G > alerts-telegram > telegram-sender'
nats-1 nats [7] 2023/09/03 13:33:16.287795 [INF] JetStream cluster new consumer leader for '$G > alerts-telegram > telegram-sender'
nats-1 nats [7] 2023/09/03 13:33:16.287950 [INF] JetStream cluster new consumer leader for '$G > alerts-digest > digests-sender'
nats-1 nats [7] 2023/09/03 13:33:16.288006 [INF] JetStream cluster new consumer leader for '$G > alerts-digest > digests-sender'
nats-1 nats [7] 2023/09/03 13:36:13.739598 [INF] JetStream cluster new metadata leader: nats-0/nats
nats-1 nats [7] 2023/09/03 13:36:13.818959 [INF] JetStream cluster new consumer leader for '$G > alerts-digest > digests-sender'
nats-1 nats [7] 2023/09/03 13:36:13.819163 [INF] JetStream cluster new consumer leader for '$G > alerts-digest > digests-sender'
nats-1 nats [7] 2023/09/03 13:36:13.828987 [INF] JetStream cluster new consumer leader for '$G > ingestion-raw > ingestion-raw-connector'
nats-1 nats [7] 2023/09/03 13:36:13.829572 [INF] JetStream cluster new consumer leader for '$G > ingestion-raw > ingestion-raw-connector'
nats-1 nats [7] 2023/09/03 13:36:13.829938 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dnsfuzz-sub-ingestion'
nats-1 nats [7] 2023/09/03 13:36:13.830436 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dns-fuzz-notification'
nats-1 nats [7] 2023/09/03 13:36:13.830705 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dns-fuzz-notification'
nats-1 nats [7] 2023/09/03 13:36:13.830966 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dnsfuzz-sub-ingestion'
nats-1 nats [7] 2023/09/03 13:36:13.831390 [INF] JetStream cluster new consumer leader for '$G > events > alerts-reader-for-ticket-closed'
nats-1 nats [7] 2023/09/03 13:36:13.831631 [INF] JetStream cluster new consumer leader for '$G > events > telegram-alerts-tickets-created'
nats-1 nats [7] 2023/09/03 13:36:13.831840 [INF] JetStream cluster new consumer leader for '$G > events > otrs-connector-events'
nats-1 nats [7] 2023/09/03 13:36:13.832012 [INF] JetStream cluster new consumer leader for '$G > events > webhooks-creator'
nats-1 nats [7] 2023/09/03 13:36:13.832211 [INF] JetStream cluster new consumer leader for '$G > events > otrs-connector-events'
nats-1 nats [7] 2023/09/03 13:36:13.834015 [INF] JetStream cluster new consumer leader for '$G > events > webhooks-creator'
nats-1 nats [7] 2023/09/03 13:36:13.834296 [INF] JetStream cluster new consumer leader for '$G > events > alerts-reader-for-ticket-closed'
nats-1 nats [7] 2023/09/03 13:36:13.834480 [INF] JetStream cluster new consumer leader for '$G > events > telegram-alerts-tickets-created'
nats-1 nats [7] 2023/09/03 13:36:13.834863 [INF] JetStream cluster new consumer leader for '$G > alerts-telegram > telegram-sender'
nats-1 nats [7] 2023/09/03 13:36:13.837032 [INF] JetStream cluster new consumer leader for '$G > alerts-telegram > telegram-sender'
nats-1 nats [7] 2023/09/03 13:39:10.193660 [WRN] 10.60.5.156:33246 - cid:78 - Readloop processing time: 2.04271573s
nats-1 nats [7] 2023/09/03 13:39:14.964867 [INF] JetStream cluster new stream leader for '$G > ingestion-incidents'
nats-1 nats [7] 2023/09/03 13:39:14.972078 [INF] JetStream cluster new stream leader for '$G > ingestor-new-incidents'
nats-1 nats [7] 2023/09/03 13:39:14.975284 [INF] JetStream cluster new stream leader for '$G > simple-ingestion'
nats-1 nats [7] 2023/09/03 13:39:18.828326 [INF] JetStream cluster new consumer leader for '$G > ingestion-raw > ingestion-raw-connector'
nats-1 nats [7] 2023/09/03 13:39:18.828669 [INF] JetStream cluster new consumer leader for '$G > ingestion-raw > ingestion-raw-connector'
nats-1 nats [7] 2023/09/03 13:39:18.829380 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dns-fuzz-notification'
nats-1 nats [7] 2023/09/03 13:39:18.829631 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dnsfuzz-sub-ingestion'
nats-1 nats [7] 2023/09/03 13:39:18.830150 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dns-fuzz-notification'
nats-1 nats [7] 2023/09/03 13:39:18.830350 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dnsfuzz-sub-ingestion'
nats-1 nats [7] 2023/09/03 13:39:18.831230 [INF] JetStream cluster new consumer leader for '$G > events > webhooks-creator'
nats-1 nats [7] 2023/09/03 13:39:18.831663 [INF] JetStream cluster new consumer leader for '$G > events > alerts-reader-for-ticket-closed'
nats-1 nats [7] 2023/09/03 13:39:18.831935 [INF] JetStream cluster new consumer leader for '$G > events > telegram-alerts-tickets-created'
nats-1 nats [7] 2023/09/03 13:39:18.832217 [INF] JetStream cluster new consumer leader for '$G > events > otrs-connector-events'
nats-1 nats [7] 2023/09/03 13:39:18.832480 [INF] JetStream cluster new consumer leader for '$G > events > telegram-alerts-tickets-created'
nats-1 nats [7] 2023/09/03 13:39:18.832675 [INF] JetStream cluster new consumer leader for '$G > events > otrs-connector-events'
nats-1 nats [7] 2023/09/03 13:39:18.832899 [INF] JetStream cluster new consumer leader for '$G > events > webhooks-creator'
nats-1 nats [7] 2023/09/03 13:39:18.833964 [INF] JetStream cluster new consumer leader for '$G > events > alerts-reader-for-ticket-closed'
nats-1 nats [7] 2023/09/03 13:39:18.834085 [INF] JetStream cluster new consumer leader for '$G > alerts-telegram > telegram-sender'
nats-1 nats [7] 2023/09/03 13:39:18.834145 [INF] JetStream cluster new consumer leader for '$G > alerts-telegram > telegram-sender'
nats-1 nats [7] 2023/09/03 13:39:18.834196 [INF] JetStream cluster new consumer leader for '$G > alerts-digest > digests-sender'
nats-1 nats [7] 2023/09/03 13:39:18.834244 [INF] JetStream cluster new consumer leader for '$G > alerts-digest > digests-sender'
nats-1 nats [7] 2023/09/03 13:45:06.079420 [WRN] 10.60.0.164:43324 - cid:79 - Readloop processing time: 4.45332426s
nats-1 nats [7] 2023/09/03 13:45:09.536323 [WRN] 10.60.5.129:6222 - rid:62 - Readloop processing time: 6.020101401s
nats-1 nats [7] 2023/09/03 13:45:09.595332 [WRN] Healthcheck failed: "JetStream has not established contact with a meta leader"
nats-1 nats [7] 2023/09/03 13:45:14.495095 [INF] JetStream cluster new consumer leader for '$G > ingestion-incidents > ingestion-ingestor'
nats-1 nats [7] 2023/09/03 13:45:14.595723 [INF] Self is new JetStream cluster metadata leader
nats-1 nats [7] 2023/09/03 13:45:14.866566 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dns-fuzz-notification'
nats-1 nats [7] 2023/09/03 13:45:14.866691 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dnsfuzz-sub-ingestion'
nats-1 nats [7] 2023/09/03 13:45:14.867379 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dns-fuzz-notification'
nats-1 nats [7] 2023/09/03 13:45:14.867454 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dnsfuzz-sub-ingestion'
nats-1 nats [7] 2023/09/03 13:45:14.868072 [INF] JetStream cluster new consumer leader for '$G > events > otrs-connector-events'
nats-1 nats [7] 2023/09/03 13:45:14.868218 [INF] JetStream cluster new consumer leader for '$G > events > webhooks-creator'
nats-1 nats [7] 2023/09/03 13:45:14.868286 [INF] JetStream cluster new consumer leader for '$G > events > alerts-reader-for-ticket-closed'
nats-1 nats [7] 2023/09/03 13:45:14.868345 [INF] JetStream cluster new consumer leader for '$G > events > telegram-alerts-tickets-created'
nats-1 nats [7] 2023/09/03 13:45:14.868402 [INF] JetStream cluster new consumer leader for '$G > events > alerts-reader-for-ticket-closed'
nats-1 nats [7] 2023/09/03 13:45:14.868452 [INF] JetStream cluster new consumer leader for '$G > events > telegram-alerts-tickets-created'
nats-1 nats [7] 2023/09/03 13:45:14.868508 [INF] JetStream cluster new consumer leader for '$G > events > otrs-connector-events'
nats-1 nats [7] 2023/09/03 13:45:14.868566 [INF] JetStream cluster new consumer leader for '$G > events > webhooks-creator'
nats-1 nats [7] 2023/09/03 13:45:14.868644 [INF] JetStream cluster new consumer leader for '$G > ingestion-raw > ingestion-raw-connector'
nats-1 nats [7] 2023/09/03 13:45:14.868782 [INF] JetStream cluster new consumer leader for '$G > ingestion-raw > ingestion-raw-connector'
nats-1 nats [7] 2023/09/03 13:45:14.868933 [INF] JetStream cluster new consumer leader for '$G > alerts-telegram > telegram-sender'
nats-1 nats [7] 2023/09/03 13:45:14.869002 [INF] JetStream cluster new consumer leader for '$G > alerts-telegram > telegram-sender'
nats-1 nats [7] 2023/09/03 13:45:14.869152 [INF] JetStream cluster new consumer leader for '$G > alerts-digest > digests-sender'
nats-1 nats [7] 2023/09/03 13:45:14.869217 [INF] JetStream cluster new consumer leader for '$G > alerts-digest > digests-sender'
nats-1 nats [7] 2023/09/03 13:48:10.078187 [INF] JetStream cluster new metadata leader: nats-0/nats
nats-1 nats [7] 2023/09/03 13:48:11.748344 [INF] JetStream cluster new consumer leader for '$G > alerts-telegram > telegram-sender'
nats-1 nats [7] 2023/09/03 13:48:11.749563 [INF] JetStream cluster new consumer leader for '$G > alerts-telegram > telegram-sender'
nats-1 nats [7] 2023/09/03 13:48:11.750148 [INF] JetStream cluster new consumer leader for '$G > alerts-digest > digests-sender'
nats-1 nats [7] 2023/09/03 13:48:11.750851 [INF] JetStream cluster new consumer leader for '$G > alerts-digest > digests-sender'
nats-1 nats [7] 2023/09/03 13:48:11.815645 [INF] JetStream cluster new consumer leader for '$G > ingestion-raw > ingestion-raw-connector'
nats-1 nats [7] 2023/09/03 13:48:11.870749 [INF] JetStream cluster new consumer leader for '$G > ingestion-raw > ingestion-raw-connector'
nats-1 nats [7] 2023/09/03 13:48:12.216503 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dnsfuzz-sub-ingestion'
nats-1 nats [7] 2023/09/03 13:48:12.225348 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dns-fuzz-notification'
nats-1 nats [7] 2023/09/03 13:48:12.236443 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dns-fuzz-notification'
nats-1 nats [7] 2023/09/03 13:48:12.236518 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dnsfuzz-sub-ingestion'
nats-1 nats [7] 2023/09/03 13:48:12.242682 [INF] JetStream cluster new consumer leader for '$G > events > otrs-connector-events'
nats-1 nats [7] 2023/09/03 13:48:12.242779 [INF] JetStream cluster new consumer leader for '$G > events > webhooks-creator'
nats-1 nats [7] 2023/09/03 13:48:12.242846 [INF] JetStream cluster new consumer leader for '$G > events > alerts-reader-for-ticket-closed'
nats-1 nats [7] 2023/09/03 13:48:12.242902 [INF] JetStream cluster new consumer leader for '$G > events > telegram-alerts-tickets-created'
nats-1 nats [7] 2023/09/03 13:48:12.242958 [INF] JetStream cluster new consumer leader for '$G > events > alerts-reader-for-ticket-closed'
nats-1 nats [7] 2023/09/03 13:48:12.243021 [INF] JetStream cluster new consumer leader for '$G > events > telegram-alerts-tickets-created'
nats-1 nats [7] 2023/09/03 13:48:12.243059 [INF] JetStream cluster new consumer leader for '$G > events > otrs-connector-events'
nats-1 nats [7] 2023/09/03 13:48:12.243116 [INF] JetStream cluster new consumer leader for '$G > events > webhooks-creator'
nats-1 nats [7] 2023/09/03 13:48:13.661320 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dnsfuzz-sub-ingestion'
nats-1 nats [7] 2023/09/03 13:48:13.662079 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dns-fuzz-notification'
nats-1 nats [7] 2023/09/03 13:48:13.662350 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dns-fuzz-notification'
nats-1 nats [7] 2023/09/03 13:48:13.662563 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dnsfuzz-sub-ingestion'
nats-1 nats [7] 2023/09/03 13:48:13.663249 [INF] JetStream cluster new consumer leader for '$G > events > otrs-connector-events'
nats-1 nats [7] 2023/09/03 13:48:13.663849 [INF] JetStream cluster new consumer leader for '$G > events > webhooks-creator'
nats-1 nats [7] 2023/09/03 13:48:13.664109 [INF] JetStream cluster new consumer leader for '$G > events > alerts-reader-for-ticket-closed'
nats-1 nats [7] 2023/09/03 13:48:13.664307 [INF] JetStream cluster new consumer leader for '$G > events > telegram-alerts-tickets-created'
nats-1 nats [7] 2023/09/03 13:48:13.664513 [INF] JetStream cluster new consumer leader for '$G > events > otrs-connector-events'
nats-1 nats [7] 2023/09/03 13:48:13.664713 [INF] JetStream cluster new consumer leader for '$G > events > webhooks-creator'
nats-1 nats [7] 2023/09/03 13:48:13.665674 [INF] JetStream cluster new consumer leader for '$G > events > alerts-reader-for-ticket-closed'
nats-1 nats [7] 2023/09/03 13:48:13.666155 [INF] JetStream cluster new consumer leader for '$G > events > telegram-alerts-tickets-created'
nats-1 nats [7] 2023/09/03 13:48:13.672054 [WRN] RAFT [yrzKKRBu - _meta_] Falling behind in health check, commit 295 != applied 294
nats-1 nats [7] 2023/09/03 13:48:13.674184 [WRN] Healthcheck failed: "JetStream is not current with the meta leader"
nats-1 nats [7] 2023/09/03 13:48:13.678049 [INF] JetStream cluster new consumer leader for '$G > ingestion-raw > ingestion-raw-connector'
nats-1 nats [7] 2023/09/03 13:48:13.678242 [INF] JetStream cluster new consumer leader for '$G > ingestion-raw > ingestion-raw-connector'
nats-1 nats [7] 2023/09/03 13:48:13.678393 [INF] JetStream cluster new consumer leader for '$G > alerts-telegram > telegram-sender'
nats-1 nats [7] 2023/09/03 13:48:13.678452 [INF] JetStream cluster new consumer leader for '$G > alerts-telegram > telegram-sender'
nats-1 nats [7] 2023/09/03 13:48:13.678527 [INF] JetStream cluster new consumer leader for '$G > alerts-digest > digests-sender'
nats-1 nats [7] 2023/09/03 13:48:13.678577 [INF] JetStream cluster new consumer leader for '$G > alerts-digest > digests-sender'
nats-1 nats [7] 2023/09/03 13:48:14.793950 [INF] JetStream cluster new stream leader for '$G > ingestor-new-incidents'
nats-1 nats [7] 2023/09/03 13:50:05.684298 [WRN] 10.60.2.183:43288 - cid:76 - Readloop processing time: 2.848792952s
nats-1 nats [7] 2023/09/03 13:50:07.863299 [WRN] Internal subscription on "$JS.API.CONSUMER.MSG.NEXT.ingestion-raw.ingestion-raw-connector" took too long: 2.389255354s
nats-1 nats [7] 2023/09/03 13:50:07.866545 [WRN] 10.60.5.156:33246 - cid:78 - Readloop processing time: 3.395648936s
nats-1 nats [7] 2023/09/03 13:50:10.416347 [WRN] 10.60.5.156:33246 - cid:78 - Readloop processing time: 2.544198947s
nats-1 nats [7] 2023/09/03 13:50:12.281150 [WRN] Internal subscription on "$JS.API.CONSUMER.MSG.NEXT.ingestion-raw.ingestion-raw-connector" took too long: 6.591225067s
nats-1 nats [7] 2023/09/03 13:50:12.723268 [WRN] Internal subscription on "$JS.API.CONSUMER.MSG.NEXT.ingestion-raw.ingestion-raw-connector" took too long: 2.298149517s
nats-1 nats [7] 2023/09/03 13:50:12.724072 [WRN] 10.60.5.156:33246 - cid:78 - Readloop processing time: 2.299897448s
nats-1 nats [7] 2023/09/03 13:50:12.731959 [WRN] 10.60.2.183:43288 - cid:76 - Readloop processing time: 7.047022009s
nats-1 nats [7] 2023/09/03 13:50:14.757321 [WRN] 10.60.2.183:43288 - cid:76 - Readloop processing time: 2.004911999s
nats-1 nats [7] 2023/09/03 13:51:16.412840 [INF] JetStream cluster new consumer leader for '$G > alerts-telegram > telegram-sender'
nats-1 nats [7] 2023/09/03 13:51:16.421652 [INF] JetStream cluster new consumer leader for '$G > alerts-telegram > telegram-sender'
nats-1 nats [7] 2023/09/03 13:51:16.423369 [INF] JetStream cluster new consumer leader for '$G > alerts-digest > digests-sender'
nats-1 nats [7] 2023/09/03 13:51:16.423694 [INF] JetStream cluster new consumer leader for '$G > alerts-digest > digests-sender'
nats-1 nats [7] 2023/09/03 13:51:16.429093 [INF] JetStream cluster new consumer leader for '$G > ingestion-raw > ingestion-raw-connector'
nats-1 nats [7] 2023/09/03 13:51:16.429635 [INF] JetStream cluster new consumer leader for '$G > ingestion-raw > ingestion-raw-connector'
nats-1 nats [7] 2023/09/03 13:51:16.430092 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dnsfuzz-sub-ingestion'
nats-1 nats [7] 2023/09/03 13:51:16.431036 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dns-fuzz-notification'
nats-1 nats [7] 2023/09/03 13:51:16.431386 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dns-fuzz-notification'
nats-1 nats [7] 2023/09/03 13:51:16.431687 [INF] JetStream cluster new consumer leader for '$G > domains-fuzz > dnsfuzz-sub-ingestion'
nats-1 nats [7] 2023/09/03 13:51:16.432114 [INF] JetStream cluster new consumer leader for '$G > events > otrs-connector-events'
nats-1 nats [7] 2023/09/03 13:51:16.432329 [INF] JetStream cluster new consumer leader for '$G > events > webhooks-creator'
nats-1 nats [7] 2023/09/03 13:51:16.432500 [INF] JetStream cluster new consumer leader for '$G > events > alerts-reader-for-ticket-closed'
nats-1 nats [7] 2023/09/03 13:51:16.432667 [INF] JetStream cluster new consumer leader for '$G > events > telegram-alerts-tickets-created'
nats-1 nats [7] 2023/09/03 13:51:16.432825 [INF] JetStream cluster new consumer leader for '$G > events > telegram-alerts-tickets-created'
nats-1 nats [7] 2023/09/03 13:51:16.433026 [INF] JetStream cluster new consumer leader for '$G > events > otrs-connector-events'
nats-1 nats [7] 2023/09/03 13:51:16.433247 [INF] JetStream cluster new consumer leader for '$G > events > webhooks-creator'
nats-1 nats [7] 2023/09/03 13:51:16.433426 [INF] JetStream cluster new consumer leader for '$G > events > alerts-reader-for-ticket-closed'
nats-1 nats [7] 2023/09/03 13:54:07.607685 [WRN] Internal subscription on "$JS.API.CONSUMER.MSG.NEXT.ingestion-raw.ingestion-raw-connector" took too long: 2.296400356s
nats-1 nats [7] 2023/09/03 13:54:07.610321 [WRN] 10.60.5.156:33246 - cid:78 - Readloop processing time: 3.776439638s
```
The problem seems related to this stream
```
Information for Stream ingestion-incidents created 2023-04-24 09:13:49
Description: Incident coming from ingestion
Subjects: ingestion.incidents.*.*
Replicas: 3
Storage: File
Options:
Retention: Limits
Acknowledgements: true
Discard Policy: Old
Duplicate Window: 2m0s
Allows Msg Delete: true
Allows Purge: true
Allows Rollups: false
Limits:
Maximum Messages: unlimited
Maximum Per Subject: unlimited
Maximum Bytes: unlimited
Maximum Age: 1d0h0m0s
Maximum Message Size: unlimited
Maximum Consumers: unlimited
Cluster Information:
Name: nats
Leader: nats-0
Replica: nats-1, current, seen 1.29s ago
Replica: nats-2, current, seen 0.36s ago
State:
Messages: 9,952
Bytes: 776 MiB
FirstSeq: 1,629,389 @ 2023-09-02T14:00:24 UTC
LastSeq: 1,639,340 @ 2023-09-03T13:51:28 UTC
Active Consumers: 1
Number of Subjects: 42
```
And this consumer
```
Information for Consumer ingestion-incidents > ingestion-ingestor created 2023-04-24T13:57:43Z
Configuration:
Name: ingestion-ingestor
Pull Mode: true
Deliver Policy: All
Ack Policy: Explicit
Ack Wait: 30s
Replay Policy: Instant
Maximum Deliveries: 10
Max Ack Pending: 1,000
Max Waiting Pulls: 512
Cluster Information:
Name: nats
Leader: nats-0
Replica: nats-1, current, seen 0.54s ago
Replica: nats-2, current, seen 0.55s ago
State:
Last Delivered Message: Consumer sequence: 1,639,697 Stream sequence: 1,639,340 Last delivery: 8m21s ago
Acknowledgment floor: Consumer sequence: 1,639,697 Stream sequence: 1,639,340 Last Ack: 8m21s ago
Outstanding Acks: 0 out of maximum 1,000
Redelivered Messages: 0
Unprocessed Messages: 0
Waiting Pulls: 1 of maximum 512
```
Moreover, the consumer client is currently often getting the `no heartbeat received` error, not sure if this is a consequence of the leader change or the cause.
### Given the capability you are leveraging, describe your expectation?
I expect the overall system to be more robust, leader election should not happen very freqeuntly while the system is running.
### Given the expectation, what is the defect you are observing?
Leader election is triggered too frequently, causing the system to be not very reliable.
|
defect
|
frequent leader election what version were you using nats deployed with helm chart what environment was the server running in nodes cluster on kubenetes on gke is this defect reproducible currently i m experiencing this issue in a nats cluster but i was not able to reproduce it in a controlled way the nats cluster is subject to very frequent leader election on multiple stream consumers this is currenly happening about every minuts and seems to be releated to messages published on a specific stream here is some logs of our cluster nats nats cid readloop processing time nats nats jetstream cluster new metadata leader nats nats nats nats jetstream cluster new consumer leader for g ingestion incidents ingestion ingestor nats nats jetstream cluster new consumer leader for g ingestion raw ingestion raw connector nats nats jetstream cluster new consumer leader for g ingestion raw ingestion raw connector nats nats jetstream cluster new consumer leader for g domains fuzz dnsfuzz sub ingestion nats nats jetstream cluster new consumer leader for g domains fuzz dns fuzz notification nats nats jetstream cluster new consumer leader for g domains fuzz dns fuzz notification nats nats jetstream cluster new consumer leader for g domains fuzz dnsfuzz sub ingestion nats nats jetstream cluster new consumer leader for g events otrs connector events nats nats jetstream cluster new consumer leader for g events webhooks creator nats nats jetstream cluster new consumer leader for g events alerts reader for ticket closed nats nats jetstream cluster new consumer leader for g events telegram alerts tickets created nats nats jetstream cluster new consumer leader for g events webhooks creator nats nats jetstream cluster new consumer leader for g events alerts reader for ticket closed nats nats jetstream cluster new consumer leader for g events telegram alerts tickets created nats nats jetstream cluster new consumer leader for g events otrs connector events nats nats jetstream cluster new consumer leader for g alerts telegram telegram sender nats nats jetstream cluster new consumer leader for g alerts telegram telegram sender nats nats jetstream cluster new consumer leader for g alerts digest digests sender nats nats jetstream cluster new consumer leader for g alerts digest digests sender nats nats jetstream cluster new metadata leader nats nats nats nats jetstream cluster new consumer leader for g alerts digest digests sender nats nats jetstream cluster new consumer leader for g alerts digest digests sender nats nats jetstream cluster new consumer leader for g ingestion raw ingestion raw connector nats nats jetstream cluster new consumer leader for g ingestion raw ingestion raw connector nats nats jetstream cluster new consumer leader for g domains fuzz dnsfuzz sub ingestion nats nats jetstream cluster new consumer leader for g domains fuzz dns fuzz notification nats nats jetstream cluster new consumer leader for g domains fuzz dns fuzz notification nats nats jetstream cluster new consumer leader for g domains fuzz dnsfuzz sub ingestion nats nats jetstream cluster new consumer leader for g events alerts reader for ticket closed nats nats jetstream cluster new consumer leader for g events telegram alerts tickets created nats nats jetstream cluster new consumer leader for g events otrs connector events nats nats jetstream cluster new consumer leader for g events webhooks creator nats nats jetstream cluster new consumer leader for g events otrs connector events nats nats jetstream cluster new consumer leader for g events webhooks creator nats nats jetstream cluster new consumer leader for g events alerts reader for ticket closed nats nats jetstream cluster new consumer leader for g events telegram alerts tickets created nats nats jetstream cluster new consumer leader for g alerts telegram telegram sender nats nats jetstream cluster new consumer leader for g alerts telegram telegram sender nats nats cid readloop processing time nats nats jetstream cluster new stream leader for g ingestion incidents nats nats jetstream cluster new stream leader for g ingestor new incidents nats nats jetstream cluster new stream leader for g simple ingestion nats nats jetstream cluster new consumer leader for g ingestion raw ingestion raw connector nats nats jetstream cluster new consumer leader for g ingestion raw ingestion raw connector nats nats jetstream cluster new consumer leader for g domains fuzz dns fuzz notification nats nats jetstream cluster new consumer leader for g domains fuzz dnsfuzz sub ingestion nats nats jetstream cluster new consumer leader for g domains fuzz dns fuzz notification nats nats jetstream cluster new consumer leader for g domains fuzz dnsfuzz sub ingestion nats nats jetstream cluster new consumer leader for g events webhooks creator nats nats jetstream cluster new consumer leader for g events alerts reader for ticket closed nats nats jetstream cluster new consumer leader for g events telegram alerts tickets created nats nats jetstream cluster new consumer leader for g events otrs connector events nats nats jetstream cluster new consumer leader for g events telegram alerts tickets created nats nats jetstream cluster new consumer leader for g events otrs connector events nats nats jetstream cluster new consumer leader for g events webhooks creator nats nats jetstream cluster new consumer leader for g events alerts reader for ticket closed nats nats jetstream cluster new consumer leader for g alerts telegram telegram sender nats nats jetstream cluster new consumer leader for g alerts telegram telegram sender nats nats jetstream cluster new consumer leader for g alerts digest digests sender nats nats jetstream cluster new consumer leader for g alerts digest digests sender nats nats cid readloop processing time nats nats rid readloop processing time nats nats healthcheck failed jetstream has not established contact with a meta leader nats nats jetstream cluster new consumer leader for g ingestion incidents ingestion ingestor nats nats self is new jetstream cluster metadata leader nats nats jetstream cluster new consumer leader for g domains fuzz dns fuzz notification nats nats jetstream cluster new consumer leader for g domains fuzz dnsfuzz sub ingestion nats nats jetstream cluster new consumer leader for g domains fuzz dns fuzz notification nats nats jetstream cluster new consumer leader for g domains fuzz dnsfuzz sub ingestion nats nats jetstream cluster new consumer leader for g events otrs connector events nats nats jetstream cluster new consumer leader for g events webhooks creator nats nats jetstream cluster new consumer leader for g events alerts reader for ticket closed nats nats jetstream cluster new consumer leader for g events telegram alerts tickets created nats nats jetstream cluster new consumer leader for g events alerts reader for ticket closed nats nats jetstream cluster new consumer leader for g events telegram alerts tickets created nats nats jetstream cluster new consumer leader for g events otrs connector events nats nats jetstream cluster new consumer leader for g events webhooks creator nats nats jetstream cluster new consumer leader for g ingestion raw ingestion raw connector nats nats jetstream cluster new consumer leader for g ingestion raw ingestion raw connector nats nats jetstream cluster new consumer leader for g alerts telegram telegram sender nats nats jetstream cluster new consumer leader for g alerts telegram telegram sender nats nats jetstream cluster new consumer leader for g alerts digest digests sender nats nats jetstream cluster new consumer leader for g alerts digest digests sender nats nats jetstream cluster new metadata leader nats nats nats nats jetstream cluster new consumer leader for g alerts telegram telegram sender nats nats jetstream cluster new consumer leader for g alerts telegram telegram sender nats nats jetstream cluster new consumer leader for g alerts digest digests sender nats nats jetstream cluster new consumer leader for g alerts digest digests sender nats nats jetstream cluster new consumer leader for g ingestion raw ingestion raw connector nats nats jetstream cluster new consumer leader for g ingestion raw ingestion raw connector nats nats jetstream cluster new consumer leader for g domains fuzz dnsfuzz sub ingestion nats nats jetstream cluster new consumer leader for g domains fuzz dns fuzz notification nats nats jetstream cluster new consumer leader for g domains fuzz dns fuzz notification nats nats jetstream cluster new consumer leader for g domains fuzz dnsfuzz sub ingestion nats nats jetstream cluster new consumer leader for g events otrs connector events nats nats jetstream cluster new consumer leader for g events webhooks creator nats nats jetstream cluster new consumer leader for g events alerts reader for ticket closed nats nats jetstream cluster new consumer leader for g events telegram alerts tickets created nats nats jetstream cluster new consumer leader for g events alerts reader for ticket closed nats nats jetstream cluster new consumer leader for g events telegram alerts tickets created nats nats jetstream cluster new consumer leader for g events otrs connector events nats nats jetstream cluster new consumer leader for g events webhooks creator nats nats jetstream cluster new consumer leader for g domains fuzz dnsfuzz sub ingestion nats nats jetstream cluster new consumer leader for g domains fuzz dns fuzz notification nats nats jetstream cluster new consumer leader for g domains fuzz dns fuzz notification nats nats jetstream cluster new consumer leader for g domains fuzz dnsfuzz sub ingestion nats nats jetstream cluster new consumer leader for g events otrs connector events nats nats jetstream cluster new consumer leader for g events webhooks creator nats nats jetstream cluster new consumer leader for g events alerts reader for ticket closed nats nats jetstream cluster new consumer leader for g events telegram alerts tickets created nats nats jetstream cluster new consumer leader for g events otrs connector events nats nats jetstream cluster new consumer leader for g events webhooks creator nats nats jetstream cluster new consumer leader for g events alerts reader for ticket closed nats nats jetstream cluster new consumer leader for g events telegram alerts tickets created nats nats raft falling behind in health check commit applied nats nats healthcheck failed jetstream is not current with the meta leader nats nats jetstream cluster new consumer leader for g ingestion raw ingestion raw connector nats nats jetstream cluster new consumer leader for g ingestion raw ingestion raw connector nats nats jetstream cluster new consumer leader for g alerts telegram telegram sender nats nats jetstream cluster new consumer leader for g alerts telegram telegram sender nats nats jetstream cluster new consumer leader for g alerts digest digests sender nats nats jetstream cluster new consumer leader for g alerts digest digests sender nats nats jetstream cluster new stream leader for g ingestor new incidents nats nats cid readloop processing time nats nats internal subscription on js api consumer msg next ingestion raw ingestion raw connector took too long nats nats cid readloop processing time nats nats cid readloop processing time nats nats internal subscription on js api consumer msg next ingestion raw ingestion raw connector took too long nats nats internal subscription on js api consumer msg next ingestion raw ingestion raw connector took too long nats nats cid readloop processing time nats nats cid readloop processing time nats nats cid readloop processing time nats nats jetstream cluster new consumer leader for g alerts telegram telegram sender nats nats jetstream cluster new consumer leader for g alerts telegram telegram sender nats nats jetstream cluster new consumer leader for g alerts digest digests sender nats nats jetstream cluster new consumer leader for g alerts digest digests sender nats nats jetstream cluster new consumer leader for g ingestion raw ingestion raw connector nats nats jetstream cluster new consumer leader for g ingestion raw ingestion raw connector nats nats jetstream cluster new consumer leader for g domains fuzz dnsfuzz sub ingestion nats nats jetstream cluster new consumer leader for g domains fuzz dns fuzz notification nats nats jetstream cluster new consumer leader for g domains fuzz dns fuzz notification nats nats jetstream cluster new consumer leader for g domains fuzz dnsfuzz sub ingestion nats nats jetstream cluster new consumer leader for g events otrs connector events nats nats jetstream cluster new consumer leader for g events webhooks creator nats nats jetstream cluster new consumer leader for g events alerts reader for ticket closed nats nats jetstream cluster new consumer leader for g events telegram alerts tickets created nats nats jetstream cluster new consumer leader for g events telegram alerts tickets created nats nats jetstream cluster new consumer leader for g events otrs connector events nats nats jetstream cluster new consumer leader for g events webhooks creator nats nats jetstream cluster new consumer leader for g events alerts reader for ticket closed nats nats internal subscription on js api consumer msg next ingestion raw ingestion raw connector took too long nats nats cid readloop processing time the problem seems related to this stream information for stream ingestion incidents created description incident coming from ingestion subjects ingestion incidents replicas storage file options retention limits acknowledgements true discard policy old duplicate window allows msg delete true allows purge true allows rollups false limits maximum messages unlimited maximum per subject unlimited maximum bytes unlimited maximum age maximum message size unlimited maximum consumers unlimited cluster information name nats leader nats replica nats current seen ago replica nats current seen ago state messages bytes mib firstseq utc lastseq utc active consumers number of subjects and this consumer information for consumer ingestion incidents ingestion ingestor created configuration name ingestion ingestor pull mode true deliver policy all ack policy explicit ack wait replay policy instant maximum deliveries max ack pending max waiting pulls cluster information name nats leader nats replica nats current seen ago replica nats current seen ago state last delivered message consumer sequence stream sequence last delivery ago acknowledgment floor consumer sequence stream sequence last ack ago outstanding acks out of maximum redelivered messages unprocessed messages waiting pulls of maximum moreover the consumer client is currently often getting the no heartbeat received error not sure if this is a consequence of the leader change or the cause given the capability you are leveraging describe your expectation i expect the overall system to be more robust leader election should not happen very freqeuntly while the system is running given the expectation what is the defect you are observing leader election is triggered too frequently causing the system to be not very reliable
| 1
|
111,298
| 14,049,061,765
|
IssuesEvent
|
2020-11-02 09:45:51
|
ajency/Dhanda-App
|
https://api.github.com/repos/ajency/Dhanda-App
|
opened
|
Traverse back thro app not provided for pop ups.
|
Design Issue
|
There is no option provided to traverse back in App other then using mobile back button.

|
1.0
|
Traverse back thro app not provided for pop ups. - There is no option provided to traverse back in App other then using mobile back button.

|
non_defect
|
traverse back thro app not provided for pop ups there is no option provided to traverse back in app other then using mobile back button
| 0
|
36,133
| 7,867,145,201
|
IssuesEvent
|
2018-06-23 04:14:37
|
StrikeNP/trac_test
|
https://api.github.com/repos/StrikeNP/trac_test
|
closed
|
Latin hypercube: Troubleshoot convergence of RICO with KK microphysics (Trac #506)
|
Migrated from Trac clubb_src defect dschanen@uwm.edu
|
Dave recently discovered that Latin hypercube doesn't converge when running RICO using KK microphysics (https://github.com/larson-group/trac_test/issues/498).
We should try to find the bug. It used to work.
Attachments:
[plot_explicit_ta_configs.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_explicit_ta_configs.maff)
[plot_new_pdf_config_1_plot_2.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_config_1_plot_2.maff)
[plot_combo_pdf_run_3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_combo_pdf_run_3.maff)
[plot_input_fields_rtp3_thlp3_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_input_fields_rtp3_thlp3_1.maff)
[plot_new_pdf_20180522_test_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_20180522_test_1.maff)
[plot_attempts_8_10.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempts_8_10.maff)
[plot_attempt_8_only.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempt_8_only.maff)
[plot_beta_1p3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3.maff)
[plot_beta_1p3_all.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3_all.maff)
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/506
```json
{
"status": "closed",
"changetime": "2012-07-03T19:19:21",
"description": "\nDave recently discovered that Latin hypercube doesn't converge when running RICO using KK microphysics (comment:27:ticket:498). \n\nWe should try to find the bug. It used to work.",
"reporter": "vlarson@uwm.edu",
"cc": "vlarson@uwm.edu",
"resolution": "fixed",
"_ts": "1341343161761514",
"component": "clubb_src",
"summary": "Latin hypercube: Troubleshoot convergence of RICO with KK microphysics",
"priority": "critical",
"keywords": "",
"time": "2012-04-19T21:53:29",
"milestone": "Improve SILHS",
"owner": "dschanen@uwm.edu",
"type": "defect"
}
```
|
1.0
|
Latin hypercube: Troubleshoot convergence of RICO with KK microphysics (Trac #506) -
Dave recently discovered that Latin hypercube doesn't converge when running RICO using KK microphysics (https://github.com/larson-group/trac_test/issues/498).
We should try to find the bug. It used to work.
Attachments:
[plot_explicit_ta_configs.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_explicit_ta_configs.maff)
[plot_new_pdf_config_1_plot_2.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_config_1_plot_2.maff)
[plot_combo_pdf_run_3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_combo_pdf_run_3.maff)
[plot_input_fields_rtp3_thlp3_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_input_fields_rtp3_thlp3_1.maff)
[plot_new_pdf_20180522_test_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_20180522_test_1.maff)
[plot_attempts_8_10.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempts_8_10.maff)
[plot_attempt_8_only.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempt_8_only.maff)
[plot_beta_1p3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3.maff)
[plot_beta_1p3_all.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3_all.maff)
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/506
```json
{
"status": "closed",
"changetime": "2012-07-03T19:19:21",
"description": "\nDave recently discovered that Latin hypercube doesn't converge when running RICO using KK microphysics (comment:27:ticket:498). \n\nWe should try to find the bug. It used to work.",
"reporter": "vlarson@uwm.edu",
"cc": "vlarson@uwm.edu",
"resolution": "fixed",
"_ts": "1341343161761514",
"component": "clubb_src",
"summary": "Latin hypercube: Troubleshoot convergence of RICO with KK microphysics",
"priority": "critical",
"keywords": "",
"time": "2012-04-19T21:53:29",
"milestone": "Improve SILHS",
"owner": "dschanen@uwm.edu",
"type": "defect"
}
```
|
defect
|
latin hypercube troubleshoot convergence of rico with kk microphysics trac dave recently discovered that latin hypercube doesn t converge when running rico using kk microphysics we should try to find the bug it used to work attachments migrated from json status closed changetime description ndave recently discovered that latin hypercube doesn t converge when running rico using kk microphysics comment ticket n nwe should try to find the bug it used to work reporter vlarson uwm edu cc vlarson uwm edu resolution fixed ts component clubb src summary latin hypercube troubleshoot convergence of rico with kk microphysics priority critical keywords time milestone improve silhs owner dschanen uwm edu type defect
| 1
|
4,043
| 2,610,086,472
|
IssuesEvent
|
2015-02-26 18:26:16
|
chrsmith/dsdsdaadf
|
https://api.github.com/repos/chrsmith/dsdsdaadf
|
opened
|
深圳除青春痘的价格
|
auto-migrated Priority-Medium Type-Defect
|
```
深圳除青春痘的价格【深圳韩方科颜全国热线400-869-1818,24小
时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国��
�方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩�
��科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”
健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专��
�治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的�
��痘。
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:12
|
1.0
|
深圳除青春痘的价格 - ```
深圳除青春痘的价格【深圳韩方科颜全国热线400-869-1818,24小
时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国��
�方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩�
��科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”
健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专��
�治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的�
��痘。
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:12
|
defect
|
深圳除青春痘的价格 深圳除青春痘的价格【 , 】深圳韩方科颜专业祛痘连锁机构,机构以韩国�� �方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩� ��科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹” 健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专�� �治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的� ��痘。 original issue reported on code google com by szft com on may at
| 1
|
23,951
| 3,874,845,943
|
IssuesEvent
|
2016-04-11 21:58:19
|
ariya/phantomjs
|
https://api.github.com/repos/ariya/phantomjs
|
closed
|
Unable to compile phantomjs 1.5+ on armhf build
|
old.Priority-Medium old.Status-New old.Type-Defect
|
_**[brendan....@gmail.com](http://code.google.com/u/106680562535541964315/) commented:**_
> <b>Which version of PhantomJS are you using? Tip: run 'phantomjs --version'.</b>
trying to compile 1.8,1.7,1.6 or 1.5
>
> <b>What steps will reproduce the problem?</b>
> 1.sudo apt-get install build-essential chrpath git-core libssl-dev libfontconfig1-dev
> 2. git clone git://github.com/ariya/phantomjs.git
> 3. cd phantomjs
> 4. git checkout 1.8
> 5. ./build.sh
>
> <b>What is the expected output? What do you see instead?</b>
1.8,1.7, and 1.6 all return immediately to the command prompt with no error. 1.5 attempts to compile. Output attached.
>
> From what I can see the errors that might mean something are:
> * WARNING: /usr/local/src/phantomjs/src/qt/src/gui/gui.pro:44: Unable to find file for inclusion egl/egl.pri
> * WARNING: Failure to find: ../3rdparty/pixman/pixman-arm-neon-asm.S
> * collect2: ld returned 1 exit status
> * make[1]: *** [../bin/phantomjs] Error 1
> * make[1]: Leaving directory `/usr/local/src/phantomjs/src'
> * make: *** [sub-src-phantomjs-pro-make_default-ordered] Error 2
>
> <b>Which operating system are you using?</b>
Bodhilinux armhf on an efikamx
>
> <b>Did you use binary PhantomJS or did you compile it from source?</b>
Trying to compile.
>
> <b>Please provide any additional information below.</b>
Sorry this is my first experience with attempting to compile from source and I'm pretty new to linux fullstop. I've been using phantomjs on a windows dev box I have but trying to set up phantomjs to work on my linux webserver.
>
> Any ideas with the errors? Or why I don't even seem to be able to initiate the script for 1.6-1.8?
>
> Thanks,
> Brendan
**Disclaimer:**
This issue was migrated on 2013-03-15 from the project's former issue tracker on Google Code, [Issue #1064](http://code.google.com/p/phantomjs/issues/detail?id=1064).
:star2: **2** people had starred this issue at the time of migration.
|
1.0
|
Unable to compile phantomjs 1.5+ on armhf build - _**[brendan....@gmail.com](http://code.google.com/u/106680562535541964315/) commented:**_
> <b>Which version of PhantomJS are you using? Tip: run 'phantomjs --version'.</b>
trying to compile 1.8,1.7,1.6 or 1.5
>
> <b>What steps will reproduce the problem?</b>
> 1.sudo apt-get install build-essential chrpath git-core libssl-dev libfontconfig1-dev
> 2. git clone git://github.com/ariya/phantomjs.git
> 3. cd phantomjs
> 4. git checkout 1.8
> 5. ./build.sh
>
> <b>What is the expected output? What do you see instead?</b>
1.8,1.7, and 1.6 all return immediately to the command prompt with no error. 1.5 attempts to compile. Output attached.
>
> From what I can see the errors that might mean something are:
> * WARNING: /usr/local/src/phantomjs/src/qt/src/gui/gui.pro:44: Unable to find file for inclusion egl/egl.pri
> * WARNING: Failure to find: ../3rdparty/pixman/pixman-arm-neon-asm.S
> * collect2: ld returned 1 exit status
> * make[1]: *** [../bin/phantomjs] Error 1
> * make[1]: Leaving directory `/usr/local/src/phantomjs/src'
> * make: *** [sub-src-phantomjs-pro-make_default-ordered] Error 2
>
> <b>Which operating system are you using?</b>
Bodhilinux armhf on an efikamx
>
> <b>Did you use binary PhantomJS or did you compile it from source?</b>
Trying to compile.
>
> <b>Please provide any additional information below.</b>
Sorry this is my first experience with attempting to compile from source and I'm pretty new to linux fullstop. I've been using phantomjs on a windows dev box I have but trying to set up phantomjs to work on my linux webserver.
>
> Any ideas with the errors? Or why I don't even seem to be able to initiate the script for 1.6-1.8?
>
> Thanks,
> Brendan
**Disclaimer:**
This issue was migrated on 2013-03-15 from the project's former issue tracker on Google Code, [Issue #1064](http://code.google.com/p/phantomjs/issues/detail?id=1064).
:star2: **2** people had starred this issue at the time of migration.
|
defect
|
unable to compile phantomjs on armhf build commented which version of phantomjs are you using tip run phantomjs version trying to compile or what steps will reproduce the problem sudo apt get install build essential chrpath git core libssl dev dev git clone git github com ariya phantomjs git cd phantomjs git checkout build sh what is the expected output what do you see instead and all return immediately to the command prompt with no error attempts to compile output attached from what i can see the errors that might mean something are warning usr local src phantomjs src qt src gui gui pro unable to find file for inclusion egl egl pri warning failure to find pixman pixman arm neon asm s ld returned exit status make error make leaving directory usr local src phantomjs src make error which operating system are you using bodhilinux armhf on an efikamx did you use binary phantomjs or did you compile it from source trying to compile please provide any additional information below sorry this is my first experience with attempting to compile from source and i m pretty new to linux fullstop i ve been using phantomjs on a windows dev box i have but trying to set up phantomjs to work on my linux webserver any ideas with the errors or why i don t even seem to be able to initiate the script for thanks brendan disclaimer this issue was migrated on from the project s former issue tracker on google code nbsp people had starred this issue at the time of migration
| 1
|
60,796
| 17,023,524,815
|
IssuesEvent
|
2021-07-03 02:28:17
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
When searching for a city, it zooms in too far
|
Component: nominatim Priority: major Resolution: fixed Type: defect
|
**[Submitted to the original trac issue database at 4.35pm, Friday, 11th December 2009]**
I did a search on nominatim for 'Leeds', the city in the UK, and it found it. However the map zoomed in to the centre of the city, at the highest zoom level, right where the node is I assume. At first I wasn't sure if it had found the city, and was showing me "Leeds Street". I think when you search for a place=city, it should not zoom into the highest zoom level, it should zoom out a bit.
|
1.0
|
When searching for a city, it zooms in too far - **[Submitted to the original trac issue database at 4.35pm, Friday, 11th December 2009]**
I did a search on nominatim for 'Leeds', the city in the UK, and it found it. However the map zoomed in to the centre of the city, at the highest zoom level, right where the node is I assume. At first I wasn't sure if it had found the city, and was showing me "Leeds Street". I think when you search for a place=city, it should not zoom into the highest zoom level, it should zoom out a bit.
|
defect
|
when searching for a city it zooms in too far i did a search on nominatim for leeds the city in the uk and it found it however the map zoomed in to the centre of the city at the highest zoom level right where the node is i assume at first i wasn t sure if it had found the city and was showing me leeds street i think when you search for a place city it should not zoom into the highest zoom level it should zoom out a bit
| 1
|
679,831
| 23,247,110,379
|
IssuesEvent
|
2022-08-03 21:25:55
|
pokt-network/pocket
|
https://api.github.com/repos/pokt-network/pocket
|
closed
|
[Infra] Configuration Loader
|
priority:low infra
|
At present we are loading configurations from files exclusively. At some point we'll start adding flags to override specific configuration values.
A configuration loader should be written to make it so that flags are auto-generated in very much the same way configuration files can be automatically loaded by matching the Go structure to file contents.
This is low-burning at the moment, since configuration is in flux, but is part of the wiring milestone.
|
1.0
|
[Infra] Configuration Loader - At present we are loading configurations from files exclusively. At some point we'll start adding flags to override specific configuration values.
A configuration loader should be written to make it so that flags are auto-generated in very much the same way configuration files can be automatically loaded by matching the Go structure to file contents.
This is low-burning at the moment, since configuration is in flux, but is part of the wiring milestone.
|
non_defect
|
configuration loader at present we are loading configurations from files exclusively at some point we ll start adding flags to override specific configuration values a configuration loader should be written to make it so that flags are auto generated in very much the same way configuration files can be automatically loaded by matching the go structure to file contents this is low burning at the moment since configuration is in flux but is part of the wiring milestone
| 0
|
53,002
| 13,260,069,302
|
IssuesEvent
|
2020-08-20 17:38:33
|
jkoan/test-navit
|
https://api.github.com/repos/jkoan/test-navit
|
closed
|
Searching for street names on OSM maps doesn't work (Trac #41)
|
Incomplete Migration Migrated from Trac core cp15 defect/bug
|
Migrated from http://trac.navit-project.org/ticket/41
```json
{
"status": "closed",
"changetime": "2008-08-20T14:28:59",
"_ts": "1219242539000000",
"description": "On OSM maps I have to find the street on the map myself, in order to set the destination. It'd nice to be able to type it in instead, and have navit find it for me.\n\nThere is of course the problem that the same street name exist in many places, and OSM don't usually have any information about postcodes or regions. Still, the names could be shown in a list, along with the name of the city/town/village/suburb closest to the road geographically. People usually know if the look for a road in/near Trondheim or Oslo.\n\nWhen there are several roads with the same name in the same city, chances are that it is one road that forks or is divided due to traffic restrictions. Consider zooming so all segments are visible, and highlight them so the user can choose a destination.\n\nOSM does not currently have house numbers. So the best we can do is to lead the driver to the street, and then consider the mission accomplished.\n\n",
"reporter": "hafting",
"cc": "",
"resolution": "invalid",
"time": "2007-12-13T15:01:11",
"component": "core",
"summary": "Searching for street names on OSM maps doesn't work",
"priority": "major",
"keywords": "",
"version": "",
"milestone": "version 0.1.0",
"owner": "cp15",
"type": "defect/bug",
"severity": ""
}
```
|
1.0
|
Searching for street names on OSM maps doesn't work (Trac #41) - Migrated from http://trac.navit-project.org/ticket/41
```json
{
"status": "closed",
"changetime": "2008-08-20T14:28:59",
"_ts": "1219242539000000",
"description": "On OSM maps I have to find the street on the map myself, in order to set the destination. It'd nice to be able to type it in instead, and have navit find it for me.\n\nThere is of course the problem that the same street name exist in many places, and OSM don't usually have any information about postcodes or regions. Still, the names could be shown in a list, along with the name of the city/town/village/suburb closest to the road geographically. People usually know if the look for a road in/near Trondheim or Oslo.\n\nWhen there are several roads with the same name in the same city, chances are that it is one road that forks or is divided due to traffic restrictions. Consider zooming so all segments are visible, and highlight them so the user can choose a destination.\n\nOSM does not currently have house numbers. So the best we can do is to lead the driver to the street, and then consider the mission accomplished.\n\n",
"reporter": "hafting",
"cc": "",
"resolution": "invalid",
"time": "2007-12-13T15:01:11",
"component": "core",
"summary": "Searching for street names on OSM maps doesn't work",
"priority": "major",
"keywords": "",
"version": "",
"milestone": "version 0.1.0",
"owner": "cp15",
"type": "defect/bug",
"severity": ""
}
```
|
defect
|
searching for street names on osm maps doesn t work trac migrated from json status closed changetime ts description on osm maps i have to find the street on the map myself in order to set the destination it d nice to be able to type it in instead and have navit find it for me n nthere is of course the problem that the same street name exist in many places and osm don t usually have any information about postcodes or regions still the names could be shown in a list along with the name of the city town village suburb closest to the road geographically people usually know if the look for a road in near trondheim or oslo n nwhen there are several roads with the same name in the same city chances are that it is one road that forks or is divided due to traffic restrictions consider zooming so all segments are visible and highlight them so the user can choose a destination n nosm does not currently have house numbers so the best we can do is to lead the driver to the street and then consider the mission accomplished n n reporter hafting cc resolution invalid time component core summary searching for street names on osm maps doesn t work priority major keywords version milestone version owner type defect bug severity
| 1
|
102,755
| 16,583,868,669
|
IssuesEvent
|
2021-05-31 15:28:57
|
ilan-WS/m3
|
https://api.github.com/repos/ilan-WS/m3
|
opened
|
CVE-2020-7788 (High) detected in ini-1.3.5.tgz
|
security vulnerability
|
## CVE-2020-7788 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ini-1.3.5.tgz</b></p></summary>
<p>An ini encoder/decoder for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/ini/-/ini-1.3.5.tgz">https://registry.npmjs.org/ini/-/ini-1.3.5.tgz</a></p>
<p>Path to dependency file: m3/src/ctl/ui/package.json</p>
<p>Path to vulnerable library: m3/src/ctl/ui/node_modules/ini</p>
<p>
Dependency Hierarchy:
- uber-licence-3.1.1.tgz (Root Library)
- update-notifier-1.0.3.tgz
- latest-version-2.0.0.tgz
- package-json-2.4.0.tgz
- registry-auth-token-3.4.0.tgz
- rc-1.2.8.tgz
- :x: **ini-1.3.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ilan-WS/m3/commit/a62d2ead44380e2c1668bbbf026d5385b98d56ec">a62d2ead44380e2c1668bbbf026d5385b98d56ec</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package ini before 1.3.6. If an attacker submits a malicious INI file to an application that parses it with ini.parse, they will pollute the prototype on the application. This can be exploited further depending on the context.
<p>Publish Date: 2020-12-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7788>CVE-2020-7788</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7788">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7788</a></p>
<p>Release Date: 2020-12-11</p>
<p>Fix Resolution: v1.3.6</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"ini","packageVersion":"1.3.5","packageFilePaths":["/src/ctl/ui/package.json"],"isTransitiveDependency":true,"dependencyTree":"uber-licence:3.1.1;update-notifier:1.0.3;latest-version:2.0.0;package-json:2.4.0;registry-auth-token:3.4.0;rc:1.2.8;ini:1.3.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"v1.3.6"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-7788","vulnerabilityDetails":"This affects the package ini before 1.3.6. If an attacker submits a malicious INI file to an application that parses it with ini.parse, they will pollute the prototype on the application. This can be exploited further depending on the context.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7788","cvss3Severity":"high","cvss3Score":"7.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-7788 (High) detected in ini-1.3.5.tgz - ## CVE-2020-7788 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ini-1.3.5.tgz</b></p></summary>
<p>An ini encoder/decoder for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/ini/-/ini-1.3.5.tgz">https://registry.npmjs.org/ini/-/ini-1.3.5.tgz</a></p>
<p>Path to dependency file: m3/src/ctl/ui/package.json</p>
<p>Path to vulnerable library: m3/src/ctl/ui/node_modules/ini</p>
<p>
Dependency Hierarchy:
- uber-licence-3.1.1.tgz (Root Library)
- update-notifier-1.0.3.tgz
- latest-version-2.0.0.tgz
- package-json-2.4.0.tgz
- registry-auth-token-3.4.0.tgz
- rc-1.2.8.tgz
- :x: **ini-1.3.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ilan-WS/m3/commit/a62d2ead44380e2c1668bbbf026d5385b98d56ec">a62d2ead44380e2c1668bbbf026d5385b98d56ec</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package ini before 1.3.6. If an attacker submits a malicious INI file to an application that parses it with ini.parse, they will pollute the prototype on the application. This can be exploited further depending on the context.
<p>Publish Date: 2020-12-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7788>CVE-2020-7788</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7788">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7788</a></p>
<p>Release Date: 2020-12-11</p>
<p>Fix Resolution: v1.3.6</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"ini","packageVersion":"1.3.5","packageFilePaths":["/src/ctl/ui/package.json"],"isTransitiveDependency":true,"dependencyTree":"uber-licence:3.1.1;update-notifier:1.0.3;latest-version:2.0.0;package-json:2.4.0;registry-auth-token:3.4.0;rc:1.2.8;ini:1.3.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"v1.3.6"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-7788","vulnerabilityDetails":"This affects the package ini before 1.3.6. If an attacker submits a malicious INI file to an application that parses it with ini.parse, they will pollute the prototype on the application. This can be exploited further depending on the context.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7788","cvss3Severity":"high","cvss3Score":"7.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
non_defect
|
cve high detected in ini tgz cve high severity vulnerability vulnerable library ini tgz an ini encoder decoder for node library home page a href path to dependency file src ctl ui package json path to vulnerable library src ctl ui node modules ini dependency hierarchy uber licence tgz root library update notifier tgz latest version tgz package json tgz registry auth token tgz rc tgz x ini tgz vulnerable library found in head commit a href found in base branch master vulnerability details this affects the package ini before if an attacker submits a malicious ini file to an application that parses it with ini parse they will pollute the prototype on the application this can be exploited further depending on the context publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree uber licence update notifier latest version package json registry auth token rc ini isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails this affects the package ini before if an attacker submits a malicious ini file to an application that parses it with ini parse they will pollute the prototype on the application this can be exploited further depending on the context vulnerabilityurl
| 0
|
41,453
| 10,470,972,169
|
IssuesEvent
|
2019-09-23 06:23:34
|
melink14/rikaikun
|
https://api.github.com/repos/melink14/rikaikun
|
closed
|
Type Error, when click on tab after not clicking on it
|
Priority-High Type-Defect auto-migrated
|
```
No actually present error but here's the output:
arguments: Array
message: "Cannot read property 'rikaichan' of undefined"
stack: "TypeError: Cannot read property 'rikaichan' of undefined at
Object.enableTab (chrome-extension:/…"
type: "non_object_property_load"
This can be found on wwwjdic.
```
Original issue reported on code.google.com by `melin...@gmail.com` on 12 Jan 2010 at 3:52
|
1.0
|
Type Error, when click on tab after not clicking on it - ```
No actually present error but here's the output:
arguments: Array
message: "Cannot read property 'rikaichan' of undefined"
stack: "TypeError: Cannot read property 'rikaichan' of undefined at
Object.enableTab (chrome-extension:/…"
type: "non_object_property_load"
This can be found on wwwjdic.
```
Original issue reported on code.google.com by `melin...@gmail.com` on 12 Jan 2010 at 3:52
|
defect
|
type error when click on tab after not clicking on it no actually present error but here s the output arguments array message cannot read property rikaichan of undefined stack typeerror cannot read property rikaichan of undefined at object enabletab chrome extension … type non object property load this can be found on wwwjdic original issue reported on code google com by melin gmail com on jan at
| 1
|
28,703
| 5,345,005,518
|
IssuesEvent
|
2017-02-17 15:55:41
|
chandanbansal/gmail-backup
|
https://api.github.com/repos/chandanbansal/gmail-backup
|
closed
|
Hide password in command line
|
auto-migrated Priority-Medium Type-Defect
|
```
Hi,
it is more an enhancement request than a bug:
When gmail-backup is launched thru command line, the password can be read in
clear text within the process list. Not very secure.
Can you add a mecanism where password can be read from a file, or modify the
launch process to hide the password (like mysqldump by example)?
Thanks.
```
Original issue reported on code.google.com by `vallo...@googlemail.com` on 26 Jul 2011 at 12:41
|
1.0
|
Hide password in command line - ```
Hi,
it is more an enhancement request than a bug:
When gmail-backup is launched thru command line, the password can be read in
clear text within the process list. Not very secure.
Can you add a mecanism where password can be read from a file, or modify the
launch process to hide the password (like mysqldump by example)?
Thanks.
```
Original issue reported on code.google.com by `vallo...@googlemail.com` on 26 Jul 2011 at 12:41
|
defect
|
hide password in command line hi it is more an enhancement request than a bug when gmail backup is launched thru command line the password can be read in clear text within the process list not very secure can you add a mecanism where password can be read from a file or modify the launch process to hide the password like mysqldump by example thanks original issue reported on code google com by vallo googlemail com on jul at
| 1
|
75,227
| 25,599,802,806
|
IssuesEvent
|
2022-12-01 19:07:53
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
opened
|
Element-Desktop: jitsi frame blocked by CSP frame-ancestors 'self', but working on Web/Mobile
|
T-Defect
|
### Steps to reproduce
1. Where are you starting? What can you see?
2. What do you click?
3. More steps…
Hello!
My self-hosted jitsi-meet and matrix-synapse instance using [matrix-docker-ansible-deploy](https://github.com/spantaleev/matrix-docker-ansible-deploy) has been running for a few months now without any issues.
The Element-Desktop client app is requesting the Jitsi Widget frame through *VECTOR-IM's* own domain *https://app.element.io* and is being blocked by the *Content Security Policy* directive: **"frame-ancestors 'self' \*.example.com"**. Whereas Element-Web and also Element for mobile both are using the whitelisted/allowed subdomain *https://element.example.com*.
##### Element-Desktop:
https://**app.element.io**/jitsi.html#**conferenceDomain=jitsi.example.com**&conferenceId=Jitsi[...]
##### Element-Web:
https://**element.example.com**/jitsi.html#**conferenceDomain=jitsi.example.com**&conferenceId=Jitsi[...]
This is what the relevand log returns after tapping on *Joyn conference*.
Dev-tools console log (shortened and :
```
Refused to frame 'https://jitsi.example.com/' because an ancestor violates the following Content Security Policy directive: "frame-ancestors 'self' *.example.com".
```
This issue is only reproducible on element-desktop.
### Outcome
#### What did you expect?
Joyn conference
#### What happened instead?
### Operating system
Debian 10 (buster), x86_64
### Application version
Element-Desktop: 1.11.15
### How did you install the app?
```bashapt-get install element-desktop
### Homeserver
Synapse v1.71.0
### Will you send logs?
Yes
|
1.0
|
Element-Desktop: jitsi frame blocked by CSP frame-ancestors 'self', but working on Web/Mobile - ### Steps to reproduce
1. Where are you starting? What can you see?
2. What do you click?
3. More steps…
Hello!
My self-hosted jitsi-meet and matrix-synapse instance using [matrix-docker-ansible-deploy](https://github.com/spantaleev/matrix-docker-ansible-deploy) has been running for a few months now without any issues.
The Element-Desktop client app is requesting the Jitsi Widget frame through *VECTOR-IM's* own domain *https://app.element.io* and is being blocked by the *Content Security Policy* directive: **"frame-ancestors 'self' \*.example.com"**. Whereas Element-Web and also Element for mobile both are using the whitelisted/allowed subdomain *https://element.example.com*.
##### Element-Desktop:
https://**app.element.io**/jitsi.html#**conferenceDomain=jitsi.example.com**&conferenceId=Jitsi[...]
##### Element-Web:
https://**element.example.com**/jitsi.html#**conferenceDomain=jitsi.example.com**&conferenceId=Jitsi[...]
This is what the relevand log returns after tapping on *Joyn conference*.
Dev-tools console log (shortened and :
```
Refused to frame 'https://jitsi.example.com/' because an ancestor violates the following Content Security Policy directive: "frame-ancestors 'self' *.example.com".
```
This issue is only reproducible on element-desktop.
### Outcome
#### What did you expect?
Joyn conference
#### What happened instead?
### Operating system
Debian 10 (buster), x86_64
### Application version
Element-Desktop: 1.11.15
### How did you install the app?
```bashapt-get install element-desktop
### Homeserver
Synapse v1.71.0
### Will you send logs?
Yes
|
defect
|
element desktop jitsi frame blocked by csp frame ancestors self but working on web mobile steps to reproduce where are you starting what can you see what do you click more steps… hello my self hosted jitsi meet and matrix synapse instance using has been running for a few months now without any issues the element desktop client app is requesting the jitsi widget frame through vector im s own domain and is being blocked by the content security policy directive frame ancestors self example com whereas element web and also element for mobile both are using the whitelisted allowed subdomain element desktop element web this is what the relevand log returns after tapping on joyn conference dev tools console log shortened and refused to frame because an ancestor violates the following content security policy directive frame ancestors self example com this issue is only reproducible on element desktop outcome what did you expect joyn conference what happened instead operating system debian buster application version element desktop how did you install the app bashapt get install element desktop homeserver synapse will you send logs yes
| 1
|
215,739
| 24,196,497,030
|
IssuesEvent
|
2022-09-24 01:07:17
|
DavidSpek/pipelines
|
https://api.github.com/repos/DavidSpek/pipelines
|
opened
|
CVE-2022-35973 (High) detected in tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl
|
security vulnerability
|
## CVE-2022-35973 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/ec/98/f968caf5f65759e78873b900cbf0ae20b1699fb11268ecc0f892186419a7/tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/ec/98/f968caf5f65759e78873b900cbf0ae20b1699fb11268ecc0f892186419a7/tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: /contrib/components/openvino/ovms-deployer/containers/requirements.txt</p>
<p>Path to vulnerable library: /contrib/components/openvino/ovms-deployer/containers/requirements.txt,/samples/core/ai_platform/training</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/DavidSpek/pipelines/commit/6f7433f006e282c4f25441e7502b80d73751e38f">6f7433f006e282c4f25441e7502b80d73751e38f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an open source platform for machine learning. If `QuantizedMatMul` is given nonscalar input for: `min_a`, `max_a`, `min_b`, or `max_b` It gives a segfault that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit aca766ac7693bf29ed0df55ad6bfcc78f35e7f48. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known workarounds for this issue.
<p>Publish Date: 2022-09-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-35973>CVE-2022-35973</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-689c-r7h2-fv9v">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-689c-r7h2-fv9v</a></p>
<p>Release Date: 2022-09-16</p>
<p>Fix Resolution: tensorflow - 2.7.2,2.8.1,2.9.1,2.10.0, tensorflow-cpu - 2.7.2,2.8.1,2.9.1,2.10.0, tensorflow-gpu - 2.7.2,2.8.1,2.9.1,2.10.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-35973 (High) detected in tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl - ## CVE-2022-35973 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/ec/98/f968caf5f65759e78873b900cbf0ae20b1699fb11268ecc0f892186419a7/tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/ec/98/f968caf5f65759e78873b900cbf0ae20b1699fb11268ecc0f892186419a7/tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: /contrib/components/openvino/ovms-deployer/containers/requirements.txt</p>
<p>Path to vulnerable library: /contrib/components/openvino/ovms-deployer/containers/requirements.txt,/samples/core/ai_platform/training</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/DavidSpek/pipelines/commit/6f7433f006e282c4f25441e7502b80d73751e38f">6f7433f006e282c4f25441e7502b80d73751e38f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an open source platform for machine learning. If `QuantizedMatMul` is given nonscalar input for: `min_a`, `max_a`, `min_b`, or `max_b` It gives a segfault that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit aca766ac7693bf29ed0df55ad6bfcc78f35e7f48. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known workarounds for this issue.
<p>Publish Date: 2022-09-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-35973>CVE-2022-35973</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-689c-r7h2-fv9v">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-689c-r7h2-fv9v</a></p>
<p>Release Date: 2022-09-16</p>
<p>Fix Resolution: tensorflow - 2.7.2,2.8.1,2.9.1,2.10.0, tensorflow-cpu - 2.7.2,2.8.1,2.9.1,2.10.0, tensorflow-gpu - 2.7.2,2.8.1,2.9.1,2.10.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve high detected in tensorflow whl cve high severity vulnerability vulnerable library tensorflow whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file contrib components openvino ovms deployer containers requirements txt path to vulnerable library contrib components openvino ovms deployer containers requirements txt samples core ai platform training dependency hierarchy x tensorflow whl vulnerable library found in head commit a href found in base branch master vulnerability details tensorflow is an open source platform for machine learning if quantizedmatmul is given nonscalar input for min a max a min b or max b it gives a segfault that can be used to trigger a denial of service attack we have patched the issue in github commit the fix will be included in tensorflow we will also cherrypick this commit on tensorflow tensorflow and tensorflow as these are also affected and still in supported range there are no known workarounds for this issue publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tensorflow tensorflow cpu tensorflow gpu step up your open source security game with mend
| 0
|
11,570
| 14,441,672,343
|
IssuesEvent
|
2020-12-07 17:05:32
|
frontendbr/forum
|
https://api.github.com/repos/frontendbr/forum
|
closed
|
Documentação e padrões de componentes / design
|
Processos [Discussão]
|
Pessoal, gostaria de saber como e onde vocês fazem uma documentação de tudo referente ao design de componentes e explicações de quando usar cada elemento dentro do produto em que vocês atuam.
Estou com um problema hoje onde trabalho, pois existe uma página na internet onde existem alguns componentes que o sistema usa, porém está muito desatualizado. Uma designer fez um documento em PDF onde falava sobre os componentes, adicionando alguns novos também, mas não sei se documento escrito assim seria o ideal.
Outra coisa também que gera bastante dor de cabeça é desenvolver componentes "fora do padrão". Pois como os documentos são antigos, quando desenvolvemos um módulo novo como estamos fazendo agora, em alguns casos específicos, o comportamento das novas telas não é igual ao de telas de módulos antigos já do sistema, o que acaba gerando retornos por estar "fora do padrão".
Se puderem me indicar algumas soluções que vocês utilizam para fazer essas documentações de estilo e componentes de forma fácil... Desde já agradeço!
|
1.0
|
Documentação e padrões de componentes / design - Pessoal, gostaria de saber como e onde vocês fazem uma documentação de tudo referente ao design de componentes e explicações de quando usar cada elemento dentro do produto em que vocês atuam.
Estou com um problema hoje onde trabalho, pois existe uma página na internet onde existem alguns componentes que o sistema usa, porém está muito desatualizado. Uma designer fez um documento em PDF onde falava sobre os componentes, adicionando alguns novos também, mas não sei se documento escrito assim seria o ideal.
Outra coisa também que gera bastante dor de cabeça é desenvolver componentes "fora do padrão". Pois como os documentos são antigos, quando desenvolvemos um módulo novo como estamos fazendo agora, em alguns casos específicos, o comportamento das novas telas não é igual ao de telas de módulos antigos já do sistema, o que acaba gerando retornos por estar "fora do padrão".
Se puderem me indicar algumas soluções que vocês utilizam para fazer essas documentações de estilo e componentes de forma fácil... Desde já agradeço!
|
non_defect
|
documentação e padrões de componentes design pessoal gostaria de saber como e onde vocês fazem uma documentação de tudo referente ao design de componentes e explicações de quando usar cada elemento dentro do produto em que vocês atuam estou com um problema hoje onde trabalho pois existe uma página na internet onde existem alguns componentes que o sistema usa porém está muito desatualizado uma designer fez um documento em pdf onde falava sobre os componentes adicionando alguns novos também mas não sei se documento escrito assim seria o ideal outra coisa também que gera bastante dor de cabeça é desenvolver componentes fora do padrão pois como os documentos são antigos quando desenvolvemos um módulo novo como estamos fazendo agora em alguns casos específicos o comportamento das novas telas não é igual ao de telas de módulos antigos já do sistema o que acaba gerando retornos por estar fora do padrão se puderem me indicar algumas soluções que vocês utilizam para fazer essas documentações de estilo e componentes de forma fácil desde já agradeço
| 0
|
58,294
| 8,245,359,850
|
IssuesEvent
|
2018-09-11 09:28:14
|
CLARIAH/wp5_mediasuite
|
https://api.github.com/repos/CLARIAH/wp5_mediasuite
|
reopened
|
As a service provider, I need to add a "terms of service"/policy document to the Media Suite
|
Done & tested! Theme: Documentation/tool-tips
|
Add "terms of service" document. Users should agree explicitly on two things:
-Fairness of use of the data
-Being "tracked" in analytics system for improvement purposes
Explain conditions of use of the content and metadata offered by the Media Suite.
|
1.0
|
As a service provider, I need to add a "terms of service"/policy document to the Media Suite - Add "terms of service" document. Users should agree explicitly on two things:
-Fairness of use of the data
-Being "tracked" in analytics system for improvement purposes
Explain conditions of use of the content and metadata offered by the Media Suite.
|
non_defect
|
as a service provider i need to add a terms of service policy document to the media suite add terms of service document users should agree explicitly on two things fairness of use of the data being tracked in analytics system for improvement purposes explain conditions of use of the content and metadata offered by the media suite
| 0
|
176,739
| 14,595,316,463
|
IssuesEvent
|
2020-12-20 10:53:11
|
Fluhzar/WAV
|
https://api.github.com/repos/Fluhzar/WAV
|
closed
|
Complete 0.5
|
documentation
|
- [x] Update `Cargo.toml` version
- [x] Update `CHANGELOG.md`
- [x] Update documentation
- [x] Publish to crates.io
|
1.0
|
Complete 0.5 - - [x] Update `Cargo.toml` version
- [x] Update `CHANGELOG.md`
- [x] Update documentation
- [x] Publish to crates.io
|
non_defect
|
complete update cargo toml version update changelog md update documentation publish to crates io
| 0
|
70,462
| 23,178,275,104
|
IssuesEvent
|
2022-07-31 18:51:55
|
primefaces/primereact
|
https://api.github.com/repos/primefaces/primereact
|
opened
|
Dialog: Flickering
|
defect :bangbang: needs-triage
|
### Describe the bug
When seting then visible property to true (opening the dialog) for an instant it show the full dialog as if it was already open, then start the "pretty" opening animation.
I was able to overcome the issue by keeping the component hidden when there are no other classes that indicate it should be visible.
```css
.p-dialog:not([class*="p-dialog-"]){
display: none;
}
.p-sidebar:not([class*="p-sidebar-"]){
display: none;
}
```
The error can be reproduced in the official TS Source Demo, as side note i noticed the same behavior in the sidebar component.
### Reproducer
https://fzlu8k.csb.app/
### PrimeReact version
8.3.0
### React version
18.x
### Language
TypeScript
### Build / Runtime
Vite
### Browser(s)
Chrome 103, firefox 103
### Steps to reproduce the behavior
1. Add a dialog as in the official documentation, there are no specific steps to reproduce te error.
### Expected behavior
A clean transition from hide to visible without flashing the full dialog in between.
|
1.0
|
Dialog: Flickering - ### Describe the bug
When seting then visible property to true (opening the dialog) for an instant it show the full dialog as if it was already open, then start the "pretty" opening animation.
I was able to overcome the issue by keeping the component hidden when there are no other classes that indicate it should be visible.
```css
.p-dialog:not([class*="p-dialog-"]){
display: none;
}
.p-sidebar:not([class*="p-sidebar-"]){
display: none;
}
```
The error can be reproduced in the official TS Source Demo, as side note i noticed the same behavior in the sidebar component.
### Reproducer
https://fzlu8k.csb.app/
### PrimeReact version
8.3.0
### React version
18.x
### Language
TypeScript
### Build / Runtime
Vite
### Browser(s)
Chrome 103, firefox 103
### Steps to reproduce the behavior
1. Add a dialog as in the official documentation, there are no specific steps to reproduce te error.
### Expected behavior
A clean transition from hide to visible without flashing the full dialog in between.
|
defect
|
dialog flickering describe the bug when seting then visible property to true opening the dialog for an instant it show the full dialog as if it was already open then start the pretty opening animation i was able to overcome the issue by keeping the component hidden when there are no other classes that indicate it should be visible css p dialog not display none p sidebar not display none the error can be reproduced in the official ts source demo as side note i noticed the same behavior in the sidebar component reproducer primereact version react version x language typescript build runtime vite browser s chrome firefox steps to reproduce the behavior add a dialog as in the official documentation there are no specific steps to reproduce te error expected behavior a clean transition from hide to visible without flashing the full dialog in between
| 1
|
307,391
| 23,197,805,839
|
IssuesEvent
|
2022-08-01 18:10:20
|
MAAP-Project/maap-documentation-examples
|
https://api.github.com/repos/MAAP-Project/maap-documentation-examples
|
closed
|
Update "Running a GEDI Subsetting DPS Job" Code Example
|
documentation
|
```
import uuid
from maap.maap import MAAP
maap = MAAP(maap_host='api.ops.maap-project.org')
aoi = "<AOI GeoJSON URL>" # See previous section
limit = 2000 # Maximum number of granule files to download
result = maap.submitJob(
identifier="<DESCRIPTION>",
algo_id="gedi-subset_ubuntu",
version="<VERSION>",
queue="maap-dps-worker-8gb",
username="<USERNAME>", # Your Earthdata Login username
aoi=aoi,
columns="-",
query="-",
limit=limit,
)
print(result["job_id"])
```
Add the ```columns``` and ```query``` parameters in the code snippet located under [Running a GEDI Subsetting DPS Job](https://github.com/MAAP-Project/maap-documentation-examples/tree/main/gedi-subset#running-a-gedi-subsetting-dps-job) to reflect the changes to the [Algorithm Inputs](https://github.com/MAAP-Project/maap-documentation-examples/tree/main/gedi-subset#algorithm-inputs) section.
|
1.0
|
Update "Running a GEDI Subsetting DPS Job" Code Example - ```
import uuid
from maap.maap import MAAP
maap = MAAP(maap_host='api.ops.maap-project.org')
aoi = "<AOI GeoJSON URL>" # See previous section
limit = 2000 # Maximum number of granule files to download
result = maap.submitJob(
identifier="<DESCRIPTION>",
algo_id="gedi-subset_ubuntu",
version="<VERSION>",
queue="maap-dps-worker-8gb",
username="<USERNAME>", # Your Earthdata Login username
aoi=aoi,
columns="-",
query="-",
limit=limit,
)
print(result["job_id"])
```
Add the ```columns``` and ```query``` parameters in the code snippet located under [Running a GEDI Subsetting DPS Job](https://github.com/MAAP-Project/maap-documentation-examples/tree/main/gedi-subset#running-a-gedi-subsetting-dps-job) to reflect the changes to the [Algorithm Inputs](https://github.com/MAAP-Project/maap-documentation-examples/tree/main/gedi-subset#algorithm-inputs) section.
|
non_defect
|
update running a gedi subsetting dps job code example import uuid from maap maap import maap maap maap maap host api ops maap project org aoi see previous section limit maximum number of granule files to download result maap submitjob identifier algo id gedi subset ubuntu version queue maap dps worker username your earthdata login username aoi aoi columns query limit limit print result add the columns and query parameters in the code snippet located under to reflect the changes to the section
| 0
|
268,193
| 28,565,808,030
|
IssuesEvent
|
2023-04-21 01:56:57
|
turkdevops/node
|
https://api.github.com/repos/turkdevops/node
|
closed
|
WS-2022-0322 (High) detected in d3-color-1.2.3.tgz - autoclosed
|
Mend: dependency security vulnerability
|
## WS-2022-0322 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>d3-color-1.2.3.tgz</b></p></summary>
<p>Color spaces! RGB, HSL, Cubehelix, Lab and HCL (Lch).</p>
<p>Library home page: <a href="https://registry.npmjs.org/d3-color/-/d3-color-1.2.3.tgz">https://registry.npmjs.org/d3-color/-/d3-color-1.2.3.tgz</a></p>
<p>Path to dependency file: /deps/v8/tools/turbolizer/package.json</p>
<p>Path to vulnerable library: /deps/v8/tools/turbolizer/node_modules/d3-color/package.json</p>
<p>
Dependency Hierarchy:
- d3-5.7.0.tgz (Root Library)
- :x: **d3-color-1.2.3.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The d3-color module provides representations for various color spaces in the browser. Versions prior to 3.1.0 are vulnerable to a Regular expression Denial of Service. This issue has been patched in version 3.1.0. There are no known workarounds.
<p>Publish Date: 2022-09-29
<p>URL: <a href=https://github.com/d3/d3-color/commit/994d8fd95181484a5a27c5edc919aa625781432d>WS-2022-0322</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-36jr-mh4h-2g58">https://github.com/advisories/GHSA-36jr-mh4h-2g58</a></p>
<p>Release Date: 2022-09-29</p>
<p>Fix Resolution (d3-color): 3.1.0</p>
<p>Direct dependency fix Resolution (d3): 7.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2022-0322 (High) detected in d3-color-1.2.3.tgz - autoclosed - ## WS-2022-0322 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>d3-color-1.2.3.tgz</b></p></summary>
<p>Color spaces! RGB, HSL, Cubehelix, Lab and HCL (Lch).</p>
<p>Library home page: <a href="https://registry.npmjs.org/d3-color/-/d3-color-1.2.3.tgz">https://registry.npmjs.org/d3-color/-/d3-color-1.2.3.tgz</a></p>
<p>Path to dependency file: /deps/v8/tools/turbolizer/package.json</p>
<p>Path to vulnerable library: /deps/v8/tools/turbolizer/node_modules/d3-color/package.json</p>
<p>
Dependency Hierarchy:
- d3-5.7.0.tgz (Root Library)
- :x: **d3-color-1.2.3.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The d3-color module provides representations for various color spaces in the browser. Versions prior to 3.1.0 are vulnerable to a Regular expression Denial of Service. This issue has been patched in version 3.1.0. There are no known workarounds.
<p>Publish Date: 2022-09-29
<p>URL: <a href=https://github.com/d3/d3-color/commit/994d8fd95181484a5a27c5edc919aa625781432d>WS-2022-0322</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-36jr-mh4h-2g58">https://github.com/advisories/GHSA-36jr-mh4h-2g58</a></p>
<p>Release Date: 2022-09-29</p>
<p>Fix Resolution (d3-color): 3.1.0</p>
<p>Direct dependency fix Resolution (d3): 7.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
ws high detected in color tgz autoclosed ws high severity vulnerability vulnerable library color tgz color spaces rgb hsl cubehelix lab and hcl lch library home page a href path to dependency file deps tools turbolizer package json path to vulnerable library deps tools turbolizer node modules color package json dependency hierarchy tgz root library x color tgz vulnerable library found in base branch master vulnerability details the color module provides representations for various color spaces in the browser versions prior to are vulnerable to a regular expression denial of service this issue has been patched in version there are no known workarounds publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution color direct dependency fix resolution step up your open source security game with mend
| 0
|
135,084
| 5,241,850,587
|
IssuesEvent
|
2017-01-31 16:38:11
|
odalic/odalic-ui
|
https://api.github.com/repos/odalic/odalic-ui
|
closed
|
implement user management on the client side
|
in progress priority: High
|
- First do only some research on the topic. Expected solution should be based on token-based authentication. Check satellizer. https://grips.semantic-web.at/login.action?os_destination=%2Fpages%2Fviewpage.action%3FpageId%3D76226993&permissionViolation=true
Done:
- rest requests extended with token (automatically injected to $http via satellizer)
- if not logged and a privileged action occurs (e.g. retrieve files) - automatic redirection (via custom interceptor)
- sign up screen (functionality)
- login screen (functionality)
- logout button
- password change + confirmation token
- displaying login/signup buttons if not logged in, displaying logout button if logged in (extend json with "condition: "$scope.logged"" or something like that for each button, and test it)
Remains:
- testing
|
1.0
|
implement user management on the client side - - First do only some research on the topic. Expected solution should be based on token-based authentication. Check satellizer. https://grips.semantic-web.at/login.action?os_destination=%2Fpages%2Fviewpage.action%3FpageId%3D76226993&permissionViolation=true
Done:
- rest requests extended with token (automatically injected to $http via satellizer)
- if not logged and a privileged action occurs (e.g. retrieve files) - automatic redirection (via custom interceptor)
- sign up screen (functionality)
- login screen (functionality)
- logout button
- password change + confirmation token
- displaying login/signup buttons if not logged in, displaying logout button if logged in (extend json with "condition: "$scope.logged"" or something like that for each button, and test it)
Remains:
- testing
|
non_defect
|
implement user management on the client side first do only some research on the topic expected solution should be based on token based authentication check satellizer done rest requests extended with token automatically injected to http via satellizer if not logged and a privileged action occurs e g retrieve files automatic redirection via custom interceptor sign up screen functionality login screen functionality logout button password change confirmation token displaying login signup buttons if not logged in displaying logout button if logged in extend json with condition scope logged or something like that for each button and test it remains testing
| 0
|
433,173
| 12,503,213,568
|
IssuesEvent
|
2020-06-02 06:45:36
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
us12.campaign-archive.com - see bug description
|
browser-focus-geckoview engine-gecko priority-normal
|
<!-- @browser: Firefox Mobile 76.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.0; Mobile; rv:76.0) Gecko/76.0 Firefox/76.0 -->
<!-- @reported_with: -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/53489 -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://us12.campaign-archive.com/home/?u=90074e93f1efe14ac0bdb3639
**Browser / Version**: Firefox Mobile 76.0
**Operating System**: Android 7.0
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: Error
**Steps to Reproduce**:
Error image
Server status
We couldn't process your request at this time. Please try again later. If you are seeing this message repeatedly, please contact Support with the following information:
ip:
date: Fri May 29 2020 16:46:15 GMT+0200 (GMT+02:00)
url: https://us12.campaign-archive.com/home/?u=90074e93f1efe14ac0bdb3639&id=5b0d0a0efe
user agent: Mozilla/5.0 (Android 7.0; Mobile; rv:76.0) Gecko/76.0 Firefox/76.0
We're sorry for the inconvenience and appreciate your patience while we get everything straightened out.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
us12.campaign-archive.com - see bug description - <!-- @browser: Firefox Mobile 76.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.0; Mobile; rv:76.0) Gecko/76.0 Firefox/76.0 -->
<!-- @reported_with: -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/53489 -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://us12.campaign-archive.com/home/?u=90074e93f1efe14ac0bdb3639
**Browser / Version**: Firefox Mobile 76.0
**Operating System**: Android 7.0
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: Error
**Steps to Reproduce**:
Error image
Server status
We couldn't process your request at this time. Please try again later. If you are seeing this message repeatedly, please contact Support with the following information:
ip:
date: Fri May 29 2020 16:46:15 GMT+0200 (GMT+02:00)
url: https://us12.campaign-archive.com/home/?u=90074e93f1efe14ac0bdb3639&id=5b0d0a0efe
user agent: Mozilla/5.0 (Android 7.0; Mobile; rv:76.0) Gecko/76.0 Firefox/76.0
We're sorry for the inconvenience and appreciate your patience while we get everything straightened out.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_defect
|
campaign archive com see bug description url browser version firefox mobile operating system android tested another browser no problem type something else description error steps to reproduce error image server status we couldn t process your request at this time please try again later if you are seeing this message repeatedly please contact support with the following information ip date fri may gmt gmt url user agent mozilla android mobile rv gecko firefox we re sorry for the inconvenience and appreciate your patience while we get everything straightened out browser configuration none from with ❤️
| 0
|
117,741
| 25,190,460,431
|
IssuesEvent
|
2022-11-11 23:48:04
|
iree-org/iree
|
https://api.github.com/repos/iree-org/iree
|
closed
|
Optimize compile times
|
bug 🐞 help wanted codegen
|
Trying to compile https://storage.googleapis.com/shark-public/18B.mlir
It is mostly single threaded. Test run on a n2-highmem-96 GCP
````
(iree-samples.venv) anush@anush-large-nvme:~/data/anush/iree-samples/ModelCompiler/nlp_models$ time /home/anush/data/anush/iree-samples/iree-samples.venv/lib/python3.9/site-packages/iree/compiler/tools/../_mlir_libs/iree-compile --iree-input-type=mhlo --iree-vm-bytecode-module-output-format=flatbuffer-binary --iree-hal-target-backends=dylib-llvm-aot --iree-mlir-to-vm-bytecode-module --iree-llvm-embedded-linker-path=/home/anush/data/anush/iree-samples/iree-samples.venv/lib/python3.9/site-packages/iree/compiler/tools/../_mlir_libs/iree-lld --mlir-print-debuginfo --mlir-print-op-on-diagnostic=false --iree-llvm-target-cpu-features=host --iree-mhlo-demote-i64-to-i32=false --iree-stream-resource-index-bits=64 --iree-vm-target-index-bits=64 ./18B.mlir -o 18B.vmfb
real 67m59.146s
user 67m13.634s
sys 2m5.269s
````
|
1.0
|
Optimize compile times - Trying to compile https://storage.googleapis.com/shark-public/18B.mlir
It is mostly single threaded. Test run on a n2-highmem-96 GCP
````
(iree-samples.venv) anush@anush-large-nvme:~/data/anush/iree-samples/ModelCompiler/nlp_models$ time /home/anush/data/anush/iree-samples/iree-samples.venv/lib/python3.9/site-packages/iree/compiler/tools/../_mlir_libs/iree-compile --iree-input-type=mhlo --iree-vm-bytecode-module-output-format=flatbuffer-binary --iree-hal-target-backends=dylib-llvm-aot --iree-mlir-to-vm-bytecode-module --iree-llvm-embedded-linker-path=/home/anush/data/anush/iree-samples/iree-samples.venv/lib/python3.9/site-packages/iree/compiler/tools/../_mlir_libs/iree-lld --mlir-print-debuginfo --mlir-print-op-on-diagnostic=false --iree-llvm-target-cpu-features=host --iree-mhlo-demote-i64-to-i32=false --iree-stream-resource-index-bits=64 --iree-vm-target-index-bits=64 ./18B.mlir -o 18B.vmfb
real 67m59.146s
user 67m13.634s
sys 2m5.269s
````
|
non_defect
|
optimize compile times trying to compile it is mostly single threaded test run on a highmem gcp iree samples venv anush anush large nvme data anush iree samples modelcompiler nlp models time home anush data anush iree samples iree samples venv lib site packages iree compiler tools mlir libs iree compile iree input type mhlo iree vm bytecode module output format flatbuffer binary iree hal target backends dylib llvm aot iree mlir to vm bytecode module iree llvm embedded linker path home anush data anush iree samples iree samples venv lib site packages iree compiler tools mlir libs iree lld mlir print debuginfo mlir print op on diagnostic false iree llvm target cpu features host iree mhlo demote to false iree stream resource index bits iree vm target index bits mlir o vmfb real user sys
| 0
|
69,912
| 22,747,014,209
|
IssuesEvent
|
2022-07-07 10:02:11
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
closed
|
No line change after pressing enter key.
|
T-Defect
|
### Steps to reproduce
1. I am working with Element Web? I wish to change lines between my paragaraphs or texts which I send ?
2. Upon pressing enter , it doesnot change a line but sends the text?
3. More steps…
### Outcome
#### I expected to have a line change.
#### The text was sent.
### Operating system
Windows
### Browser information
Google Chrome
### URL for webapp
https://app.element.io/?pk_vid=7f9093a31c6495bd1657086423fecff9#/room/!TBbOijmDPCVCuwlYLW:matrix.org
### Application version
_No response_
### Homeserver
_No response_
### Will you send logs?
No
|
1.0
|
No line change after pressing enter key. - ### Steps to reproduce
1. I am working with Element Web? I wish to change lines between my paragaraphs or texts which I send ?
2. Upon pressing enter , it doesnot change a line but sends the text?
3. More steps…
### Outcome
#### I expected to have a line change.
#### The text was sent.
### Operating system
Windows
### Browser information
Google Chrome
### URL for webapp
https://app.element.io/?pk_vid=7f9093a31c6495bd1657086423fecff9#/room/!TBbOijmDPCVCuwlYLW:matrix.org
### Application version
_No response_
### Homeserver
_No response_
### Will you send logs?
No
|
defect
|
no line change after pressing enter key steps to reproduce i am working with element web i wish to change lines between my paragaraphs or texts which i send upon pressing enter it doesnot change a line but sends the text more steps… outcome i expected to have a line change the text was sent operating system windows browser information google chrome url for webapp application version no response homeserver no response will you send logs no
| 1
|
99,820
| 12,479,248,329
|
IssuesEvent
|
2020-05-29 17:54:50
|
angular/angular
|
https://api.github.com/repos/angular/angular
|
closed
|
HttpParams gets converted to space for + sign, Any fix to resolve this issue
|
comp: common/http design complexity: low-hanging flag: breaking change state: confirmed triage #1 type: bug/fix
|
navigateToDashboard(ngForm: NgForm){
var hash = CryptoJS.SHA1(ngForm.value.password);
var txtPassBase64 = hash.toString(CryptoJS.enc.Base64);
var txtPassHexValue = hash.toString(CryptoJS.enc.Hex);
hash = CryptoJS.MD5(ngForm.value.password);
let txtMD5Base64 = hash.toString(CryptoJS.enc.Base64)
const body = new HttpParams()
.set('txtUserId', '10015625')
.set('txtMD5Base64', txtMD5Base64)//4QrcOUm6Wau+VuBX8g+IPg==
.set('language', 'E');
console.log("body",body);
this.httpClient.post(this.postUrl,body,this.httpOptions).subscribe(() => {
console.log("Hi");
})
}
The final txtMD5Base64 loooks like 4QrcOUm6Wau VuBX8g IPg==(plus symbols are replaced with
space). It should look like this while submitting a request => 4QrcOUm6Wau+VuBX8g+IPg== but looks like this in the HttpParams => 4QrcOUm6Wau VuBX8g IPg==. Its a complete nightmare for me to resolve the issue. Please suggest a solution for this
|
1.0
|
HttpParams gets converted to space for + sign, Any fix to resolve this issue - navigateToDashboard(ngForm: NgForm){
var hash = CryptoJS.SHA1(ngForm.value.password);
var txtPassBase64 = hash.toString(CryptoJS.enc.Base64);
var txtPassHexValue = hash.toString(CryptoJS.enc.Hex);
hash = CryptoJS.MD5(ngForm.value.password);
let txtMD5Base64 = hash.toString(CryptoJS.enc.Base64)
const body = new HttpParams()
.set('txtUserId', '10015625')
.set('txtMD5Base64', txtMD5Base64)//4QrcOUm6Wau+VuBX8g+IPg==
.set('language', 'E');
console.log("body",body);
this.httpClient.post(this.postUrl,body,this.httpOptions).subscribe(() => {
console.log("Hi");
})
}
The final txtMD5Base64 loooks like 4QrcOUm6Wau VuBX8g IPg==(plus symbols are replaced with
space). It should look like this while submitting a request => 4QrcOUm6Wau+VuBX8g+IPg== but looks like this in the HttpParams => 4QrcOUm6Wau VuBX8g IPg==. Its a complete nightmare for me to resolve the issue. Please suggest a solution for this
|
non_defect
|
httpparams gets converted to space for sign any fix to resolve this issue navigatetodashboard ngform ngform var hash cryptojs ngform value password var hash tostring cryptojs enc var txtpasshexvalue hash tostring cryptojs enc hex hash cryptojs ngform value password let hash tostring cryptojs enc const body new httpparams set txtuserid set ipg set language e console log body body this httpclient post this posturl body this httpoptions subscribe console log hi the final loooks like ipg plus symbols are replaced with space it should look like this while submitting a request ipg but looks like this in the httpparams ipg its a complete nightmare for me to resolve the issue please suggest a solution for this
| 0
|
57,940
| 16,176,524,764
|
IssuesEvent
|
2021-05-03 07:48:59
|
PowerDNS/pdns
|
https://api.github.com/repos/PowerDNS/pdns
|
closed
|
dnsdist: DynBlockGroupRules : eBPF block doesn't clear when ban expires
|
defect dnsdist
|
- Program: dnsdist
- Issue type: Bug report
- Operating system: Debian buster
- Software version: dnsdist-1.6.0-rc1
- Deb package
### Short description
Follow up to https://github.com/PowerDNS/pdns/pull/9782
Since 1.6 ,when dnsdist creates a DynBlock, I understand it also creates an eBPF block in the same time.
When the user space dynblock expires, the bpf block seems to remain and keeps blocking incoming queries.
### Steps to reproduce
Conf to reproduce :
```
bpf = newBPFFilter(1048576, 1048576, 32)
setDefaultBPFFilter(bpf)
local dbr = dynBlockRulesGroup()
dbr:setQueryRate(10, 1, "Exceeded query rate", 60)
function maintenance()
dbr:apply()
end
```
### Expected behaviour
When a dynblock is created for an IP, for the next 60 seconds, I expect to see :
- `showDynblocks()` showing 1 block
- `bpf:getStats()` showing 1 block too.
- On the dnsdist web UI, I should see 1 IP in the "**Dyn blocked netmask**" field and that same IP in the "**Kernel-based dyn blocked netmask**" field
After 60 seconds, these blocks should be removed if the source IP doesn't generate more than 10 qps
### Actual behaviour
I actually see :
- `showDynblocks()` showing 1 block
- `bpf:getStats()` showing 1 block too.
- On the dnsdist web UI, the blocked IP is only displayed in the "**Dyn blocked netmask**" field
After 60 seconds:
- showDynblocks() shows no more dynblock - that makes sense
- bpf:getStats() keeps showing that 1 block too.
Queries are blocked because of that remaining eBPF dynblock too.
This eBPF block is usually cleared ~4/5 minutes after though, Is there a minimal value hardcoded somewhere here ?
|
1.0
|
dnsdist: DynBlockGroupRules : eBPF block doesn't clear when ban expires - - Program: dnsdist
- Issue type: Bug report
- Operating system: Debian buster
- Software version: dnsdist-1.6.0-rc1
- Deb package
### Short description
Follow up to https://github.com/PowerDNS/pdns/pull/9782
Since 1.6 ,when dnsdist creates a DynBlock, I understand it also creates an eBPF block in the same time.
When the user space dynblock expires, the bpf block seems to remain and keeps blocking incoming queries.
### Steps to reproduce
Conf to reproduce :
```
bpf = newBPFFilter(1048576, 1048576, 32)
setDefaultBPFFilter(bpf)
local dbr = dynBlockRulesGroup()
dbr:setQueryRate(10, 1, "Exceeded query rate", 60)
function maintenance()
dbr:apply()
end
```
### Expected behaviour
When a dynblock is created for an IP, for the next 60 seconds, I expect to see :
- `showDynblocks()` showing 1 block
- `bpf:getStats()` showing 1 block too.
- On the dnsdist web UI, I should see 1 IP in the "**Dyn blocked netmask**" field and that same IP in the "**Kernel-based dyn blocked netmask**" field
After 60 seconds, these blocks should be removed if the source IP doesn't generate more than 10 qps
### Actual behaviour
I actually see :
- `showDynblocks()` showing 1 block
- `bpf:getStats()` showing 1 block too.
- On the dnsdist web UI, the blocked IP is only displayed in the "**Dyn blocked netmask**" field
After 60 seconds:
- showDynblocks() shows no more dynblock - that makes sense
- bpf:getStats() keeps showing that 1 block too.
Queries are blocked because of that remaining eBPF dynblock too.
This eBPF block is usually cleared ~4/5 minutes after though, Is there a minimal value hardcoded somewhere here ?
|
defect
|
dnsdist dynblockgrouprules ebpf block doesn t clear when ban expires program dnsdist issue type bug report operating system debian buster software version dnsdist deb package short description follow up to since when dnsdist creates a dynblock i understand it also creates an ebpf block in the same time when the user space dynblock expires the bpf block seems to remain and keeps blocking incoming queries steps to reproduce conf to reproduce bpf newbpffilter setdefaultbpffilter bpf local dbr dynblockrulesgroup dbr setqueryrate exceeded query rate function maintenance dbr apply end expected behaviour when a dynblock is created for an ip for the next seconds i expect to see showdynblocks showing block bpf getstats showing block too on the dnsdist web ui i should see ip in the dyn blocked netmask field and that same ip in the kernel based dyn blocked netmask field after seconds these blocks should be removed if the source ip doesn t generate more than qps actual behaviour i actually see showdynblocks showing block bpf getstats showing block too on the dnsdist web ui the blocked ip is only displayed in the dyn blocked netmask field after seconds showdynblocks shows no more dynblock that makes sense bpf getstats keeps showing that block too queries are blocked because of that remaining ebpf dynblock too this ebpf block is usually cleared minutes after though is there a minimal value hardcoded somewhere here
| 1
|
25,824
| 4,467,824,753
|
IssuesEvent
|
2016-08-25 07:04:45
|
primefaces/primefaces
|
https://api.github.com/repos/primefaces/primefaces
|
closed
|
Panel doesn't open with ALT+Down arrow keys on SelectOneMenu
|
5.3.17 6.0.4 defect
|
SelectOneMenu component doesn't show behavior of html <select> tag with ALT+Down arrow keys.
|
1.0
|
Panel doesn't open with ALT+Down arrow keys on SelectOneMenu - SelectOneMenu component doesn't show behavior of html <select> tag with ALT+Down arrow keys.
|
defect
|
panel doesn t open with alt down arrow keys on selectonemenu selectonemenu component doesn t show behavior of html tag with alt down arrow keys
| 1
|
30,094
| 14,403,595,563
|
IssuesEvent
|
2020-12-03 16:14:50
|
pingcap/tidb
|
https://api.github.com/repos/pingcap/tidb
|
closed
|
Support conversion from JSON types to others
|
challenge-program high-performance picked sig/DDL
|
## Description
### Background
With the development of the implementation of column type change, we have broken the process into two phases.
1. implement the architecture and column type change between the same type. [phase 1](https://github.com/pingcap/tidb/issues/19116)
2. implement the column type change between the different types and some TODO lists. [phase 2](https://github.com/pingcap/tidb/issues/19939)
### Problem
Json Types includes:
- Json
...
Support conversion from Time types to others
For example:
- Json to string
...
### Solution
Generally speaking, these changes may all need the reorganize the data, you can refer to the previous issue in phase 1 for the implementation of architecture details.
In the process of your development, maybe you should also take the SQL Mode into consideration, handling some warnings or errors.
You can port some MySQL tests into TiDB for compatibility tests.
### Score
- 600
### Mentor
- AilinKid
## Recommended Skills
[TiDB DDL architecture](https://github.com/pingcap/tidb/blob/master/docs/design/2018-10-08-online-DDL.md)
|
True
|
Support conversion from JSON types to others - ## Description
### Background
With the development of the implementation of column type change, we have broken the process into two phases.
1. implement the architecture and column type change between the same type. [phase 1](https://github.com/pingcap/tidb/issues/19116)
2. implement the column type change between the different types and some TODO lists. [phase 2](https://github.com/pingcap/tidb/issues/19939)
### Problem
Json Types includes:
- Json
...
Support conversion from Time types to others
For example:
- Json to string
...
### Solution
Generally speaking, these changes may all need the reorganize the data, you can refer to the previous issue in phase 1 for the implementation of architecture details.
In the process of your development, maybe you should also take the SQL Mode into consideration, handling some warnings or errors.
You can port some MySQL tests into TiDB for compatibility tests.
### Score
- 600
### Mentor
- AilinKid
## Recommended Skills
[TiDB DDL architecture](https://github.com/pingcap/tidb/blob/master/docs/design/2018-10-08-online-DDL.md)
|
non_defect
|
support conversion from json types to others description background with the development of the implementation of column type change we have broken the process into two phases implement the architecture and column type change between the same type implement the column type change between the different types and some todo lists problem json types includes json support conversion from time types to others for example json to string solution generally speaking these changes may all need the reorganize the data you can refer to the previous issue in phase for the implementation of architecture details in the process of your development maybe you should also take the sql mode into consideration handling some warnings or errors you can port some mysql tests into tidb for compatibility tests score mentor ailinkid recommended skills
| 0
|
66,328
| 20,153,328,356
|
IssuesEvent
|
2022-02-09 14:25:29
|
idaholab/raven
|
https://api.github.com/repos/idaholab/raven
|
opened
|
[DEFECT] Some EconomicRatio Metrics
|
priority_normal defect
|
### Thank you for the defect report
- [X] I am using the latest version of `RAVEN`.
- [X] I have read the [Wiki](https://github.com/idaholab/raven/wiki).
- [X] I have created a [minimum, reproducible example](https://stackoverflow.com/help/minimal-reproducible-example)
that demonstrates the defect.
### Defect Description
Requesting sortinoRatio or gainLossRatio from the EconomicRatio PostProcessor with inputs/outputs containing a pivotParameter (HistorySet or DataSet) result in an error. All other metrics (sharpeRatio, expectedShortfall, and valueAtRisk) can handle these inputs.
### Steps to Reproduce
Use HistorySet data as input to EconomicRatio PostProcessor and request sortinoRatio or gainLossRatio metrics.
### Expected Behavior
Calculate the requested metrics and return them as HistorySet or DataSet.
### Screenshots and Input Files
_No response_
### OS
Windows
### OS Version
_No response_
### Dependency Manager
CONDA
### For Change Control Board: Issue Review
- [X] Is it tagged with a type: defect or task?
- [X] Is it tagged with a priority: critical, normal or minor?
- [X] If it will impact requirements or requirements tests, is it tagged with requirements?
- [X] If it is a defect, can it cause wrong results for users? If so an email needs to be sent to the users.
- [X] Is a rationale provided? (Such as explaining why the improvement is needed or why current code is wrong.)
### For Change Control Board: Issue Closure
- [ ] If the issue is a defect, is the defect fixed?
- [ ] If the issue is a defect, is the defect tested for in the regression test system? (If not explain why not.)
- [ ] If the issue can impact users, has an email to the users group been written (the email should specify if the defect impacts stable or master)?
- [ ] If the issue is a defect, does it impact the latest release branch? If yes, is there any issue tagged with release (create if needed)?
- [ ] If the issue is being closed without a pull request, has an explanation of why it is being closed been provided?
|
1.0
|
[DEFECT] Some EconomicRatio Metrics - ### Thank you for the defect report
- [X] I am using the latest version of `RAVEN`.
- [X] I have read the [Wiki](https://github.com/idaholab/raven/wiki).
- [X] I have created a [minimum, reproducible example](https://stackoverflow.com/help/minimal-reproducible-example)
that demonstrates the defect.
### Defect Description
Requesting sortinoRatio or gainLossRatio from the EconomicRatio PostProcessor with inputs/outputs containing a pivotParameter (HistorySet or DataSet) result in an error. All other metrics (sharpeRatio, expectedShortfall, and valueAtRisk) can handle these inputs.
### Steps to Reproduce
Use HistorySet data as input to EconomicRatio PostProcessor and request sortinoRatio or gainLossRatio metrics.
### Expected Behavior
Calculate the requested metrics and return them as HistorySet or DataSet.
### Screenshots and Input Files
_No response_
### OS
Windows
### OS Version
_No response_
### Dependency Manager
CONDA
### For Change Control Board: Issue Review
- [X] Is it tagged with a type: defect or task?
- [X] Is it tagged with a priority: critical, normal or minor?
- [X] If it will impact requirements or requirements tests, is it tagged with requirements?
- [X] If it is a defect, can it cause wrong results for users? If so an email needs to be sent to the users.
- [X] Is a rationale provided? (Such as explaining why the improvement is needed or why current code is wrong.)
### For Change Control Board: Issue Closure
- [ ] If the issue is a defect, is the defect fixed?
- [ ] If the issue is a defect, is the defect tested for in the regression test system? (If not explain why not.)
- [ ] If the issue can impact users, has an email to the users group been written (the email should specify if the defect impacts stable or master)?
- [ ] If the issue is a defect, does it impact the latest release branch? If yes, is there any issue tagged with release (create if needed)?
- [ ] If the issue is being closed without a pull request, has an explanation of why it is being closed been provided?
|
defect
|
some economicratio metrics thank you for the defect report i am using the latest version of raven i have read the i have created a that demonstrates the defect defect description requesting sortinoratio or gainlossratio from the economicratio postprocessor with inputs outputs containing a pivotparameter historyset or dataset result in an error all other metrics sharperatio expectedshortfall and valueatrisk can handle these inputs steps to reproduce use historyset data as input to economicratio postprocessor and request sortinoratio or gainlossratio metrics expected behavior calculate the requested metrics and return them as historyset or dataset screenshots and input files no response os windows os version no response dependency manager conda for change control board issue review is it tagged with a type defect or task is it tagged with a priority critical normal or minor if it will impact requirements or requirements tests is it tagged with requirements if it is a defect can it cause wrong results for users if so an email needs to be sent to the users is a rationale provided such as explaining why the improvement is needed or why current code is wrong for change control board issue closure if the issue is a defect is the defect fixed if the issue is a defect is the defect tested for in the regression test system if not explain why not if the issue can impact users has an email to the users group been written the email should specify if the defect impacts stable or master if the issue is a defect does it impact the latest release branch if yes is there any issue tagged with release create if needed if the issue is being closed without a pull request has an explanation of why it is being closed been provided
| 1
|
73,740
| 24,782,382,375
|
IssuesEvent
|
2022-10-24 06:50:26
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
closed
|
Room list not updating unread state with new messages, again
|
T-Defect X-Cannot-Reproduce X-Regression S-Major A-Room-List O-Uncommon A-Threads Z-ThreadsNotifications
|
### Steps to reproduce
Unclear. Here are the observed steps:
1. Mark rooms as read
2. Receive message in encrypted room
3. Room moves up as "active", but doesn't get an unread count/badge/status
### Outcome
#### What did you expect?
The room to behave and mark itself correctly as unread (via count or via bold status)
#### What happened instead?
A repeat of https://github.com/vector-im/element-web/issues/20859
### Operating system
Windows 10
### Application version
Nightly (2022-03-14 patch 01)
### How did you install the app?
The Internet
### Homeserver
t2l.io
### Will you send logs?
Yes
|
1.0
|
Room list not updating unread state with new messages, again - ### Steps to reproduce
Unclear. Here are the observed steps:
1. Mark rooms as read
2. Receive message in encrypted room
3. Room moves up as "active", but doesn't get an unread count/badge/status
### Outcome
#### What did you expect?
The room to behave and mark itself correctly as unread (via count or via bold status)
#### What happened instead?
A repeat of https://github.com/vector-im/element-web/issues/20859
### Operating system
Windows 10
### Application version
Nightly (2022-03-14 patch 01)
### How did you install the app?
The Internet
### Homeserver
t2l.io
### Will you send logs?
Yes
|
defect
|
room list not updating unread state with new messages again steps to reproduce unclear here are the observed steps mark rooms as read receive message in encrypted room room moves up as active but doesn t get an unread count badge status outcome what did you expect the room to behave and mark itself correctly as unread via count or via bold status what happened instead a repeat of operating system windows application version nightly patch how did you install the app the internet homeserver io will you send logs yes
| 1
|
110,546
| 9,460,289,277
|
IssuesEvent
|
2019-04-17 10:34:05
|
HGustavs/LenaSYS
|
https://api.github.com/repos/HGustavs/LenaSYS
|
opened
|
Dropdown menu buttons gets error on accessed.php
|
W17_test
|


Hovering over these dropdown menu buttons results in these errors. Might be a consequence to issue #6027
|
1.0
|
Dropdown menu buttons gets error on accessed.php -


Hovering over these dropdown menu buttons results in these errors. Might be a consequence to issue #6027
|
non_defect
|
dropdown menu buttons gets error on accessed php hovering over these dropdown menu buttons results in these errors might be a consequence to issue
| 0
|
21,845
| 3,573,179,133
|
IssuesEvent
|
2016-01-27 04:12:40
|
gperftools/gperftools
|
https://api.github.com/repos/gperftools/gperftools
|
closed
|
Profiling timer always armed on initialization
|
Priority-Medium Status-Accepted Type-Defect
|
Originally reported on Google Code with ID 406
```
The code currently arms the profiling timer unconditionally in ProfileHandler::RegisterThread
for timer sharing detection. Unfortunately, this means the profiling timer will stay
active, even if profiling is inactive and we have shared timers in case if no other
threads get registered later.
This poses a problem for Chromium, which links the profiling code into the executable.
See http://code.google.com/p/chromium/issues/detail?id=115149
```
Reported by `mnissler@google.com` on 2012-02-21 20:50:07
|
1.0
|
Profiling timer always armed on initialization - Originally reported on Google Code with ID 406
```
The code currently arms the profiling timer unconditionally in ProfileHandler::RegisterThread
for timer sharing detection. Unfortunately, this means the profiling timer will stay
active, even if profiling is inactive and we have shared timers in case if no other
threads get registered later.
This poses a problem for Chromium, which links the profiling code into the executable.
See http://code.google.com/p/chromium/issues/detail?id=115149
```
Reported by `mnissler@google.com` on 2012-02-21 20:50:07
|
defect
|
profiling timer always armed on initialization originally reported on google code with id the code currently arms the profiling timer unconditionally in profilehandler registerthread for timer sharing detection unfortunately this means the profiling timer will stay active even if profiling is inactive and we have shared timers in case if no other threads get registered later this poses a problem for chromium which links the profiling code into the executable see reported by mnissler google com on
| 1
|
6,198
| 6,229,434,114
|
IssuesEvent
|
2017-07-11 03:53:28
|
twosigma/beakerx
|
https://api.github.com/repos/twosigma/beakerx
|
opened
|
replace single jars with directories of class files?
|
Infrastructure Kernel
|
would make it easier to add a jar to the app (eg for SQL drivers).
and might make the build go faster.
|
1.0
|
replace single jars with directories of class files? - would make it easier to add a jar to the app (eg for SQL drivers).
and might make the build go faster.
|
non_defect
|
replace single jars with directories of class files would make it easier to add a jar to the app eg for sql drivers and might make the build go faster
| 0
|
57,302
| 15,729,909,513
|
IssuesEvent
|
2021-03-29 15:20:37
|
danmar/testissues
|
https://api.github.com/repos/danmar/testissues
|
opened
|
false positive::resource leak (Trac #286)
|
False positive Incomplete Migration Migrated from Trac defect hyd_danmar
|
Migrated from https://trac.cppcheck.net/ticket/286
```json
{
"status": "closed",
"changetime": "2009-05-06T19:32:57",
"description": "Hello friends,\n\n{{{\n#include <stdio.h>\n#include <iostream>\n\nint main()\n{\n\tFILE *f;\n\tbool retval = false;\n\n\tif ((f = popen (\"test\", \"w\")) == NULL)\n\t \tstd::cout << \"error\" << std::endl;\n\telse \n\t\tretval &= (pclose (f) == 0);\n\t\n\treturn retval;\n}\n\n}}}\n\ncppcheck says:\n\n{{{\n\n$ cppcheck -a -q -j2 test.cpp\n[test.cpp:14]: (error) Resource leak: f\n}}}\n\nBut the resouce is freed with pclose().\n\n\npopen and pclose are here described:\nhttp://www.opengroup.org/onlinepubs/009695399/functions/popen.html\n\nBest regards\n\nMartin\n\n",
"reporter": "ettlmartin",
"cc": "",
"resolution": "fixed",
"_ts": "1241638377000000",
"component": "False positive",
"summary": "false positive::resource leak",
"priority": "",
"keywords": "",
"time": "2009-05-05T21:29:11",
"milestone": "1.32",
"owner": "hyd_danmar",
"type": "defect"
}
```
|
1.0
|
false positive::resource leak (Trac #286) - Migrated from https://trac.cppcheck.net/ticket/286
```json
{
"status": "closed",
"changetime": "2009-05-06T19:32:57",
"description": "Hello friends,\n\n{{{\n#include <stdio.h>\n#include <iostream>\n\nint main()\n{\n\tFILE *f;\n\tbool retval = false;\n\n\tif ((f = popen (\"test\", \"w\")) == NULL)\n\t \tstd::cout << \"error\" << std::endl;\n\telse \n\t\tretval &= (pclose (f) == 0);\n\t\n\treturn retval;\n}\n\n}}}\n\ncppcheck says:\n\n{{{\n\n$ cppcheck -a -q -j2 test.cpp\n[test.cpp:14]: (error) Resource leak: f\n}}}\n\nBut the resouce is freed with pclose().\n\n\npopen and pclose are here described:\nhttp://www.opengroup.org/onlinepubs/009695399/functions/popen.html\n\nBest regards\n\nMartin\n\n",
"reporter": "ettlmartin",
"cc": "",
"resolution": "fixed",
"_ts": "1241638377000000",
"component": "False positive",
"summary": "false positive::resource leak",
"priority": "",
"keywords": "",
"time": "2009-05-05T21:29:11",
"milestone": "1.32",
"owner": "hyd_danmar",
"type": "defect"
}
```
|
defect
|
false positive resource leak trac migrated from json status closed changetime description hello friends n n n include n include n nint main n n tfile f n tbool retval false n n tif f popen test w null n t tstd cout error std endl n telse n t tretval pclose f n t n treturn retval n n n n ncppcheck says n n n n cppcheck a q test cpp n error resource leak f n n nbut the resouce is freed with pclose n n npopen and pclose are here described n regards n nmartin n n reporter ettlmartin cc resolution fixed ts component false positive summary false positive resource leak priority keywords time milestone owner hyd danmar type defect
| 1
|
62,992
| 17,293,590,854
|
IssuesEvent
|
2021-07-25 09:11:17
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
opened
|
Call View UI/UX issues
|
A-VoIP P1 T-Defect X-Blocked
|
+ [ ] The colour of the buttons on OFF mode is wrong
+ [ ] The dial pad should not be shown for regular users
+ [ ] On hover, the controls disappear very quickly, can the transition be set to disappear less quickly? (see competitors) (also https://github.com/vector-im/element-web/issues/16751)
+ [ ] The header uses the wrong icons (see figma.com/file/V6m2z0oAtUV1l8MdyIrAep/VoIP?node-id=2904%3A48322)
Blocked on https://github.com/matrix-org/matrix-react-sdk/pull/5992
|
1.0
|
Call View UI/UX issues - + [ ] The colour of the buttons on OFF mode is wrong
+ [ ] The dial pad should not be shown for regular users
+ [ ] On hover, the controls disappear very quickly, can the transition be set to disappear less quickly? (see competitors) (also https://github.com/vector-im/element-web/issues/16751)
+ [ ] The header uses the wrong icons (see figma.com/file/V6m2z0oAtUV1l8MdyIrAep/VoIP?node-id=2904%3A48322)
Blocked on https://github.com/matrix-org/matrix-react-sdk/pull/5992
|
defect
|
call view ui ux issues the colour of the buttons on off mode is wrong the dial pad should not be shown for regular users on hover the controls disappear very quickly can the transition be set to disappear less quickly see competitors also the header uses the wrong icons see figma com file voip node id blocked on
| 1
|
31,513
| 6,543,167,723
|
IssuesEvent
|
2017-09-02 18:16:47
|
STEllAR-GROUP/hpx
|
https://api.github.com/repos/STEllAR-GROUP/hpx
|
closed
|
Returning a future<R> where R is not default-constructable broken
|
category: actions category: LCOs type: defect
|
@krivenko wrote (see #863):
Are return types of actions supposed to be default constructible?
Here is an extended version of `tests/unit/actions/return_future.cpp`
https://gist.github.com/krivenko/9e61c80e6adf4cd375af239f723dc752
`test_plain_call_future_non_default_ctor()` fails to compile with the following errors,
```
tuple.hpp:72:24: error: use of deleted function ‘non_default_ctor::non_default_ctor()’
```
```
base_lco_with_value.hpp:85:22: error: use of deleted function non_default_ctor::non_default_ctor()
```
|
1.0
|
Returning a future<R> where R is not default-constructable broken - @krivenko wrote (see #863):
Are return types of actions supposed to be default constructible?
Here is an extended version of `tests/unit/actions/return_future.cpp`
https://gist.github.com/krivenko/9e61c80e6adf4cd375af239f723dc752
`test_plain_call_future_non_default_ctor()` fails to compile with the following errors,
```
tuple.hpp:72:24: error: use of deleted function ‘non_default_ctor::non_default_ctor()’
```
```
base_lco_with_value.hpp:85:22: error: use of deleted function non_default_ctor::non_default_ctor()
```
|
defect
|
returning a future where r is not default constructable broken krivenko wrote see are return types of actions supposed to be default constructible here is an extended version of tests unit actions return future cpp test plain call future non default ctor fails to compile with the following errors tuple hpp error use of deleted function ‘non default ctor non default ctor ’ base lco with value hpp error use of deleted function non default ctor non default ctor
| 1
|
180,692
| 13,943,103,270
|
IssuesEvent
|
2020-10-22 22:18:34
|
rancher/rancher
|
https://api.github.com/repos/rancher/rancher
|
closed
|
Nginx-ingress-controller did not remove ingress ip after node is powered down
|
[zube]: To Test area/ingress internal kind/bug
|
**Rancher versions:**
rancher/rancher:cee0b2b
**Docker version: (`docker version`,`docker info` preferred)**
17.03.2-ce
**Operating system and kernel: (`cat /etc/os-release`, `uname -r` preferred)**
ubuntu 16.04
**Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO)**
AWS
**Steps to Reproduce:**
1. Add 2 worker nodes to the cluster
1. update `setting.ingress-ip-domain` to `lb.rancher.cloud`
2. Create ingress, then I can see two ingress ip
```
status:
loadBalancer:
ingress:
- ip: 172.31.10.135
- ip: 172.31.13.78
```
3. Shutdown one of the worker nodes (e.g. 172.31.13.78)
**Results:**
An hour later, I noticed that nginx-ingress-controller did not remove ingress ip 172.31.13.78
```
# kubectl get ing nginx-lb -o yaml
...
...
...
status:
loadBalancer:
ingress:
- ip: 172.31.10.135
- ip: 172.31.13.78
```
gzrancher/rancher#10761
|
1.0
|
Nginx-ingress-controller did not remove ingress ip after node is powered down - **Rancher versions:**
rancher/rancher:cee0b2b
**Docker version: (`docker version`,`docker info` preferred)**
17.03.2-ce
**Operating system and kernel: (`cat /etc/os-release`, `uname -r` preferred)**
ubuntu 16.04
**Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO)**
AWS
**Steps to Reproduce:**
1. Add 2 worker nodes to the cluster
1. update `setting.ingress-ip-domain` to `lb.rancher.cloud`
2. Create ingress, then I can see two ingress ip
```
status:
loadBalancer:
ingress:
- ip: 172.31.10.135
- ip: 172.31.13.78
```
3. Shutdown one of the worker nodes (e.g. 172.31.13.78)
**Results:**
An hour later, I noticed that nginx-ingress-controller did not remove ingress ip 172.31.13.78
```
# kubectl get ing nginx-lb -o yaml
...
...
...
status:
loadBalancer:
ingress:
- ip: 172.31.10.135
- ip: 172.31.13.78
```
gzrancher/rancher#10761
|
non_defect
|
nginx ingress controller did not remove ingress ip after node is powered down rancher versions rancher rancher docker version docker version docker info preferred ce operating system and kernel cat etc os release uname r preferred ubuntu type provider of hosts virtualbox bare metal aws gce do aws steps to reproduce add worker nodes to the cluster update setting ingress ip domain to lb rancher cloud create ingress then i can see two ingress ip status loadbalancer ingress ip ip shutdown one of the worker nodes e g results an hour later i noticed that nginx ingress controller did not remove ingress ip kubectl get ing nginx lb o yaml status loadbalancer ingress ip ip gzrancher rancher
| 0
|
3,039
| 2,607,970,159
|
IssuesEvent
|
2015-02-26 00:44:13
|
chrsmithdemos/leveldb
|
https://api.github.com/repos/chrsmithdemos/leveldb
|
opened
|
snappy
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1.i write a c++ program.
2.in linux ,g++ -o sa Main.cpp libleveldb.a -lpthread
What is the expected output? What do you see instead?
What version of the product are you using? On what operating system?
leveldb 1.14.0, red hat
Please provide any additional information below.
this is problem:
libleveldb.a(table_builder.o): In function
`leveldb::TableBuilder::WriteBlock(leveldb::BlockBuilder*,
leveldb::BlockHandle*)':
table_builder.cc:(.text+0x6a8): undefined reference to
`snappy::MaxCompressedLength(unsigned long)'
table_builder.cc:(.text+0x6e2): undefined reference to
`snappy::RawCompress(char const*, unsigned long, char*, unsigned long*)'
libleveldb.a(format.o): In function
`leveldb::ReadBlock(leveldb::RandomAccessFile*, leveldb::ReadOptions const&,
leveldb::BlockHandle const&, leveldb::BlockContents*)':
format.cc:(.text+0x5de): undefined reference to
`snappy::GetUncompressedLength(char const*, unsigned long, unsigned long*)'
format.cc:(.text+0x64e): undefined reference to `snappy::RawUncompress(char
const*, unsigned long, char*)'
collect2: ld returned 1 exit status
i think may be snappy problem,but i dont kown what to do
```
-----
Original issue reported on code.google.com by `wyy...@gmail.com` on 7 May 2014 at 1:32
|
1.0
|
snappy - ```
What steps will reproduce the problem?
1.i write a c++ program.
2.in linux ,g++ -o sa Main.cpp libleveldb.a -lpthread
What is the expected output? What do you see instead?
What version of the product are you using? On what operating system?
leveldb 1.14.0, red hat
Please provide any additional information below.
this is problem:
libleveldb.a(table_builder.o): In function
`leveldb::TableBuilder::WriteBlock(leveldb::BlockBuilder*,
leveldb::BlockHandle*)':
table_builder.cc:(.text+0x6a8): undefined reference to
`snappy::MaxCompressedLength(unsigned long)'
table_builder.cc:(.text+0x6e2): undefined reference to
`snappy::RawCompress(char const*, unsigned long, char*, unsigned long*)'
libleveldb.a(format.o): In function
`leveldb::ReadBlock(leveldb::RandomAccessFile*, leveldb::ReadOptions const&,
leveldb::BlockHandle const&, leveldb::BlockContents*)':
format.cc:(.text+0x5de): undefined reference to
`snappy::GetUncompressedLength(char const*, unsigned long, unsigned long*)'
format.cc:(.text+0x64e): undefined reference to `snappy::RawUncompress(char
const*, unsigned long, char*)'
collect2: ld returned 1 exit status
i think may be snappy problem,but i dont kown what to do
```
-----
Original issue reported on code.google.com by `wyy...@gmail.com` on 7 May 2014 at 1:32
|
defect
|
snappy what steps will reproduce the problem i write a c program in linux g o sa main cpp libleveldb a lpthread what is the expected output what do you see instead what version of the product are you using on what operating system leveldb red hat please provide any additional information below this is problem libleveldb a table builder o in function leveldb tablebuilder writeblock leveldb blockbuilder leveldb blockhandle table builder cc text undefined reference to snappy maxcompressedlength unsigned long table builder cc text undefined reference to snappy rawcompress char const unsigned long char unsigned long libleveldb a format o in function leveldb readblock leveldb randomaccessfile leveldb readoptions const leveldb blockhandle const leveldb blockcontents format cc text undefined reference to snappy getuncompressedlength char const unsigned long unsigned long format cc text undefined reference to snappy rawuncompress char const unsigned long char ld returned exit status i think may be snappy problem but i dont kown what to do original issue reported on code google com by wyy gmail com on may at
| 1
|
26,137
| 4,593,627,835
|
IssuesEvent
|
2016-09-21 02:06:47
|
afisher1/GridLAB-D
|
https://api.github.com/repos/afisher1/GridLAB-D
|
closed
|
#64 Residential_loads.glm test model has an error on Altix,
|
defect
|
ERROR: recorder 20 contains a property of meter 15 that is not found
,
|
1.0
|
#64 Residential_loads.glm test model has an error on Altix,
- ERROR: recorder 20 contains a property of meter 15 that is not found
,
|
defect
|
residential loads glm test model has an error on altix error recorder contains a property of meter that is not found
| 1
|
2,409
| 2,607,901,083
|
IssuesEvent
|
2015-02-26 00:13:35
|
chrsmithdemos/zen-coding
|
https://api.github.com/repos/chrsmithdemos/zen-coding
|
closed
|
How to use zen-coding with emacs
|
auto-migrated Priority-Medium Type-Defect
|
```
i'm a emacser. It's there a plug-in for emacs?
```
-----
Original issue reported on code.google.com by `xielingw...@gmail.com` on 3 Feb 2010 at 1:53
|
1.0
|
How to use zen-coding with emacs - ```
i'm a emacser. It's there a plug-in for emacs?
```
-----
Original issue reported on code.google.com by `xielingw...@gmail.com` on 3 Feb 2010 at 1:53
|
defect
|
how to use zen coding with emacs i m a emacser it s there a plug in for emacs original issue reported on code google com by xielingw gmail com on feb at
| 1
|
69,615
| 22,577,059,812
|
IssuesEvent
|
2022-06-28 08:19:08
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
opened
|
Hangs with flickering mouse
|
T-Defect
|
### Steps to reproduce
1. I run `element-desktop` or element-web via `google-chrome`. The problem happens in both of them, but it doesn't happen in element-web via `firefox`.
2. I work normally for a few minutes. Then the UI hanging issue happens. I don't know of a specific way to trigger it.
### Outcome
When the problem happens, the mouse cursor starts flickering and the whole UI is unresponsive. It's like it's in some endless loop processing mouse up/down events.
If I hold down the left mouse button the flickering stops, but again the UI is completely unresponsive.
If I switch to another window (e.g. terminal, thunderbird) and wait for 5 - 10 minutes, and I return to `element-desktop`, it seems like the endless loop is ended and it then works normally for a few minutes more, until the problem happens again.
### Operating system
Ubuntu MATE 22.04
### Application version
element-desktop 1.10.15, google-chrome-stable 103.0.5060.53-1
### How did you install the app?
https://element.io/get-started#linux-details
### Homeserver
matrix.org
### Will you send logs?
Yes
|
1.0
|
Hangs with flickering mouse - ### Steps to reproduce
1. I run `element-desktop` or element-web via `google-chrome`. The problem happens in both of them, but it doesn't happen in element-web via `firefox`.
2. I work normally for a few minutes. Then the UI hanging issue happens. I don't know of a specific way to trigger it.
### Outcome
When the problem happens, the mouse cursor starts flickering and the whole UI is unresponsive. It's like it's in some endless loop processing mouse up/down events.
If I hold down the left mouse button the flickering stops, but again the UI is completely unresponsive.
If I switch to another window (e.g. terminal, thunderbird) and wait for 5 - 10 minutes, and I return to `element-desktop`, it seems like the endless loop is ended and it then works normally for a few minutes more, until the problem happens again.
### Operating system
Ubuntu MATE 22.04
### Application version
element-desktop 1.10.15, google-chrome-stable 103.0.5060.53-1
### How did you install the app?
https://element.io/get-started#linux-details
### Homeserver
matrix.org
### Will you send logs?
Yes
|
defect
|
hangs with flickering mouse steps to reproduce i run element desktop or element web via google chrome the problem happens in both of them but it doesn t happen in element web via firefox i work normally for a few minutes then the ui hanging issue happens i don t know of a specific way to trigger it outcome when the problem happens the mouse cursor starts flickering and the whole ui is unresponsive it s like it s in some endless loop processing mouse up down events if i hold down the left mouse button the flickering stops but again the ui is completely unresponsive if i switch to another window e g terminal thunderbird and wait for minutes and i return to element desktop it seems like the endless loop is ended and it then works normally for a few minutes more until the problem happens again operating system ubuntu mate application version element desktop google chrome stable how did you install the app homeserver matrix org will you send logs yes
| 1
|
17,940
| 23,937,356,195
|
IssuesEvent
|
2022-09-11 12:26:59
|
OpenDataScotland/the_od_bods
|
https://api.github.com/repos/OpenDataScotland/the_od_bods
|
closed
|
Add Spatial Hub as a data source
|
research data processing
|
Add https://data.spatialhub.scot/dataset/ as a source
We think this might be possibly through the existing CKAN API.
Some datasets on the Spatial Hub are already published by local authorities in their individual open data portals which has the potential to cause duplicates. For example, Angus Council’s polling districts is listed on both their [CKAN instance](http://opendata.angus.gov.uk/dataset/angus-council-polling-districts), and [IS’s Spatial Hub](https://data.spatialhub.scot/dataset/polling_districts-an). We need to have a discussion around how we tackle these instances.
- Do we consider them duplicates?
- Do we add both records to the site or do we just add one of them?
- If the latter option, which ones goes on the site?
Out of the 138 datasets on the Spatial Hub, 42 of them are licensed as “not open”, meaning it would potentially be counterproductive to list them on opendata.scot as they would be inaccessible to the vast majority of people. We need to have a discussion around this and decide whether we list these datasets regardless of license or if we filter down to datasets only with an open license. If the latter option was chosen then we would need to do some work on filtering these non-open datasets by adding a new step to our pipeline.
|
1.0
|
Add Spatial Hub as a data source - Add https://data.spatialhub.scot/dataset/ as a source
We think this might be possibly through the existing CKAN API.
Some datasets on the Spatial Hub are already published by local authorities in their individual open data portals which has the potential to cause duplicates. For example, Angus Council’s polling districts is listed on both their [CKAN instance](http://opendata.angus.gov.uk/dataset/angus-council-polling-districts), and [IS’s Spatial Hub](https://data.spatialhub.scot/dataset/polling_districts-an). We need to have a discussion around how we tackle these instances.
- Do we consider them duplicates?
- Do we add both records to the site or do we just add one of them?
- If the latter option, which ones goes on the site?
Out of the 138 datasets on the Spatial Hub, 42 of them are licensed as “not open”, meaning it would potentially be counterproductive to list them on opendata.scot as they would be inaccessible to the vast majority of people. We need to have a discussion around this and decide whether we list these datasets regardless of license or if we filter down to datasets only with an open license. If the latter option was chosen then we would need to do some work on filtering these non-open datasets by adding a new step to our pipeline.
|
non_defect
|
add spatial hub as a data source add as a source we think this might be possibly through the existing ckan api some datasets on the spatial hub are already published by local authorities in their individual open data portals which has the potential to cause duplicates for example angus council’s polling districts is listed on both their and we need to have a discussion around how we tackle these instances do we consider them duplicates do we add both records to the site or do we just add one of them if the latter option which ones goes on the site out of the datasets on the spatial hub of them are licensed as “not open” meaning it would potentially be counterproductive to list them on opendata scot as they would be inaccessible to the vast majority of people we need to have a discussion around this and decide whether we list these datasets regardless of license or if we filter down to datasets only with an open license if the latter option was chosen then we would need to do some work on filtering these non open datasets by adding a new step to our pipeline
| 0
|
308,912
| 26,637,822,923
|
IssuesEvent
|
2023-01-25 00:04:31
|
unifyai/ivy
|
https://api.github.com/repos/unifyai/ivy
|
reopened
|
Fix math.test_tensorflow_is_non_decreasing
|
TensorFlow Frontend Sub Task Failing Test
|
| | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/3998881327/jobs/6862091568" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/3987943815/jobs/6838427705" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/3998881327/jobs/6862113884" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/3998881327/jobs/6862115304" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
<details>
<summary>FAILED ivy_tests/test_ivy/test_frontends/test_tensorflow/test_math.py::test_tensorflow_is_non_decreasing[cpu-ivy.functional.backends.tensorflow-False-False]</summary>
2023-01-24T17:43:30.2201597Z E tensorflow.python.framework.errors_impl.InvalidArgumentError: slice index 1 of dimension 0 out of bounds. [Op:StridedSlice] name: strided_slice/
2023-01-24T17:43:30.2203008Z E Falsifying example: test_tensorflow_is_non_decreasing(
2023-01-24T17:43:30.2204357Z E dtype_and_x=(['bfloat16'], [array([[-1, -1]], dtype=bfloat16)]),
2023-01-24T17:43:30.2206084Z E fn_tree='ivy.functional.frontends.tensorflow.math.is_non_decreasing',
2023-01-24T17:43:30.2207522Z E test_flags=num_positional_args=0. with_out=False. inplace=False. native_arrays=[False]. as_variable=[False]. ,
2023-01-24T17:43:30.2208839Z E frontend='tensorflow',
2023-01-24T17:43:30.2210031Z E on_device='cpu',
2023-01-24T17:43:30.2211208Z E )
2023-01-24T17:43:30.2212287Z E
2023-01-24T17:43:30.2213766Z E You can reproduce this example by temporarily adding @reproduce_failure('6.55.0', b'AXicY2BkYGAEQgYGCAGlkdkMAADBAAg=') as a decorator on your test case
</details>
<details>
<summary>FAILED ivy_tests/test_ivy/test_frontends/test_tensorflow/test_math.py::test_tensorflow_is_non_decreasing[cpu-ivy.functional.backends.numpy-False-False]</summary>
2023-01-23T15:58:17.1299653Z E KeyError: True
2023-01-23T15:58:17.1302177Z E ivy.exceptions.IvyBackendException: numpy: as_ivy_dtype: True
2023-01-23T15:58:17.1306012Z E ivy.exceptions.IvyBackendException: numpy: is_complex_dtype: numpy: as_ivy_dtype: True
2023-01-23T15:58:17.1311491Z E ivy.exceptions.IvyBackendException: numpy: default_dtype: numpy: is_complex_dtype: numpy: as_ivy_dtype: True
2023-01-23T15:58:17.1317890Z E ivy.exceptions.IvyBackendException: numpy: asarray: numpy: default_dtype: numpy: is_complex_dtype: numpy: as_ivy_dtype: True
2023-01-23T15:58:17.1318349Z E Falsifying example: test_tensorflow_is_non_decreasing(
2023-01-23T15:58:17.1318831Z E dtype_and_x=(['float32'], [array([-1., -1.], dtype=float32)]),
2023-01-23T15:58:17.1319313Z E fn_tree='ivy.functional.frontends.tensorflow.math.is_non_decreasing',
2023-01-23T15:58:17.1319793Z E test_flags=num_positional_args=0. with_out=False. inplace=False. native_arrays=[False]. as_variable=[False]. ,
2023-01-23T15:58:17.1320199Z E frontend='tensorflow',
2023-01-23T15:58:17.1320476Z E on_device='cpu',
2023-01-23T15:58:17.1320697Z E )
2023-01-23T15:58:17.1320867Z E
2023-01-23T15:58:17.1321436Z E You can reproduce this example by temporarily adding @reproduce_failure('6.55.0', b'AXicY2BkYGQAAjABpZHZDAAAmwAH') as a decorator on your test case
</details>
<details>
<summary>FAILED ivy_tests/test_ivy/test_frontends/test_tensorflow/test_math.py::test_tensorflow_is_non_decreasing[cpu-ivy.functional.backends.tensorflow-False-False]</summary>
2023-01-24T17:43:30.2201597Z E tensorflow.python.framework.errors_impl.InvalidArgumentError: slice index 1 of dimension 0 out of bounds. [Op:StridedSlice] name: strided_slice/
2023-01-24T17:43:30.2203008Z E Falsifying example: test_tensorflow_is_non_decreasing(
2023-01-24T17:43:30.2204357Z E dtype_and_x=(['bfloat16'], [array([[-1, -1]], dtype=bfloat16)]),
2023-01-24T17:43:30.2206084Z E fn_tree='ivy.functional.frontends.tensorflow.math.is_non_decreasing',
2023-01-24T17:43:30.2207522Z E test_flags=num_positional_args=0. with_out=False. inplace=False. native_arrays=[False]. as_variable=[False]. ,
2023-01-24T17:43:30.2208839Z E frontend='tensorflow',
2023-01-24T17:43:30.2210031Z E on_device='cpu',
2023-01-24T17:43:30.2211208Z E )
2023-01-24T17:43:30.2212287Z E
2023-01-24T17:43:30.2213766Z E You can reproduce this example by temporarily adding @reproduce_failure('6.55.0', b'AXicY2BkYGAEQgYGCAGlkdkMAADBAAg=') as a decorator on your test case
</details>
<details>
<summary>FAILED ivy_tests/test_ivy/test_frontends/test_tensorflow/test_math.py::test_tensorflow_is_non_decreasing[cpu-ivy.functional.backends.tensorflow-False-False]</summary>
2023-01-24T17:43:30.2201597Z E tensorflow.python.framework.errors_impl.InvalidArgumentError: slice index 1 of dimension 0 out of bounds. [Op:StridedSlice] name: strided_slice/
2023-01-24T17:43:30.2203008Z E Falsifying example: test_tensorflow_is_non_decreasing(
2023-01-24T17:43:30.2204357Z E dtype_and_x=(['bfloat16'], [array([[-1, -1]], dtype=bfloat16)]),
2023-01-24T17:43:30.2206084Z E fn_tree='ivy.functional.frontends.tensorflow.math.is_non_decreasing',
2023-01-24T17:43:30.2207522Z E test_flags=num_positional_args=0. with_out=False. inplace=False. native_arrays=[False]. as_variable=[False]. ,
2023-01-24T17:43:30.2208839Z E frontend='tensorflow',
2023-01-24T17:43:30.2210031Z E on_device='cpu',
2023-01-24T17:43:30.2211208Z E )
2023-01-24T17:43:30.2212287Z E
2023-01-24T17:43:30.2213766Z E You can reproduce this example by temporarily adding @reproduce_failure('6.55.0', b'AXicY2BkYGAEQgYGCAGlkdkMAADBAAg=') as a decorator on your test case
</details>
|
1.0
|
Fix math.test_tensorflow_is_non_decreasing - | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/3998881327/jobs/6862091568" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/3987943815/jobs/6838427705" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/3998881327/jobs/6862113884" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/3998881327/jobs/6862115304" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
<details>
<summary>FAILED ivy_tests/test_ivy/test_frontends/test_tensorflow/test_math.py::test_tensorflow_is_non_decreasing[cpu-ivy.functional.backends.tensorflow-False-False]</summary>
2023-01-24T17:43:30.2201597Z E tensorflow.python.framework.errors_impl.InvalidArgumentError: slice index 1 of dimension 0 out of bounds. [Op:StridedSlice] name: strided_slice/
2023-01-24T17:43:30.2203008Z E Falsifying example: test_tensorflow_is_non_decreasing(
2023-01-24T17:43:30.2204357Z E dtype_and_x=(['bfloat16'], [array([[-1, -1]], dtype=bfloat16)]),
2023-01-24T17:43:30.2206084Z E fn_tree='ivy.functional.frontends.tensorflow.math.is_non_decreasing',
2023-01-24T17:43:30.2207522Z E test_flags=num_positional_args=0. with_out=False. inplace=False. native_arrays=[False]. as_variable=[False]. ,
2023-01-24T17:43:30.2208839Z E frontend='tensorflow',
2023-01-24T17:43:30.2210031Z E on_device='cpu',
2023-01-24T17:43:30.2211208Z E )
2023-01-24T17:43:30.2212287Z E
2023-01-24T17:43:30.2213766Z E You can reproduce this example by temporarily adding @reproduce_failure('6.55.0', b'AXicY2BkYGAEQgYGCAGlkdkMAADBAAg=') as a decorator on your test case
</details>
<details>
<summary>FAILED ivy_tests/test_ivy/test_frontends/test_tensorflow/test_math.py::test_tensorflow_is_non_decreasing[cpu-ivy.functional.backends.numpy-False-False]</summary>
2023-01-23T15:58:17.1299653Z E KeyError: True
2023-01-23T15:58:17.1302177Z E ivy.exceptions.IvyBackendException: numpy: as_ivy_dtype: True
2023-01-23T15:58:17.1306012Z E ivy.exceptions.IvyBackendException: numpy: is_complex_dtype: numpy: as_ivy_dtype: True
2023-01-23T15:58:17.1311491Z E ivy.exceptions.IvyBackendException: numpy: default_dtype: numpy: is_complex_dtype: numpy: as_ivy_dtype: True
2023-01-23T15:58:17.1317890Z E ivy.exceptions.IvyBackendException: numpy: asarray: numpy: default_dtype: numpy: is_complex_dtype: numpy: as_ivy_dtype: True
2023-01-23T15:58:17.1318349Z E Falsifying example: test_tensorflow_is_non_decreasing(
2023-01-23T15:58:17.1318831Z E dtype_and_x=(['float32'], [array([-1., -1.], dtype=float32)]),
2023-01-23T15:58:17.1319313Z E fn_tree='ivy.functional.frontends.tensorflow.math.is_non_decreasing',
2023-01-23T15:58:17.1319793Z E test_flags=num_positional_args=0. with_out=False. inplace=False. native_arrays=[False]. as_variable=[False]. ,
2023-01-23T15:58:17.1320199Z E frontend='tensorflow',
2023-01-23T15:58:17.1320476Z E on_device='cpu',
2023-01-23T15:58:17.1320697Z E )
2023-01-23T15:58:17.1320867Z E
2023-01-23T15:58:17.1321436Z E You can reproduce this example by temporarily adding @reproduce_failure('6.55.0', b'AXicY2BkYGQAAjABpZHZDAAAmwAH') as a decorator on your test case
</details>
<details>
<summary>FAILED ivy_tests/test_ivy/test_frontends/test_tensorflow/test_math.py::test_tensorflow_is_non_decreasing[cpu-ivy.functional.backends.tensorflow-False-False]</summary>
2023-01-24T17:43:30.2201597Z E tensorflow.python.framework.errors_impl.InvalidArgumentError: slice index 1 of dimension 0 out of bounds. [Op:StridedSlice] name: strided_slice/
2023-01-24T17:43:30.2203008Z E Falsifying example: test_tensorflow_is_non_decreasing(
2023-01-24T17:43:30.2204357Z E dtype_and_x=(['bfloat16'], [array([[-1, -1]], dtype=bfloat16)]),
2023-01-24T17:43:30.2206084Z E fn_tree='ivy.functional.frontends.tensorflow.math.is_non_decreasing',
2023-01-24T17:43:30.2207522Z E test_flags=num_positional_args=0. with_out=False. inplace=False. native_arrays=[False]. as_variable=[False]. ,
2023-01-24T17:43:30.2208839Z E frontend='tensorflow',
2023-01-24T17:43:30.2210031Z E on_device='cpu',
2023-01-24T17:43:30.2211208Z E )
2023-01-24T17:43:30.2212287Z E
2023-01-24T17:43:30.2213766Z E You can reproduce this example by temporarily adding @reproduce_failure('6.55.0', b'AXicY2BkYGAEQgYGCAGlkdkMAADBAAg=') as a decorator on your test case
</details>
<details>
<summary>FAILED ivy_tests/test_ivy/test_frontends/test_tensorflow/test_math.py::test_tensorflow_is_non_decreasing[cpu-ivy.functional.backends.tensorflow-False-False]</summary>
2023-01-24T17:43:30.2201597Z E tensorflow.python.framework.errors_impl.InvalidArgumentError: slice index 1 of dimension 0 out of bounds. [Op:StridedSlice] name: strided_slice/
2023-01-24T17:43:30.2203008Z E Falsifying example: test_tensorflow_is_non_decreasing(
2023-01-24T17:43:30.2204357Z E dtype_and_x=(['bfloat16'], [array([[-1, -1]], dtype=bfloat16)]),
2023-01-24T17:43:30.2206084Z E fn_tree='ivy.functional.frontends.tensorflow.math.is_non_decreasing',
2023-01-24T17:43:30.2207522Z E test_flags=num_positional_args=0. with_out=False. inplace=False. native_arrays=[False]. as_variable=[False]. ,
2023-01-24T17:43:30.2208839Z E frontend='tensorflow',
2023-01-24T17:43:30.2210031Z E on_device='cpu',
2023-01-24T17:43:30.2211208Z E )
2023-01-24T17:43:30.2212287Z E
2023-01-24T17:43:30.2213766Z E You can reproduce this example by temporarily adding @reproduce_failure('6.55.0', b'AXicY2BkYGAEQgYGCAGlkdkMAADBAAg=') as a decorator on your test case
</details>
|
non_defect
|
fix math test tensorflow is non decreasing tensorflow img src torch img src numpy img src jax img src failed ivy tests test ivy test frontends test tensorflow test math py test tensorflow is non decreasing e tensorflow python framework errors impl invalidargumenterror slice index of dimension out of bounds name strided slice e falsifying example test tensorflow is non decreasing e dtype and x dtype e fn tree ivy functional frontends tensorflow math is non decreasing e test flags num positional args with out false inplace false native arrays as variable e frontend tensorflow e on device cpu e e e you can reproduce this example by temporarily adding reproduce failure b as a decorator on your test case failed ivy tests test ivy test frontends test tensorflow test math py test tensorflow is non decreasing e keyerror true e ivy exceptions ivybackendexception numpy as ivy dtype true e ivy exceptions ivybackendexception numpy is complex dtype numpy as ivy dtype true e ivy exceptions ivybackendexception numpy default dtype numpy is complex dtype numpy as ivy dtype true e ivy exceptions ivybackendexception numpy asarray numpy default dtype numpy is complex dtype numpy as ivy dtype true e falsifying example test tensorflow is non decreasing e dtype and x dtype e fn tree ivy functional frontends tensorflow math is non decreasing e test flags num positional args with out false inplace false native arrays as variable e frontend tensorflow e on device cpu e e e you can reproduce this example by temporarily adding reproduce failure b as a decorator on your test case failed ivy tests test ivy test frontends test tensorflow test math py test tensorflow is non decreasing e tensorflow python framework errors impl invalidargumenterror slice index of dimension out of bounds name strided slice e falsifying example test tensorflow is non decreasing e dtype and x dtype e fn tree ivy functional frontends tensorflow math is non decreasing e test flags num positional args with out false inplace false native arrays as variable e frontend tensorflow e on device cpu e e e you can reproduce this example by temporarily adding reproduce failure b as a decorator on your test case failed ivy tests test ivy test frontends test tensorflow test math py test tensorflow is non decreasing e tensorflow python framework errors impl invalidargumenterror slice index of dimension out of bounds name strided slice e falsifying example test tensorflow is non decreasing e dtype and x dtype e fn tree ivy functional frontends tensorflow math is non decreasing e test flags num positional args with out false inplace false native arrays as variable e frontend tensorflow e on device cpu e e e you can reproduce this example by temporarily adding reproduce failure b as a decorator on your test case
| 0
|
86,127
| 10,474,614,192
|
IssuesEvent
|
2019-09-23 14:48:56
|
microsoft/cobalt
|
https://api.github.com/repos/microsoft/cobalt
|
closed
|
Provide the Cobalt App-Dev community examples to help them get started
|
documentation effort - large
|
While helping myself and others to adopt Cobalt, I've felt a need to write an article, or series of articles, that lays out the journey from 'before Cobalt' to 'after Cobalt' for a few typical web apps and micro-service patterns.
Cobalt is under development, but I think it's at a point where such articles would be stable and help others to find their way forward with Cobalt.
### Description
As a Cobalt contributor, I'd like to craft what I've learned into some written stories that will help others to understand Cobalt, leverage Cobalt, and contribute to Cobalt.
### Acceptance Criteria
One, or more, articles that explain how to build contemporary applications whose infrastructure is managed via Cobalt. Such articles would either need to explain Cobalt, or describe how "advocated patterns," or "templates," are leveraged (preferably across a set of applications).
The stories need to be short enough to be appealing to read (quickly) and just detailed enough to provide a working relationship between the reader and Cobalt.
### Tasks
- [ ] enumerate existing usages of Cobalt, and derive story ideas from them.
- [ ] write some outlines / quick 'sketches' of some of usage patterns, then have the team of contributors and users provide feedback and direction (and writing assistance)
- [ ] write stories/articles!
Again, I'd like to keep these short, but valuable.
|
1.0
|
Provide the Cobalt App-Dev community examples to help them get started - While helping myself and others to adopt Cobalt, I've felt a need to write an article, or series of articles, that lays out the journey from 'before Cobalt' to 'after Cobalt' for a few typical web apps and micro-service patterns.
Cobalt is under development, but I think it's at a point where such articles would be stable and help others to find their way forward with Cobalt.
### Description
As a Cobalt contributor, I'd like to craft what I've learned into some written stories that will help others to understand Cobalt, leverage Cobalt, and contribute to Cobalt.
### Acceptance Criteria
One, or more, articles that explain how to build contemporary applications whose infrastructure is managed via Cobalt. Such articles would either need to explain Cobalt, or describe how "advocated patterns," or "templates," are leveraged (preferably across a set of applications).
The stories need to be short enough to be appealing to read (quickly) and just detailed enough to provide a working relationship between the reader and Cobalt.
### Tasks
- [ ] enumerate existing usages of Cobalt, and derive story ideas from them.
- [ ] write some outlines / quick 'sketches' of some of usage patterns, then have the team of contributors and users provide feedback and direction (and writing assistance)
- [ ] write stories/articles!
Again, I'd like to keep these short, but valuable.
|
non_defect
|
provide the cobalt app dev community examples to help them get started while helping myself and others to adopt cobalt i ve felt a need to write an article or series of articles that lays out the journey from before cobalt to after cobalt for a few typical web apps and micro service patterns cobalt is under development but i think it s at a point where such articles would be stable and help others to find their way forward with cobalt description as a cobalt contributor i d like to craft what i ve learned into some written stories that will help others to understand cobalt leverage cobalt and contribute to cobalt acceptance criteria one or more articles that explain how to build contemporary applications whose infrastructure is managed via cobalt such articles would either need to explain cobalt or describe how advocated patterns or templates are leveraged preferably across a set of applications the stories need to be short enough to be appealing to read quickly and just detailed enough to provide a working relationship between the reader and cobalt tasks enumerate existing usages of cobalt and derive story ideas from them write some outlines quick sketches of some of usage patterns then have the team of contributors and users provide feedback and direction and writing assistance write stories articles again i d like to keep these short but valuable
| 0
|
17,299
| 2,998,204,563
|
IssuesEvent
|
2015-07-23 12:53:34
|
bardsoftware/ganttproject
|
https://api.github.com/repos/bardsoftware/ganttproject
|
closed
|
Change holiday does not adjust dependent tasks
|
auto-migrated GanttChart Type-Defect __Target-Ostrava
|
```
What steps will reproduce the problem?
1.Create a set of tasks that are sequentially dependent on each other.
2.Change a day within a task to a holiday. Observe dependent tasks do not time
ripple.
3.Save and reload. The dependent tasks are now adjusted properly.
What is the expected output? What do you see instead?
When the holdiay is added then any event spanning the holiday is adjusted, but
dependent tasks do not move until the project is saved and reloaded.
What version of the product are you using? On what operating system?
2.7 build 1891 on Windows
Please provide any additional information below.
```
Original issue reported on code.google.com by `kurtbutf...@gmail.com` on 11 Feb 2015 at 10:51
Attachments:
* [Initial.PNG](https://storage.googleapis.com/google-code-attachments/ganttproject/issue-1080/comment-0/Initial.PNG)
* [PostHoliday.PNG](https://storage.googleapis.com/google-code-attachments/ganttproject/issue-1080/comment-0/PostHoliday.PNG)
* [PostReload.PNG](https://storage.googleapis.com/google-code-attachments/ganttproject/issue-1080/comment-0/PostReload.PNG)
|
1.0
|
Change holiday does not adjust dependent tasks - ```
What steps will reproduce the problem?
1.Create a set of tasks that are sequentially dependent on each other.
2.Change a day within a task to a holiday. Observe dependent tasks do not time
ripple.
3.Save and reload. The dependent tasks are now adjusted properly.
What is the expected output? What do you see instead?
When the holdiay is added then any event spanning the holiday is adjusted, but
dependent tasks do not move until the project is saved and reloaded.
What version of the product are you using? On what operating system?
2.7 build 1891 on Windows
Please provide any additional information below.
```
Original issue reported on code.google.com by `kurtbutf...@gmail.com` on 11 Feb 2015 at 10:51
Attachments:
* [Initial.PNG](https://storage.googleapis.com/google-code-attachments/ganttproject/issue-1080/comment-0/Initial.PNG)
* [PostHoliday.PNG](https://storage.googleapis.com/google-code-attachments/ganttproject/issue-1080/comment-0/PostHoliday.PNG)
* [PostReload.PNG](https://storage.googleapis.com/google-code-attachments/ganttproject/issue-1080/comment-0/PostReload.PNG)
|
defect
|
change holiday does not adjust dependent tasks what steps will reproduce the problem create a set of tasks that are sequentially dependent on each other change a day within a task to a holiday observe dependent tasks do not time ripple save and reload the dependent tasks are now adjusted properly what is the expected output what do you see instead when the holdiay is added then any event spanning the holiday is adjusted but dependent tasks do not move until the project is saved and reloaded what version of the product are you using on what operating system build on windows please provide any additional information below original issue reported on code google com by kurtbutf gmail com on feb at attachments
| 1
|
350,977
| 25,008,625,167
|
IssuesEvent
|
2022-11-03 13:46:35
|
USGS-R/drb-inland-salinity-ml
|
https://api.github.com/repos/USGS-R/drb-inland-salinity-ml
|
closed
|
Update README with S3 + review instructions
|
documentation
|
README
- Review on Tallgrass within the review- directory
- targets version built on 0.11.0.
- Need to have saml2aws set up and run saml2aws configure
|
1.0
|
Update README with S3 + review instructions - README
- Review on Tallgrass within the review- directory
- targets version built on 0.11.0.
- Need to have saml2aws set up and run saml2aws configure
|
non_defect
|
update readme with review instructions readme review on tallgrass within the review directory targets version built on need to have set up and run configure
| 0
|
15,304
| 2,850,604,448
|
IssuesEvent
|
2015-05-31 18:26:26
|
damonkohler/android-scripting
|
https://api.github.com/repos/damonkohler/android-scripting
|
closed
|
WebCamFacade APIs does not working on Android 2.1 devices
|
auto-migrated Priority-Medium Type-Defect
|
```
What device(s) are you experiencing the problem on?
Motorola Spice XT-300
What firmware version are you running on the device?
Android 2.1
What steps will reproduce the problem?
1. Any WebCamFacade API
2. import android
droid = android.Android()
3. droid.cameraStartPreview(0,20,"/sdcard/sl4a/jpg/")
What is the expected output? What do you see instead?
Unknown RPC
What version of the product are you using? On what operating system?
Please provide any additional information below.
```
Original issue reported on code.google.com by `mike...@gmail.com` on 6 Nov 2011 at 2:57
|
1.0
|
WebCamFacade APIs does not working on Android 2.1 devices - ```
What device(s) are you experiencing the problem on?
Motorola Spice XT-300
What firmware version are you running on the device?
Android 2.1
What steps will reproduce the problem?
1. Any WebCamFacade API
2. import android
droid = android.Android()
3. droid.cameraStartPreview(0,20,"/sdcard/sl4a/jpg/")
What is the expected output? What do you see instead?
Unknown RPC
What version of the product are you using? On what operating system?
Please provide any additional information below.
```
Original issue reported on code.google.com by `mike...@gmail.com` on 6 Nov 2011 at 2:57
|
defect
|
webcamfacade apis does not working on android devices what device s are you experiencing the problem on motorola spice xt what firmware version are you running on the device android what steps will reproduce the problem any webcamfacade api import android droid android android droid camerastartpreview sdcard jpg what is the expected output what do you see instead unknown rpc what version of the product are you using on what operating system please provide any additional information below original issue reported on code google com by mike gmail com on nov at
| 1
|
50,862
| 13,187,913,475
|
IssuesEvent
|
2020-08-13 05:00:53
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
closed
|
Summary of remaining static analysis Issues (Trac #1549)
|
Migrated from Trac cmake defect
|
== Code that is clearly producing incorrect results ==
`CascadeVariables` returns uninitilized data see bug #1529
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-7461e5.html#EndPath
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-bb648a.html#EndPath
== Code that need improvement ==
Code assumes that the file being read starts with a `#` and contains the number of strings
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-6b756c.html#EndPath
code assumes 8 parameters when the number of parameters is a variable
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-1e32f9.html#EndPath
code path is confusing and has lots of `#ifdefs`
toprec/private/toprec/laputop/I3LaputopParametrization.cxx
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-1a61fb.html#EndPath
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-cfabfc.html#EndPath
== Code I gave up trying to understand the problem==
PROPOSAL/private/PROPOSAL/Amanda.cxx
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-19a21f.html#EndPath
VHESelfVeto/private/clipper/clipper.cxx
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-47fef5.html#EndPath
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-da4893.html#EndPath
fill-ratio/private/fill-ratio/I3FillRatioLite.cxx
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-146975.html#EndPath
icetray/private/icetray/I3Tray.cxx
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-fe2e6e.html#EndPath
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-873438.html#EndPath
ipdf/private/pybindings/Likelihood.cxx
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-6148e1.html#EndPath
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-7d3a26.html#EndPath
mue/private/mue/llhreco.cxx
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-b0852c.html#EndPath
dead assignment is the result of `decode()` but i can't tell if `decode()` has side effects or not
payload-parsing/private/payload-parsing/dump-raw-deltacompressed.cxx
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-c9677b.html#EndPath
tpx/private/tpx/I3IceTopBaselineModule.cxx
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-aaed35.html#EndPath
== copy and paste files with too many problems to list ==
g4-tankresponse/private/g4-tankresponse/triangle/triangle.c
lilliput/private/minimizer/minuit/TMinuit1.cxx
== Memory leaks ==
g4-tankresponse/private/g4-tankresponse/g4classes/G4IceTopDetectorConstruction.cxx
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-5704c1.html#EndPath
g4-tankresponse/private/g4-tankresponse/g4classes/G4IceTopTank.cxx
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-271865.html#EndPath
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-4e6179.html#EndPath
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-e17000.html#EndPath
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-e87022.html#EndPath
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-f536c8.html#EndPath
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-f71588.html#EndPath
mue/private/mue/I3mue.cxx
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-d05444.html#EndPath
== False Positives ==
complains about an impossible code branch
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-141e17.html#EndPath
clang confuses unix's `open` with `I3::dataio::open` see #1536
clsim/private/make_safeprimes/convert.cxx
dataio/private/dataio/I3MultiWriter.cxx
dataio/private/dataio/I3Writer.cxx
dataio/private/pybindings/I3File.cxx
dataio/private/shovel/Model.cxx
dataio/private/shovel/Model.cxx
dataio/private/shovel/Model.cxx
steamshovel/private/steamshovel/FileService.cpp
clang dosn't understand `G4Exception()`
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-1d0a97.html#EndPath
clang doesn't understand Qt memory management see #1537
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-4ca3c9.html#EndPath
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-5c5809.html#EndPath
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1549">https://code.icecube.wisc.edu/ticket/1549</a>, reported by kjmeagher and owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2017-04-24T08:39:36",
"description": "\n== Code that is clearly producing incorrect results ==\n`CascadeVariables` returns uninitilized data see bug #1537\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-7461e5.html#EndPath\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-bb648a.html#EndPath\n\n== Code that need improvement ==\n\nCode assumes that the file being read starts with a `#` and contains the number of strings \nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-6b756c.html#EndPath\n\ncode assumes 8 parameters when the number of parameters is a variable\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-1e32f9.html#EndPath\n\ncode path is confusing and has lots of `#ifdefs`\ntoprec/private/toprec/laputop/I3LaputopParametrization.cxx\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-1a61fb.html#EndPath\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-cfabfc.html#EndPath\n\n== Code I gave up trying to understand the problem==\nPROPOSAL/private/PROPOSAL/Amanda.cxx\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-19a21f.html#EndPath\n\nVHESelfVeto/private/clipper/clipper.cxx\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-47fef5.html#EndPath\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-da4893.html#EndPath\n\nfill-ratio/private/fill-ratio/I3FillRatioLite.cxx\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-146975.html#EndPath\n\nicetray/private/icetray/I3Tray.cxx\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-fe2e6e.html#EndPath\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-873438.html#EndPath\n\nipdf/private/pybindings/Likelihood.cxx\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-6148e1.html#EndPath\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-7d3a26.html#EndPath\n\nmue/private/mue/llhreco.cxx\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-b0852c.html#EndPath\n\ndead assignment is the result of `decode()` but i can't tell if `decode()` has side effects or not\npayload-parsing/private/payload-parsing/dump-raw-deltacompressed.cxx\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-c9677b.html#EndPath\n\n\n\ntpx/private/tpx/I3IceTopBaselineModule.cxx\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-aaed35.html#EndPath\n\n\n== copy and paste files with too many problems to list ==\n\ng4-tankresponse/private/g4-tankresponse/triangle/triangle.c\nlilliput/private/minimizer/minuit/TMinuit1.cxx\n\n== Memory leaks ==\ng4-tankresponse/private/g4-tankresponse/g4classes/G4IceTopDetectorConstruction.cxx\t\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-5704c1.html#EndPath\ng4-tankresponse/private/g4-tankresponse/g4classes/G4IceTopTank.cxx\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-271865.html#EndPath\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-4e6179.html#EndPath\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-e17000.html#EndPath\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-e87022.html#EndPath\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-f536c8.html#EndPath\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-f71588.html#EndPath\n\nmue/private/mue/I3mue.cxx\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-d05444.html#EndPath\n\n== False Positives ==\ncomplains about an impossible code branch\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-141e17.html#EndPath\n\nclang confuses unix's `open` with `I3::dataio::open` see #1544\nclsim/private/make_safeprimes/convert.cxx\ndataio/private/dataio/I3MultiWriter.cxx\ndataio/private/dataio/I3Writer.cxx\ndataio/private/pybindings/I3File.cxx\ndataio/private/shovel/Model.cxx\ndataio/private/shovel/Model.cxx\ndataio/private/shovel/Model.cxx\nsteamshovel/private/steamshovel/FileService.cpp\n\nclang dosn't understand `G4Exception()`\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-1d0a97.html#EndPath\n\nclang doesn't understand Qt memory management see #1545\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-4ca3c9.html#EndPath\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-5c5809.html#EndPath",
"reporter": "kjmeagher",
"cc": "david.schultz, nega, olivas",
"resolution": "fixed",
"_ts": "1493023176837564",
"component": "cmake",
"summary": "Summary of remaining static analysis Issues",
"priority": "normal",
"keywords": "",
"time": "2016-02-15T10:27:59",
"milestone": "Long-Term Future",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
Summary of remaining static analysis Issues (Trac #1549) -
== Code that is clearly producing incorrect results ==
`CascadeVariables` returns uninitilized data see bug #1529
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-7461e5.html#EndPath
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-bb648a.html#EndPath
== Code that need improvement ==
Code assumes that the file being read starts with a `#` and contains the number of strings
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-6b756c.html#EndPath
code assumes 8 parameters when the number of parameters is a variable
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-1e32f9.html#EndPath
code path is confusing and has lots of `#ifdefs`
toprec/private/toprec/laputop/I3LaputopParametrization.cxx
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-1a61fb.html#EndPath
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-cfabfc.html#EndPath
== Code I gave up trying to understand the problem==
PROPOSAL/private/PROPOSAL/Amanda.cxx
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-19a21f.html#EndPath
VHESelfVeto/private/clipper/clipper.cxx
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-47fef5.html#EndPath
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-da4893.html#EndPath
fill-ratio/private/fill-ratio/I3FillRatioLite.cxx
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-146975.html#EndPath
icetray/private/icetray/I3Tray.cxx
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-fe2e6e.html#EndPath
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-873438.html#EndPath
ipdf/private/pybindings/Likelihood.cxx
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-6148e1.html#EndPath
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-7d3a26.html#EndPath
mue/private/mue/llhreco.cxx
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-b0852c.html#EndPath
dead assignment is the result of `decode()` but i can't tell if `decode()` has side effects or not
payload-parsing/private/payload-parsing/dump-raw-deltacompressed.cxx
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-c9677b.html#EndPath
tpx/private/tpx/I3IceTopBaselineModule.cxx
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-aaed35.html#EndPath
== copy and paste files with too many problems to list ==
g4-tankresponse/private/g4-tankresponse/triangle/triangle.c
lilliput/private/minimizer/minuit/TMinuit1.cxx
== Memory leaks ==
g4-tankresponse/private/g4-tankresponse/g4classes/G4IceTopDetectorConstruction.cxx
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-5704c1.html#EndPath
g4-tankresponse/private/g4-tankresponse/g4classes/G4IceTopTank.cxx
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-271865.html#EndPath
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-4e6179.html#EndPath
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-e17000.html#EndPath
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-e87022.html#EndPath
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-f536c8.html#EndPath
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-f71588.html#EndPath
mue/private/mue/I3mue.cxx
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-d05444.html#EndPath
== False Positives ==
complains about an impossible code branch
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-141e17.html#EndPath
clang confuses unix's `open` with `I3::dataio::open` see #1536
clsim/private/make_safeprimes/convert.cxx
dataio/private/dataio/I3MultiWriter.cxx
dataio/private/dataio/I3Writer.cxx
dataio/private/pybindings/I3File.cxx
dataio/private/shovel/Model.cxx
dataio/private/shovel/Model.cxx
dataio/private/shovel/Model.cxx
steamshovel/private/steamshovel/FileService.cpp
clang dosn't understand `G4Exception()`
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-1d0a97.html#EndPath
clang doesn't understand Qt memory management see #1537
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-4ca3c9.html#EndPath
http://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-5c5809.html#EndPath
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1549">https://code.icecube.wisc.edu/ticket/1549</a>, reported by kjmeagher and owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2017-04-24T08:39:36",
"description": "\n== Code that is clearly producing incorrect results ==\n`CascadeVariables` returns uninitilized data see bug #1537\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-7461e5.html#EndPath\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-bb648a.html#EndPath\n\n== Code that need improvement ==\n\nCode assumes that the file being read starts with a `#` and contains the number of strings \nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-6b756c.html#EndPath\n\ncode assumes 8 parameters when the number of parameters is a variable\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-1e32f9.html#EndPath\n\ncode path is confusing and has lots of `#ifdefs`\ntoprec/private/toprec/laputop/I3LaputopParametrization.cxx\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-1a61fb.html#EndPath\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-cfabfc.html#EndPath\n\n== Code I gave up trying to understand the problem==\nPROPOSAL/private/PROPOSAL/Amanda.cxx\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-19a21f.html#EndPath\n\nVHESelfVeto/private/clipper/clipper.cxx\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-47fef5.html#EndPath\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-da4893.html#EndPath\n\nfill-ratio/private/fill-ratio/I3FillRatioLite.cxx\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-146975.html#EndPath\n\nicetray/private/icetray/I3Tray.cxx\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-fe2e6e.html#EndPath\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-873438.html#EndPath\n\nipdf/private/pybindings/Likelihood.cxx\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-6148e1.html#EndPath\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-7d3a26.html#EndPath\n\nmue/private/mue/llhreco.cxx\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-b0852c.html#EndPath\n\ndead assignment is the result of `decode()` but i can't tell if `decode()` has side effects or not\npayload-parsing/private/payload-parsing/dump-raw-deltacompressed.cxx\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-c9677b.html#EndPath\n\n\n\ntpx/private/tpx/I3IceTopBaselineModule.cxx\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-aaed35.html#EndPath\n\n\n== copy and paste files with too many problems to list ==\n\ng4-tankresponse/private/g4-tankresponse/triangle/triangle.c\nlilliput/private/minimizer/minuit/TMinuit1.cxx\n\n== Memory leaks ==\ng4-tankresponse/private/g4-tankresponse/g4classes/G4IceTopDetectorConstruction.cxx\t\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-5704c1.html#EndPath\ng4-tankresponse/private/g4-tankresponse/g4classes/G4IceTopTank.cxx\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-271865.html#EndPath\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-4e6179.html#EndPath\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-e17000.html#EndPath\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-e87022.html#EndPath\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-f536c8.html#EndPath\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-f71588.html#EndPath\n\nmue/private/mue/I3mue.cxx\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-d05444.html#EndPath\n\n== False Positives ==\ncomplains about an impossible code branch\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-141e17.html#EndPath\n\nclang confuses unix's `open` with `I3::dataio::open` see #1544\nclsim/private/make_safeprimes/convert.cxx\ndataio/private/dataio/I3MultiWriter.cxx\ndataio/private/dataio/I3Writer.cxx\ndataio/private/pybindings/I3File.cxx\ndataio/private/shovel/Model.cxx\ndataio/private/shovel/Model.cxx\ndataio/private/shovel/Model.cxx\nsteamshovel/private/steamshovel/FileService.cpp\n\nclang dosn't understand `G4Exception()`\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-1d0a97.html#EndPath\n\nclang doesn't understand Qt memory management see #1545\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-4ca3c9.html#EndPath\nhttp://software.icecube.wisc.edu/static_analysis/2016-02-14-030225-7689-1/report-5c5809.html#EndPath",
"reporter": "kjmeagher",
"cc": "david.schultz, nega, olivas",
"resolution": "fixed",
"_ts": "1493023176837564",
"component": "cmake",
"summary": "Summary of remaining static analysis Issues",
"priority": "normal",
"keywords": "",
"time": "2016-02-15T10:27:59",
"milestone": "Long-Term Future",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
|
defect
|
summary of remaining static analysis issues trac code that is clearly producing incorrect results cascadevariables returns uninitilized data see bug code that need improvement code assumes that the file being read starts with a and contains the number of strings code assumes parameters when the number of parameters is a variable code path is confusing and has lots of ifdefs toprec private toprec laputop cxx code i gave up trying to understand the problem proposal private proposal amanda cxx vheselfveto private clipper clipper cxx fill ratio private fill ratio cxx icetray private icetray cxx ipdf private pybindings likelihood cxx mue private mue llhreco cxx dead assignment is the result of decode but i can t tell if decode has side effects or not payload parsing private payload parsing dump raw deltacompressed cxx tpx private tpx cxx copy and paste files with too many problems to list tankresponse private tankresponse triangle triangle c lilliput private minimizer minuit cxx memory leaks tankresponse private tankresponse cxx tankresponse private tankresponse cxx mue private mue cxx false positives complains about an impossible code branch clang confuses unix s open with dataio open see clsim private make safeprimes convert cxx dataio private dataio cxx dataio private dataio cxx dataio private pybindings cxx dataio private shovel model cxx dataio private shovel model cxx dataio private shovel model cxx steamshovel private steamshovel fileservice cpp clang dosn t understand clang doesn t understand qt memory management see migrated from json status closed changetime description n code that is clearly producing incorrect results n cascadevariables returns uninitilized data see bug n code that need improvement n ncode assumes that the file being read starts with a and contains the number of strings n assumes parameters when the number of parameters is a variable n path is confusing and has lots of ifdefs ntoprec private toprec laputop cxx n code i gave up trying to understand the problem nproposal private proposal amanda cxx n assignment is the result of decode but i can t tell if decode has side effects or not npayload parsing private payload parsing dump raw deltacompressed cxx n copy and paste files with too many problems to list n tankresponse private tankresponse triangle triangle c nlilliput private minimizer minuit cxx n n memory leaks tankresponse private tankresponse cxx t n false positives ncomplains about an impossible code branch n confuses unix s open with dataio open see nclsim private make safeprimes convert cxx ndataio private dataio cxx ndataio private dataio cxx ndataio private pybindings cxx ndataio private shovel model cxx ndataio private shovel model cxx ndataio private shovel model cxx nsteamshovel private steamshovel fileservice cpp n nclang dosn t understand n doesn t understand qt memory management see n reporter kjmeagher cc david schultz nega olivas resolution fixed ts component cmake summary summary of remaining static analysis issues priority normal keywords time milestone long term future owner nega type defect
| 1
|
75,629
| 25,961,302,618
|
IssuesEvent
|
2022-12-18 23:08:04
|
SeleniumHQ/selenium
|
https://api.github.com/repos/SeleniumHQ/selenium
|
closed
|
[🐛 Bug]: [Grid] Sessionslot not released if create session fails
|
C-grid R-awaiting answer I-defect
|
### What happened?
Hello,
I've configured a grid v4 as follows:
```
HUB --> NODE with Relay --> Appium --> Android emulator
```
I try to create a session on the android device
On the first try, appium is correctly called, but fails creating a session (whatever the reason, the important point is that it fails on first call. In my case, this is because I request the chrome browser without providing the right driver file)
So, if I correctly understand the behaviour, Distributor tries to recreate the session.
But this time, it fails quickly
On hub, logs are (repeated several times per second):
```
No slots found for request 23cabc3f-c004-4e28-92d7-37b291183abd and capabilities Capabilities {appium:automationName: Appium, appium:newCommandTimeout: 120, appium:platformVersion: 11, browserName: chrome, goog:chromeOptions: {args: [--disable-translate, --disable-web-security, --disable-site-isolation-tr..., --disable-features=IsolateO...], extensions: []}, nativeWebScreenshot: true, pageLoadStrategy: normal, platformName: ANDROID}
12:01:49.131 INFO [LocalDistributor.newSession] - Unable to find a free slot for request 23cabc3f-c004-4e28-92d7-37b291183abd.
```
On node, logs are (repeated several times per second)
```
12:04:39.910 INFO [RequestConverter.channelRead0] - Start of http request: DefaultHttpRequest(decodeResult: success, version: HTTP/1.1)
POST /se/grid/node/session HTTP/1.1
User-Agent: selenium/4.7.2 (java windows)
X-REGISTRATION-SECRET:
traceparent: 00-eee5ea07c6ca0e888db260440fc0a90f-639f321f239d9edb-01
transfer-encoding: chunked
host: 127.0.0.1:5555
accept: */*
12:04:39.910 INFO [RequestConverter.channelRead0] - Incoming message: DefaultHttpContent(data: PooledSlicedByteBuf(ridx: 0, widx: 604, cap: 604/604, unwrapped: PooledUnsafeDirectByteBuf(ridx: 851, widx: 851, cap: 8192)), decoderResult: success)
12:04:39.910 INFO [RequestConverter.channelRead0] - Incoming message: EmptyLastHttpContent
12:04:39.910 INFO [RequestConverter.channelRead0] - End of http request: EmptyLastHttpContent
12:04:39.913 WARN [SeleniumSpanExporter$1.lambda$export$3] - {"traceId": "eee5ea07c6ca0e888db260440fc0a90f","eventTime": 1671015879910983900,"eventName": "No slot matched the requested capabilities. ","attributes": {"current.session.count": 0,"logger": "org.openqa.selenium.grid.node.local.LocalNode","session.request.capabilities": "Capabilities {appium:automationName: Appium, appium:newCommandTimeout: 120, appium:platformVersion: 11, browserName: chrome, goog:chromeOptions: {args: [--disable-translate, --disable-web-security, --disable-site-isolation-tr..., --disable-features=IsolateO...], extensions: []}, nativeWebScreenshot: true, pageLoadStrategy: normal, platformName: ANDROID}","session.request.downstreamdialect": "[W3C, OSS]"}}
```
The problem seems to be that, in LocalNode.newSession() method, when node checks for the appium slot to be available, it's seen as unavailable, as if it has not been released after the first failure
Hub/node loops on retry until session-request-timeout
As a side note, I've also seen this behaviour some times with a simple browser session, but could not reproduce this time
### How can we reproduce the issue?
```shell
No script just the above setup
Node TOML Configuration
-----------------------
[node]
detect-drivers = false
[relay]
url = "http://localhost:4723/wd/hub"
status-endpoint = "/status"
configs = ["1","{\"browserName\":\"chrome\",\"appium:deviceName\":\"sdk_gphone_x86_64\",\"platformName\":\"ANDROID\",\"appium:platformVersion\":\"11\"}"]
```
### Relevant log output
```shell
12:04:39.910 INFO [RequestConverter.channelRead0] - Start of http request: DefaultHttpRequest(decodeResult: success, version: HTTP/1.1)
POST /se/grid/node/session HTTP/1.1
User-Agent: selenium/4.7.2 (java windows)
X-REGISTRATION-SECRET:
traceparent: 00-eee5ea07c6ca0e888db260440fc0a90f-639f321f239d9edb-01
transfer-encoding: chunked
host: 127.0.0.1:5555
accept: */*
12:04:39.910 INFO [RequestConverter.channelRead0] - Incoming message: DefaultHttpContent(data: PooledSlicedByteBuf(ridx: 0, widx: 604, cap: 604/604, unwrapped: PooledUnsafeDirectByteBuf(ridx: 851, widx: 851, cap: 8192)), decoderResult: success)
12:04:39.910 INFO [RequestConverter.channelRead0] - Incoming message: EmptyLastHttpContent
12:04:39.910 INFO [RequestConverter.channelRead0] - End of http request: EmptyLastHttpContent
12:04:39.913 WARN [SeleniumSpanExporter$1.lambda$export$3] - {"traceId": "eee5ea07c6ca0e888db260440fc0a90f","eventTime": 1671015879910983900,"eventName": "No slot matched the requested capabilities. ","attributes": {"current.session.count": 0,"logger": "org.openqa.selenium.grid.node.local.LocalNode","session.request.capabilities": "Capabilities {appium:automationName: Appium, appium:newCommandTimeout: 120, appium:platformVersion: 11, browserName: chrome, goog:chromeOptions: {args: [--disable-translate, --disable-web-security, --disable-site-isolation-tr..., --disable-features=IsolateO...], extensions: []}, nativeWebScreenshot: true, pageLoadStrategy: normal, platformName: ANDROID}","session.request.downstreamdialect": "[W3C, OSS]"}}
```
### Operating System
Windows 10
### Selenium version
Java => Selenium 4.7.2 on grid
### What are the browser(s) and version(s) where you see this issue?
Android emulator / appium
### What are the browser driver(s) and version(s) where you see this issue?
Appium 1.22.3
### Are you using Selenium Grid?
4.7.2
|
1.0
|
[🐛 Bug]: [Grid] Sessionslot not released if create session fails - ### What happened?
Hello,
I've configured a grid v4 as follows:
```
HUB --> NODE with Relay --> Appium --> Android emulator
```
I try to create a session on the android device
On the first try, appium is correctly called, but fails creating a session (whatever the reason, the important point is that it fails on first call. In my case, this is because I request the chrome browser without providing the right driver file)
So, if I correctly understand the behaviour, Distributor tries to recreate the session.
But this time, it fails quickly
On hub, logs are (repeated several times per second):
```
No slots found for request 23cabc3f-c004-4e28-92d7-37b291183abd and capabilities Capabilities {appium:automationName: Appium, appium:newCommandTimeout: 120, appium:platformVersion: 11, browserName: chrome, goog:chromeOptions: {args: [--disable-translate, --disable-web-security, --disable-site-isolation-tr..., --disable-features=IsolateO...], extensions: []}, nativeWebScreenshot: true, pageLoadStrategy: normal, platformName: ANDROID}
12:01:49.131 INFO [LocalDistributor.newSession] - Unable to find a free slot for request 23cabc3f-c004-4e28-92d7-37b291183abd.
```
On node, logs are (repeated several times per second)
```
12:04:39.910 INFO [RequestConverter.channelRead0] - Start of http request: DefaultHttpRequest(decodeResult: success, version: HTTP/1.1)
POST /se/grid/node/session HTTP/1.1
User-Agent: selenium/4.7.2 (java windows)
X-REGISTRATION-SECRET:
traceparent: 00-eee5ea07c6ca0e888db260440fc0a90f-639f321f239d9edb-01
transfer-encoding: chunked
host: 127.0.0.1:5555
accept: */*
12:04:39.910 INFO [RequestConverter.channelRead0] - Incoming message: DefaultHttpContent(data: PooledSlicedByteBuf(ridx: 0, widx: 604, cap: 604/604, unwrapped: PooledUnsafeDirectByteBuf(ridx: 851, widx: 851, cap: 8192)), decoderResult: success)
12:04:39.910 INFO [RequestConverter.channelRead0] - Incoming message: EmptyLastHttpContent
12:04:39.910 INFO [RequestConverter.channelRead0] - End of http request: EmptyLastHttpContent
12:04:39.913 WARN [SeleniumSpanExporter$1.lambda$export$3] - {"traceId": "eee5ea07c6ca0e888db260440fc0a90f","eventTime": 1671015879910983900,"eventName": "No slot matched the requested capabilities. ","attributes": {"current.session.count": 0,"logger": "org.openqa.selenium.grid.node.local.LocalNode","session.request.capabilities": "Capabilities {appium:automationName: Appium, appium:newCommandTimeout: 120, appium:platformVersion: 11, browserName: chrome, goog:chromeOptions: {args: [--disable-translate, --disable-web-security, --disable-site-isolation-tr..., --disable-features=IsolateO...], extensions: []}, nativeWebScreenshot: true, pageLoadStrategy: normal, platformName: ANDROID}","session.request.downstreamdialect": "[W3C, OSS]"}}
```
The problem seems to be that, in LocalNode.newSession() method, when node checks for the appium slot to be available, it's seen as unavailable, as if it has not been released after the first failure
Hub/node loops on retry until session-request-timeout
As a side note, I've also seen this behaviour some times with a simple browser session, but could not reproduce this time
### How can we reproduce the issue?
```shell
No script just the above setup
Node TOML Configuration
-----------------------
[node]
detect-drivers = false
[relay]
url = "http://localhost:4723/wd/hub"
status-endpoint = "/status"
configs = ["1","{\"browserName\":\"chrome\",\"appium:deviceName\":\"sdk_gphone_x86_64\",\"platformName\":\"ANDROID\",\"appium:platformVersion\":\"11\"}"]
```
### Relevant log output
```shell
12:04:39.910 INFO [RequestConverter.channelRead0] - Start of http request: DefaultHttpRequest(decodeResult: success, version: HTTP/1.1)
POST /se/grid/node/session HTTP/1.1
User-Agent: selenium/4.7.2 (java windows)
X-REGISTRATION-SECRET:
traceparent: 00-eee5ea07c6ca0e888db260440fc0a90f-639f321f239d9edb-01
transfer-encoding: chunked
host: 127.0.0.1:5555
accept: */*
12:04:39.910 INFO [RequestConverter.channelRead0] - Incoming message: DefaultHttpContent(data: PooledSlicedByteBuf(ridx: 0, widx: 604, cap: 604/604, unwrapped: PooledUnsafeDirectByteBuf(ridx: 851, widx: 851, cap: 8192)), decoderResult: success)
12:04:39.910 INFO [RequestConverter.channelRead0] - Incoming message: EmptyLastHttpContent
12:04:39.910 INFO [RequestConverter.channelRead0] - End of http request: EmptyLastHttpContent
12:04:39.913 WARN [SeleniumSpanExporter$1.lambda$export$3] - {"traceId": "eee5ea07c6ca0e888db260440fc0a90f","eventTime": 1671015879910983900,"eventName": "No slot matched the requested capabilities. ","attributes": {"current.session.count": 0,"logger": "org.openqa.selenium.grid.node.local.LocalNode","session.request.capabilities": "Capabilities {appium:automationName: Appium, appium:newCommandTimeout: 120, appium:platformVersion: 11, browserName: chrome, goog:chromeOptions: {args: [--disable-translate, --disable-web-security, --disable-site-isolation-tr..., --disable-features=IsolateO...], extensions: []}, nativeWebScreenshot: true, pageLoadStrategy: normal, platformName: ANDROID}","session.request.downstreamdialect": "[W3C, OSS]"}}
```
### Operating System
Windows 10
### Selenium version
Java => Selenium 4.7.2 on grid
### What are the browser(s) and version(s) where you see this issue?
Android emulator / appium
### What are the browser driver(s) and version(s) where you see this issue?
Appium 1.22.3
### Are you using Selenium Grid?
4.7.2
|
defect
|
sessionslot not released if create session fails what happened hello i ve configured a grid as follows hub node with relay appium android emulator i try to create a session on the android device on the first try appium is correctly called but fails creating a session whatever the reason the important point is that it fails on first call in my case this is because i request the chrome browser without providing the right driver file so if i correctly understand the behaviour distributor tries to recreate the session but this time it fails quickly on hub logs are repeated several times per second no slots found for request and capabilities capabilities appium automationname appium appium newcommandtimeout appium platformversion browsername chrome goog chromeoptions args extensions nativewebscreenshot true pageloadstrategy normal platformname android info unable to find a free slot for request on node logs are repeated several times per second info start of http request defaulthttprequest decoderesult success version http post se grid node session http user agent selenium java windows x registration secret traceparent transfer encoding chunked host accept info incoming message defaulthttpcontent data pooledslicedbytebuf ridx widx cap unwrapped pooledunsafedirectbytebuf ridx widx cap decoderresult success info incoming message emptylasthttpcontent info end of http request emptylasthttpcontent warn traceid eventtime eventname no slot matched the requested capabilities attributes current session count logger org openqa selenium grid node local localnode session request capabilities capabilities appium automationname appium appium newcommandtimeout appium platformversion browsername chrome goog chromeoptions args extensions nativewebscreenshot true pageloadstrategy normal platformname android session request downstreamdialect the problem seems to be that in localnode newsession method when node checks for the appium slot to be available it s seen as unavailable as if it has not been released after the first failure hub node loops on retry until session request timeout as a side note i ve also seen this behaviour some times with a simple browser session but could not reproduce this time how can we reproduce the issue shell no script just the above setup node toml configuration detect drivers false url status endpoint status configs relevant log output shell info start of http request defaulthttprequest decoderesult success version http post se grid node session http user agent selenium java windows x registration secret traceparent transfer encoding chunked host accept info incoming message defaulthttpcontent data pooledslicedbytebuf ridx widx cap unwrapped pooledunsafedirectbytebuf ridx widx cap decoderresult success info incoming message emptylasthttpcontent info end of http request emptylasthttpcontent warn traceid eventtime eventname no slot matched the requested capabilities attributes current session count logger org openqa selenium grid node local localnode session request capabilities capabilities appium automationname appium appium newcommandtimeout appium platformversion browsername chrome goog chromeoptions args extensions nativewebscreenshot true pageloadstrategy normal platformname android session request downstreamdialect operating system windows selenium version java selenium on grid what are the browser s and version s where you see this issue android emulator appium what are the browser driver s and version s where you see this issue appium are you using selenium grid
| 1
|
123,084
| 4,857,042,036
|
IssuesEvent
|
2016-11-12 11:27:08
|
japanesemediamanager/ShokoServer
|
https://api.github.com/repos/japanesemediamanager/ShokoServer
|
closed
|
"The calling thread cannot access this object because a different thread owns it." at the end of an upgrade from 3.6.1.0
|
Bug - High Priority In Progress
|
I have this error at the end of the upgrade processus in 3.7 beta from 3.6.1.0
[related_log.txt](https://github.com/japanesemediamanager/ShokoServer/files/584694/related_log.txt)
|
1.0
|
"The calling thread cannot access this object because a different thread owns it." at the end of an upgrade from 3.6.1.0 - I have this error at the end of the upgrade processus in 3.7 beta from 3.6.1.0
[related_log.txt](https://github.com/japanesemediamanager/ShokoServer/files/584694/related_log.txt)
|
non_defect
|
the calling thread cannot access this object because a different thread owns it at the end of an upgrade from i have this error at the end of the upgrade processus in beta from
| 0
|
71,919
| 9,543,998,358
|
IssuesEvent
|
2019-05-01 12:44:35
|
Arquisoft/dechat_en1b
|
https://api.github.com/repos/Arquisoft/dechat_en1b
|
closed
|
Be able to access the documentation through the deployed app
|
documentation help wanted implementation
|
We have to put a new button or whatever fits good to open the documentation directly from the application currently deployed in the Arquisoft page of the repository.
|
1.0
|
Be able to access the documentation through the deployed app - We have to put a new button or whatever fits good to open the documentation directly from the application currently deployed in the Arquisoft page of the repository.
|
non_defect
|
be able to access the documentation through the deployed app we have to put a new button or whatever fits good to open the documentation directly from the application currently deployed in the arquisoft page of the repository
| 0
|
4,527
| 3,036,628,953
|
IssuesEvent
|
2015-08-06 13:07:41
|
smathot/OpenSesame
|
https://api.github.com/repos/smathot/OpenSesame
|
closed
|
Python 3 compatibility
|
Code enhancement
|
The goal is to support:
- Python 2.7 (current default)
- Python >= 3.3
|
1.0
|
Python 3 compatibility - The goal is to support:
- Python 2.7 (current default)
- Python >= 3.3
|
non_defect
|
python compatibility the goal is to support python current default python
| 0
|
456,997
| 13,151,052,741
|
IssuesEvent
|
2020-08-09 14:51:31
|
chrisjsewell/docutils
|
https://api.github.com/repos/chrisjsewell/docutils
|
closed
|
Fix NameError in latex2e writer. [SF:patches:63]
|
closed-accepted patches priority-5
|
author: mgeisler
created: 2009-08-30 22:21:51
assigned: None
SF_url: https://sourceforge.net/p/docutils/patches/63
A class variable was accessed as a global variable -- this resulted in a bunch of test failures.
---
commenter: mgeisler
posted: 2009-08-30 22:21:53
title: #63 Fix NameError in latex2e writer.
attachments:
- https://sourceforge.net/p/docutils/patches/_discuss/thread/be9cc43b/9071/attachment/nameerror.diff
---
commenter: goodger
posted: 2009-08-31 13:59:38
title: #63 Fix NameError in latex2e writer.
Thank you for your contribution\! It has been checked in to the
Docutils repository.
You can download the most current snapshot from:
http://docutils.sourceforge.net/docutils-snapshot.tgz
---
commenter: goodger
posted: 2009-08-31 13:59:39
title: #63 Fix NameError in latex2e writer.
- **status**: open --> closed-accepted
---
commenter: goodger
posted: 2009-08-31 13:59:39
title: #63 Fix NameError in latex2e writer.
Checked in by Günter Milde.
See https://sourceforge.net/mailarchive/message.php?msg\_name=h7fsp3%24dor%241%40ger.gmane.org
|
1.0
|
Fix NameError in latex2e writer. [SF:patches:63] -
author: mgeisler
created: 2009-08-30 22:21:51
assigned: None
SF_url: https://sourceforge.net/p/docutils/patches/63
A class variable was accessed as a global variable -- this resulted in a bunch of test failures.
---
commenter: mgeisler
posted: 2009-08-30 22:21:53
title: #63 Fix NameError in latex2e writer.
attachments:
- https://sourceforge.net/p/docutils/patches/_discuss/thread/be9cc43b/9071/attachment/nameerror.diff
---
commenter: goodger
posted: 2009-08-31 13:59:38
title: #63 Fix NameError in latex2e writer.
Thank you for your contribution\! It has been checked in to the
Docutils repository.
You can download the most current snapshot from:
http://docutils.sourceforge.net/docutils-snapshot.tgz
---
commenter: goodger
posted: 2009-08-31 13:59:39
title: #63 Fix NameError in latex2e writer.
- **status**: open --> closed-accepted
---
commenter: goodger
posted: 2009-08-31 13:59:39
title: #63 Fix NameError in latex2e writer.
Checked in by Günter Milde.
See https://sourceforge.net/mailarchive/message.php?msg\_name=h7fsp3%24dor%241%40ger.gmane.org
|
non_defect
|
fix nameerror in writer author mgeisler created assigned none sf url a class variable was accessed as a global variable this resulted in a bunch of test failures commenter mgeisler posted title fix nameerror in writer attachments commenter goodger posted title fix nameerror in writer thank you for your contribution it has been checked in to the docutils repository you can download the most current snapshot from commenter goodger posted title fix nameerror in writer status open closed accepted commenter goodger posted title fix nameerror in writer checked in by günter milde see
| 0
|
155,640
| 24,493,324,588
|
IssuesEvent
|
2022-10-10 06:02:38
|
microsoft/FluidFramework
|
https://api.github.com/repos/microsoft/FluidFramework
|
closed
|
offline container load/attached container rehydration
|
feature-request design-required area: loader status: stale
|
Whiteboard wants to be able to load with stashed ops entirely offline. This is a problem because the current stashed ops design assumes we will be able to connect, load from a snapshot prior to the reference sequence number of the first stashed op, and apply the stashed ops at their reference sequence number as the ops are replayed in order to arrive at the same state as the original container before resubmitting.
One option would be to save the snapshot from which we loaded along with all of the ops after it. For long-lived clients, the amount of ops could grow to be very large, and we would have to do this for any client that might want to stash ops at some point, meaning it would negatively effect performance for all clients at all times, not just when stashing/reloading.
Another option would be to save a snapshot of the document when `Container.serialize()` is called, similar to how serialize/rehydrate works in detached containers. The problem with this approach is that we don't currently have the ability to take a snapshot of a container that doesn't include local changes. In a detached container, including local changes is desired, but once attached it is not, which is why we currently spawn a new container that makes no local changes to take the snapshot. So in order to create a new snapshot to be returned in `serialize()`, we would need to already have spawned a summarizing container. As above, this would need to be done for any client who might potentially want to stash changes and reapply them offline. Additionally, I don't believe it would be possible to take this snapshot at the time `serialize()` is called since the pending ops may have reference sequence numbers earlier than the snapshot, meaning they could not be correctly applied and resubmitted in the new container. So this would still need to work similar to above, taking snapshots periodically and always keeping one older than our oldest pending op, along with all ops that come after it.
Another option would be to simply download new summaries as they are posted in order to reduce the amount of ops we need to store. The downside to this would be the increased bandwidth required to download the summaries. This would also need to be done by all clients that might need to stash changes and then load offline.
|
1.0
|
offline container load/attached container rehydration - Whiteboard wants to be able to load with stashed ops entirely offline. This is a problem because the current stashed ops design assumes we will be able to connect, load from a snapshot prior to the reference sequence number of the first stashed op, and apply the stashed ops at their reference sequence number as the ops are replayed in order to arrive at the same state as the original container before resubmitting.
One option would be to save the snapshot from which we loaded along with all of the ops after it. For long-lived clients, the amount of ops could grow to be very large, and we would have to do this for any client that might want to stash ops at some point, meaning it would negatively effect performance for all clients at all times, not just when stashing/reloading.
Another option would be to save a snapshot of the document when `Container.serialize()` is called, similar to how serialize/rehydrate works in detached containers. The problem with this approach is that we don't currently have the ability to take a snapshot of a container that doesn't include local changes. In a detached container, including local changes is desired, but once attached it is not, which is why we currently spawn a new container that makes no local changes to take the snapshot. So in order to create a new snapshot to be returned in `serialize()`, we would need to already have spawned a summarizing container. As above, this would need to be done for any client who might potentially want to stash changes and reapply them offline. Additionally, I don't believe it would be possible to take this snapshot at the time `serialize()` is called since the pending ops may have reference sequence numbers earlier than the snapshot, meaning they could not be correctly applied and resubmitted in the new container. So this would still need to work similar to above, taking snapshots periodically and always keeping one older than our oldest pending op, along with all ops that come after it.
Another option would be to simply download new summaries as they are posted in order to reduce the amount of ops we need to store. The downside to this would be the increased bandwidth required to download the summaries. This would also need to be done by all clients that might need to stash changes and then load offline.
|
non_defect
|
offline container load attached container rehydration whiteboard wants to be able to load with stashed ops entirely offline this is a problem because the current stashed ops design assumes we will be able to connect load from a snapshot prior to the reference sequence number of the first stashed op and apply the stashed ops at their reference sequence number as the ops are replayed in order to arrive at the same state as the original container before resubmitting one option would be to save the snapshot from which we loaded along with all of the ops after it for long lived clients the amount of ops could grow to be very large and we would have to do this for any client that might want to stash ops at some point meaning it would negatively effect performance for all clients at all times not just when stashing reloading another option would be to save a snapshot of the document when container serialize is called similar to how serialize rehydrate works in detached containers the problem with this approach is that we don t currently have the ability to take a snapshot of a container that doesn t include local changes in a detached container including local changes is desired but once attached it is not which is why we currently spawn a new container that makes no local changes to take the snapshot so in order to create a new snapshot to be returned in serialize we would need to already have spawned a summarizing container as above this would need to be done for any client who might potentially want to stash changes and reapply them offline additionally i don t believe it would be possible to take this snapshot at the time serialize is called since the pending ops may have reference sequence numbers earlier than the snapshot meaning they could not be correctly applied and resubmitted in the new container so this would still need to work similar to above taking snapshots periodically and always keeping one older than our oldest pending op along with all ops that come after it another option would be to simply download new summaries as they are posted in order to reduce the amount of ops we need to store the downside to this would be the increased bandwidth required to download the summaries this would also need to be done by all clients that might need to stash changes and then load offline
| 0
|
316,087
| 23,614,500,412
|
IssuesEvent
|
2022-08-24 14:50:50
|
ruithnadsteud/yt_georaster
|
https://api.github.com/repos/ruithnadsteud/yt_georaster
|
opened
|
Documentation needed for new features from PR #64
|
documentation
|
PR #64 introduced a number of new features that need documentation. These include:
- added: up/down-scaling by scalar factor at load
- added: nodata value can be defined at load to use internally (sometimes useful when geotiff's metadata doesn't have a consistent nodata value)
- added: select your rasterio resampling method at load (not always nearest neighbour anymore)
- fixed: coordinate reference system to work in can be defined at load
- added: polygons allows you to map a list of polygons/shapefiles to create a list of yt polygon selections
- enhanced: field_map now works for multiple files
- added: get_field_as_raster_array function outputs window in 2d grid as callable
|
1.0
|
Documentation needed for new features from PR #64 - PR #64 introduced a number of new features that need documentation. These include:
- added: up/down-scaling by scalar factor at load
- added: nodata value can be defined at load to use internally (sometimes useful when geotiff's metadata doesn't have a consistent nodata value)
- added: select your rasterio resampling method at load (not always nearest neighbour anymore)
- fixed: coordinate reference system to work in can be defined at load
- added: polygons allows you to map a list of polygons/shapefiles to create a list of yt polygon selections
- enhanced: field_map now works for multiple files
- added: get_field_as_raster_array function outputs window in 2d grid as callable
|
non_defect
|
documentation needed for new features from pr pr introduced a number of new features that need documentation these include added up down scaling by scalar factor at load added nodata value can be defined at load to use internally sometimes useful when geotiff s metadata doesn t have a consistent nodata value added select your rasterio resampling method at load not always nearest neighbour anymore fixed coordinate reference system to work in can be defined at load added polygons allows you to map a list of polygons shapefiles to create a list of yt polygon selections enhanced field map now works for multiple files added get field as raster array function outputs window in grid as callable
| 0
|
43,124
| 11,492,355,520
|
IssuesEvent
|
2020-02-11 20:49:34
|
telus/tds-core
|
https://api.github.com/repos/telus/tds-core
|
closed
|
Nested <FlexGrid> in a <Box> will over ride the <FlexGrid> padding for limitWidth prop
|
priority: medium status: in progress type: defect :bug:
|
## Description
Nesting a `<FlexGrid>`, with or without the `limitWidth` prop, under a `<Box>` component will over ride the padding
## Reproduction Steps
1. View the `Box` component
2. Nest `<FlexGrid>` under the `<Box>` component
3. Give `<FlexGrid>` the `limitWidth={false}` prop
4. Notice the `<Box>` component is overriding the padding
## Workaround details
N/A
## Recommendation
N/A
## Meta
- TDS component version: @tds/core-flex-grid@3.x
- Willing to develop solution: No
- High impact: Yes
## Screenshots

|
1.0
|
Nested <FlexGrid> in a <Box> will over ride the <FlexGrid> padding for limitWidth prop - ## Description
Nesting a `<FlexGrid>`, with or without the `limitWidth` prop, under a `<Box>` component will over ride the padding
## Reproduction Steps
1. View the `Box` component
2. Nest `<FlexGrid>` under the `<Box>` component
3. Give `<FlexGrid>` the `limitWidth={false}` prop
4. Notice the `<Box>` component is overriding the padding
## Workaround details
N/A
## Recommendation
N/A
## Meta
- TDS component version: @tds/core-flex-grid@3.x
- Willing to develop solution: No
- High impact: Yes
## Screenshots

|
defect
|
nested in a will over ride the padding for limitwidth prop description nesting a with or without the limitwidth prop under a component will over ride the padding reproduction steps view the box component nest under the component give the limitwidth false prop notice the component is overriding the padding workaround details n a recommendation n a meta tds component version tds core flex grid x willing to develop solution no high impact yes screenshots
| 1
|
79,139
| 28,012,792,253
|
IssuesEvent
|
2023-03-27 19:59:09
|
department-of-veterans-affairs/va.gov-team
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
|
closed
|
[Components and pattern standards] Design component or pattern in use isn't mobile-friendly. (04.09.1)
|
content Public Websites design 508/Accessibility ia 508-defect-2 collab-cycle-feedback Staging CCIssue04.09 CC-Dashboard CMS-Public-Websites sitewide-header
|
### General Information
#### VFS team name
Public Websites
#### VFS product name
Sitwwide Header
#### VFS feature name
IE11 Deprecation Alert
#### Point of Contact/Reviewers
Brian DeConinck (@briandeconinck) - Accessibility
*For more information on how to interpret this ticket, please refer to the [Anatomy of a Staging Review issue ticket](https://depo-platform-documentation.scrollhelp.site/collaboration-cycle/Anatomy-of-a-Staging-Review-Issue-ticket.2060320997.html) guidance on Platform Website.
---
### Platform Issue
Design component or pattern in use isn't mobile-friendly.
### Issue Details
At zoom greater than 200%, the deprecation alert is not visible. This impacts low-vision users who keep their browser at high zoom levels to improve readability. Per [WCAG 1.4.10](https://www.w3.org/WAI/WCAG21/Understanding/reflow.html), we support up to 400% zoom.
For users who (1) use IE 11 and (2) keep their browser at zoom levels higher than 200%, this is information that's just missing. Since the deprecation alert is advisory only and there's no immediate loss of functionality, I'm marking this with label `508-defect-2` and not categorizing it as a launch-blocker. But please look at resolving soon after launch.
### Link, screenshot or steps to recreate
1. Navigate to the review instance in IE11 or a browser set to the IE11 user agent string.
2. Resize your browser window to a starting browser viewport of 1280px width.
3. Use `Ctrl` + `+` to change your zoom level.
4. Note that the deprecation alert is visible up to 200%, but disappears at zoom levels beyond that.
### VA.gov Experience Standard
[Category Number 04, Issue Number 09](https://depo-platform-documentation.scrollhelp.site/collaboration-cycle/VA.gov-experience-standards.1683980311.html)
### Other References
WCAG SC 1.4.10 AA
---
### Platform Recommendation
It looks like the deprecation alert is only present in the `#legacy-header` div for the tablet/desktop header, and not included in the mobile instance of the header. That makes sense since IE11 isn't available on mobile devices, but that assumption breaks down for high zoom users. Browser zoom triggers responsive breakpoints, and low-vision users are frequently viewing the "mobile" layout of a page on a laptop or desktop.
Recommendation: Include the deprecation alert in the mobile layout too. I don't think much needs to happen to the banner itself to make it high-zoom friendly, it just needs to render.
### VFS Team Tasks to Complete
- [ ] Comment on the ticket if there are questions or concerns
- [ ] Close the ticket when the issue has been resolved or validated by your Product Owner. If a team has additional questions or needs Platform help validating the issue, please comment in the ticket.
|
1.0
|
[Components and pattern standards] Design component or pattern in use isn't mobile-friendly. (04.09.1) - ### General Information
#### VFS team name
Public Websites
#### VFS product name
Sitwwide Header
#### VFS feature name
IE11 Deprecation Alert
#### Point of Contact/Reviewers
Brian DeConinck (@briandeconinck) - Accessibility
*For more information on how to interpret this ticket, please refer to the [Anatomy of a Staging Review issue ticket](https://depo-platform-documentation.scrollhelp.site/collaboration-cycle/Anatomy-of-a-Staging-Review-Issue-ticket.2060320997.html) guidance on Platform Website.
---
### Platform Issue
Design component or pattern in use isn't mobile-friendly.
### Issue Details
At zoom greater than 200%, the deprecation alert is not visible. This impacts low-vision users who keep their browser at high zoom levels to improve readability. Per [WCAG 1.4.10](https://www.w3.org/WAI/WCAG21/Understanding/reflow.html), we support up to 400% zoom.
For users who (1) use IE 11 and (2) keep their browser at zoom levels higher than 200%, this is information that's just missing. Since the deprecation alert is advisory only and there's no immediate loss of functionality, I'm marking this with label `508-defect-2` and not categorizing it as a launch-blocker. But please look at resolving soon after launch.
### Link, screenshot or steps to recreate
1. Navigate to the review instance in IE11 or a browser set to the IE11 user agent string.
2. Resize your browser window to a starting browser viewport of 1280px width.
3. Use `Ctrl` + `+` to change your zoom level.
4. Note that the deprecation alert is visible up to 200%, but disappears at zoom levels beyond that.
### VA.gov Experience Standard
[Category Number 04, Issue Number 09](https://depo-platform-documentation.scrollhelp.site/collaboration-cycle/VA.gov-experience-standards.1683980311.html)
### Other References
WCAG SC 1.4.10 AA
---
### Platform Recommendation
It looks like the deprecation alert is only present in the `#legacy-header` div for the tablet/desktop header, and not included in the mobile instance of the header. That makes sense since IE11 isn't available on mobile devices, but that assumption breaks down for high zoom users. Browser zoom triggers responsive breakpoints, and low-vision users are frequently viewing the "mobile" layout of a page on a laptop or desktop.
Recommendation: Include the deprecation alert in the mobile layout too. I don't think much needs to happen to the banner itself to make it high-zoom friendly, it just needs to render.
### VFS Team Tasks to Complete
- [ ] Comment on the ticket if there are questions or concerns
- [ ] Close the ticket when the issue has been resolved or validated by your Product Owner. If a team has additional questions or needs Platform help validating the issue, please comment in the ticket.
|
defect
|
design component or pattern in use isn t mobile friendly general information vfs team name public websites vfs product name sitwwide header vfs feature name deprecation alert point of contact reviewers brian deconinck briandeconinck accessibility for more information on how to interpret this ticket please refer to the guidance on platform website platform issue design component or pattern in use isn t mobile friendly issue details at zoom greater than the deprecation alert is not visible this impacts low vision users who keep their browser at high zoom levels to improve readability per we support up to zoom for users who use ie and keep their browser at zoom levels higher than this is information that s just missing since the deprecation alert is advisory only and there s no immediate loss of functionality i m marking this with label defect and not categorizing it as a launch blocker but please look at resolving soon after launch link screenshot or steps to recreate navigate to the review instance in or a browser set to the user agent string resize your browser window to a starting browser viewport of width use ctrl to change your zoom level note that the deprecation alert is visible up to but disappears at zoom levels beyond that va gov experience standard other references wcag sc aa platform recommendation it looks like the deprecation alert is only present in the legacy header div for the tablet desktop header and not included in the mobile instance of the header that makes sense since isn t available on mobile devices but that assumption breaks down for high zoom users browser zoom triggers responsive breakpoints and low vision users are frequently viewing the mobile layout of a page on a laptop or desktop recommendation include the deprecation alert in the mobile layout too i don t think much needs to happen to the banner itself to make it high zoom friendly it just needs to render vfs team tasks to complete comment on the ticket if there are questions or concerns close the ticket when the issue has been resolved or validated by your product owner if a team has additional questions or needs platform help validating the issue please comment in the ticket
| 1
|
276,955
| 24,034,269,598
|
IssuesEvent
|
2022-09-15 17:36:51
|
lowRISC/opentitan
|
https://api.github.com/repos/lowRISC/opentitan
|
closed
|
[chip-test] chip_sw_sleep_pin_wake
|
Component:ChipLevelTest Help Wanted : SW Help Wanted : DD
|
### Test point name
[chip_sw_sleep_pin_wake](https://github.com/lowRISC/opentitan/blob/b65e2705eb4ee814b46939d44f84d1adfee133c3/hw/top_earlgrey/data/chip_testplan.hjson#L375)
### Host side component
None Required
### OpenTitanTool infrastructure implemented
_No response_
### Contact person
@msfschaffner
### Checklist
Please fill out this checklist as items are completed. Link to PRs and issues as appropriate.
- [ ] Check if existing test covers most or all of this testpoint (if so, either extend said test to cover all points, or skip the next 3 checkboxes)
- [ ] Device-side (C) component developed
- [x] Bazel build rules developed
- [ ] Host-side component developed
- [x] HJSON test plan updated with test name (so it shows up in the dashboard)
- [ ] Test added to dvsim nightly regression (and passing at time of checking)
|
1.0
|
[chip-test] chip_sw_sleep_pin_wake - ### Test point name
[chip_sw_sleep_pin_wake](https://github.com/lowRISC/opentitan/blob/b65e2705eb4ee814b46939d44f84d1adfee133c3/hw/top_earlgrey/data/chip_testplan.hjson#L375)
### Host side component
None Required
### OpenTitanTool infrastructure implemented
_No response_
### Contact person
@msfschaffner
### Checklist
Please fill out this checklist as items are completed. Link to PRs and issues as appropriate.
- [ ] Check if existing test covers most or all of this testpoint (if so, either extend said test to cover all points, or skip the next 3 checkboxes)
- [ ] Device-side (C) component developed
- [x] Bazel build rules developed
- [ ] Host-side component developed
- [x] HJSON test plan updated with test name (so it shows up in the dashboard)
- [ ] Test added to dvsim nightly regression (and passing at time of checking)
|
non_defect
|
chip sw sleep pin wake test point name host side component none required opentitantool infrastructure implemented no response contact person msfschaffner checklist please fill out this checklist as items are completed link to prs and issues as appropriate check if existing test covers most or all of this testpoint if so either extend said test to cover all points or skip the next checkboxes device side c component developed bazel build rules developed host side component developed hjson test plan updated with test name so it shows up in the dashboard test added to dvsim nightly regression and passing at time of checking
| 0
|
52,760
| 7,783,911,097
|
IssuesEvent
|
2018-06-06 11:36:26
|
skroutz/mistry
|
https://api.github.com/repos/skroutz/mistry
|
opened
|
Add testing instructions
|
documentation
|
We will need some documentation about how to setup and run the test suite.
Since the testdata have Dockerfiles that communicate with the Internet (e.g. `FROM debian:stretch`) we could also suggest pulling the image locally before running the tests (this is to avoid potential timeouts during the test suite runs that may happen in case of slow internet connections).
|
1.0
|
Add testing instructions - We will need some documentation about how to setup and run the test suite.
Since the testdata have Dockerfiles that communicate with the Internet (e.g. `FROM debian:stretch`) we could also suggest pulling the image locally before running the tests (this is to avoid potential timeouts during the test suite runs that may happen in case of slow internet connections).
|
non_defect
|
add testing instructions we will need some documentation about how to setup and run the test suite since the testdata have dockerfiles that communicate with the internet e g from debian stretch we could also suggest pulling the image locally before running the tests this is to avoid potential timeouts during the test suite runs that may happen in case of slow internet connections
| 0
|
65,561
| 12,623,619,150
|
IssuesEvent
|
2020-06-14 00:11:48
|
fabricjs/fabric.js
|
https://api.github.com/repos/fabricjs/fabric.js
|
closed
|
subTargetCheck not working: Cannot interact with children in group
|
stale will be closed not adequate code sample
|
<!-- BUG TEMPLATE -->
## Version
<= 4.x
Cannot interact with children in group. group.subTargetCheck does not work like docs say.
http://jsfiddle.net/iyobo/bhpsfavr/4/
|
1.0
|
subTargetCheck not working: Cannot interact with children in group - <!-- BUG TEMPLATE -->
## Version
<= 4.x
Cannot interact with children in group. group.subTargetCheck does not work like docs say.
http://jsfiddle.net/iyobo/bhpsfavr/4/
|
non_defect
|
subtargetcheck not working cannot interact with children in group version x cannot interact with children in group group subtargetcheck does not work like docs say
| 0
|
281,850
| 8,700,400,979
|
IssuesEvent
|
2018-12-05 08:37:24
|
AICrowd/ai-crowd-3
|
https://api.github.com/repos/AICrowd/ai-crowd-3
|
closed
|
Style ActiveAdmin to match site styling
|
help wanted low priority
|
_From @seanfcarroll on September 01, 2017 22:24_
Style ActiveAdmin like the main site.
Probably want to wait for this https://github.com/activeadmin/activeadmin/pull/3862
_Copied from original issue: crowdAI/crowdai#302_
|
1.0
|
Style ActiveAdmin to match site styling - _From @seanfcarroll on September 01, 2017 22:24_
Style ActiveAdmin like the main site.
Probably want to wait for this https://github.com/activeadmin/activeadmin/pull/3862
_Copied from original issue: crowdAI/crowdai#302_
|
non_defect
|
style activeadmin to match site styling from seanfcarroll on september style activeadmin like the main site probably want to wait for this copied from original issue crowdai crowdai
| 0
|
87,585
| 3,755,974,896
|
IssuesEvent
|
2016-03-13 01:17:49
|
tomholub/cryptup-chrome
|
https://api.github.com/repos/tomholub/cryptup-chrome
|
closed
|
visual design: attachments (send + receive)
|
help wanted priority
|
I just added a way to send encrypted attachments (for now only text files) and also receive them.
I used a very basic design I'll show in the screenshots, and a better design would be awesome :)
**Sending attachments**
- attaching: it should show a nice list of attached files similar to normal gmail, but maybe green?
- it should show that attachments will be encrypted, (for now with a green lock icon next to each?), later with CryptUP icon (once we have it)
- I guess the "cancel" button should just be a simple X icon
- the "edit" button is bogus and will be removed later

**Receiving attachments**
- each attachment file box that is encrypted gets replaced with a little iframe
- you can style anything inside the iframe
- it should feel similar to normal gmail attachments, but show clearly (also in text) that the files are encrypted
- for now will also use the green lock icon, but I hope in future to replace it with CryptUP icon

Please post screenshots/photoshop deisgns here first (as a reply) and then we put it into code later
thanks!
|
1.0
|
visual design: attachments (send + receive) - I just added a way to send encrypted attachments (for now only text files) and also receive them.
I used a very basic design I'll show in the screenshots, and a better design would be awesome :)
**Sending attachments**
- attaching: it should show a nice list of attached files similar to normal gmail, but maybe green?
- it should show that attachments will be encrypted, (for now with a green lock icon next to each?), later with CryptUP icon (once we have it)
- I guess the "cancel" button should just be a simple X icon
- the "edit" button is bogus and will be removed later

**Receiving attachments**
- each attachment file box that is encrypted gets replaced with a little iframe
- you can style anything inside the iframe
- it should feel similar to normal gmail attachments, but show clearly (also in text) that the files are encrypted
- for now will also use the green lock icon, but I hope in future to replace it with CryptUP icon

Please post screenshots/photoshop deisgns here first (as a reply) and then we put it into code later
thanks!
|
non_defect
|
visual design attachments send receive i just added a way to send encrypted attachments for now only text files and also receive them i used a very basic design i ll show in the screenshots and a better design would be awesome sending attachments attaching it should show a nice list of attached files similar to normal gmail but maybe green it should show that attachments will be encrypted for now with a green lock icon next to each later with cryptup icon once we have it i guess the cancel button should just be a simple x icon the edit button is bogus and will be removed later receiving attachments each attachment file box that is encrypted gets replaced with a little iframe you can style anything inside the iframe it should feel similar to normal gmail attachments but show clearly also in text that the files are encrypted for now will also use the green lock icon but i hope in future to replace it with cryptup icon please post screenshots photoshop deisgns here first as a reply and then we put it into code later thanks
| 0
|
14,995
| 2,837,966,457
|
IssuesEvent
|
2015-05-27 02:56:16
|
kynikos/wiki-monkey
|
https://api.github.com/repos/kynikos/wiki-monkey
|
closed
|
Don't fix headings when editing a section
|
defects plugins
|
The "Fix headings" plugin/button should just blow up when a section is being edited, otherwise executing all the "Text plugins" will mess up the page.
Since all pages on ArchWiki are supposed to be categorized and the header is fixed before the headings in the "Text plugins" row, checking if the textarea starts with `[[Category:` might do the trick.
|
1.0
|
Don't fix headings when editing a section - The "Fix headings" plugin/button should just blow up when a section is being edited, otherwise executing all the "Text plugins" will mess up the page.
Since all pages on ArchWiki are supposed to be categorized and the header is fixed before the headings in the "Text plugins" row, checking if the textarea starts with `[[Category:` might do the trick.
|
defect
|
don t fix headings when editing a section the fix headings plugin button should just blow up when a section is being edited otherwise executing all the text plugins will mess up the page since all pages on archwiki are supposed to be categorized and the header is fixed before the headings in the text plugins row checking if the textarea starts with category might do the trick
| 1
|
54,147
| 13,440,967,554
|
IssuesEvent
|
2020-09-08 02:39:41
|
AquiverV/aquiver
|
https://api.github.com/repos/AquiverV/aquiver
|
closed
|
注册路由时需要根据是否加上/来进行兼容处理
|
defect
|
```java
@Path("/kafka")
public class KafkaRoute {
private static final Logger log = LoggerFactory.getLogger(KafkaRoute.class);
@GET("/info")
public void kafka() {
log.info("KafkaRoute kafka");
}
}
```
这里的Path注解和GET是否加 **/** 都应该可以通过 http://localhost:8080/kafka/info 进行匹配
|
1.0
|
注册路由时需要根据是否加上/来进行兼容处理 - ```java
@Path("/kafka")
public class KafkaRoute {
private static final Logger log = LoggerFactory.getLogger(KafkaRoute.class);
@GET("/info")
public void kafka() {
log.info("KafkaRoute kafka");
}
}
```
这里的Path注解和GET是否加 **/** 都应该可以通过 http://localhost:8080/kafka/info 进行匹配
|
defect
|
注册路由时需要根据是否加上 来进行兼容处理 java path kafka public class kafkaroute private static final logger log loggerfactory getlogger kafkaroute class get info public void kafka log info kafkaroute kafka 这里的path注解和get是否加 都应该可以通过 进行匹配
| 1
|
52,025
| 13,211,369,681
|
IssuesEvent
|
2020-08-15 22:38:38
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
opened
|
[ppc] opencl compile warnings (Trac #1516)
|
Incomplete Migration Migrated from Trac combo simulation defect
|
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1516">https://code.icecube.wisc.edu/projects/icecube/ticket/1516</a>, reported by david.schultzand owned by dima</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:13:35",
"_ts": "1550067215093672",
"description": "Needs fixing:\n\n{{{\n/home/dschultz/Documents/combo/trunk/src/ppc/private/ppc/ocl/ppc.cxx: In\n constructor \u2018xppc::gpu::gpu(int)\u2019:\n/home/dschultz/Documents/combo/trunk/src/ppc/private/ppc/ocl/ppc.cxx:162:59:\n warning: passing NULL to non-pointer argument 3 of \u2018cl_int\n clGetDeviceInfo(cl_device_id, cl_device_info, size_t, void *, size_t *)\u2019\n [-Wconversion-null]\nclGetDeviceInfo(devID, CL_DEVICE_VENDOR, NULL, NULL, &siz);\n^\n}}}\n\nNeed something like `#define CL_USE_DEPRECATED_OPENCL_1_1_APIS` to get rid of the below warnings.\n\n{{{\n/home/dschultz/Documents/combo/trunk/src/ppc/private/ppc/ocl/ppc.cxx:190:7:\n warning: \u2018_cl_command_queue * clCreateCommandQueue(\n cl_context, cl_device_id, cl_command_queue_properties, cl_int *)\u2019 is\n deprecated [-Wdeprecated-declarations]\ncq = clCreateCommandQueue(ctx, devID, CL_QUEUE_PROFILING_ENABLE, &err);\n checkError(err);\n^\nIn file included from /usr/include/CL/opencl.h:42:0,\nfrom /home/dschultz/Documents/combo/trunk/src/ppc/private/ppc/ocl/ppc.cxx:16:\n/usr/include/CL/cl.h:1359:1: note: declared here\nclCreateCommandQueue(\n cl_context /* context */,\n^\n/home/dschultz/Documents/combo/trunk/src/ppc/private/ppc/ocl/ppc.cxx:190:7:\n warning: \u2018_cl_command_queue * clCreateCommandQueue(\n cl_context, cl_device_id, cl_command_queue_properties, cl_int *)\u2019 is\n deprecated [-Wdeprecated-declarations]\ncq = clCreateCommandQueue(ctx, devID, CL_QUEUE_PROFILING_ENABLE, &err);\n checkError(err);\n^\nIn file included from /usr/include/CL/opencl.h:42:0,\nfrom /home/dschultz/Documents/combo/trunk/src/ppc/private/ppc/ocl/ppc.cxx:16:\n/usr/include/CL/cl.h:1359:1: note: declared here\nclCreateCommandQueue(\n cl_context /* context */,\n^\n/home/dschultz/Documents/combo/trunk/src/ppc/private/ppc/ocl/ppc.cxx:190:71:\n warning: \u2018_cl_command_queue * clCreateCommandQueue(\n cl_context, cl_device_id, cl_command_queue_properties, cl_int *)\u2019 is\n deprecated [-Wdeprecated-declarations]\ncq = clCreateCommandQueue(ctx, devID, CL_QUEUE_PROFILING_ENABLE, &err);\n checkError(err);\n^\nIn file included from /usr/include/CL/opencl.h:42:0,\nfrom /home/dschultz/Documents/combo/trunk/src/ppc/private/ppc/ocl/ppc.cxx:16:\n/usr/include/CL/cl.h:1359:1: note: declared here\nclCreateCommandQueue(\n cl_context /* context */,\n^\n/home/dschultz/Documents/combo/trunk/src/ppc/private/ppc/ocl/ppc.cxx: In member\n function \u2018void xppc::gpu::kernel_f()\u2019:\n/home/dschultz/Documents/combo/trunk/src/ppc/private/ppc/ocl/ppc.cxx:381:13:\n warning: \u2018cl_int clEnqueueTask(\n cl_command_queue, cl_kernel, cl_uint, _cl_event * const *, _cl_event\n **)\u2019 is deprecated [-Wdeprecated-declarations]\ncheckError(clEnqueueTask(cq, clkernel, 0, NULL, NULL));\n^\nIn file included from /usr/include/CL/opencl.h:42:0,\nfrom /home/dschultz/Documents/combo/trunk/src/ppc/private/ppc/ocl/ppc.cxx:16:\n/usr/include/CL/cl.h:1373:1: note: declared here\nclEnqueueTask(\n cl_command_queue /* command_queue */,\n^\n/home/dschultz/Documents/combo/trunk/src/ppc/private/ppc/ocl/ppc.cxx:381:13:\n warning: \u2018cl_int clEnqueueTask(\n cl_command_queue, cl_kernel, cl_uint, _cl_event * const *, _cl_event\n **)\u2019 is deprecated [-Wdeprecated-declarations]\ncheckError(clEnqueueTask(cq, clkernel, 0, NULL, NULL));\n^\nIn file included from /usr/include/CL/opencl.h:42:0,\nfrom /home/dschultz/Documents/combo/trunk/src/ppc/private/ppc/ocl/ppc.cxx:16:\n/usr/include/CL/cl.h:1373:1: note: declared here\nclEnqueueTask(\n cl_command_queue /* command_queue */,\n^\n/home/dschultz/Documents/combo/trunk/src/ppc/private/ppc/ocl/ppc.cxx:381:54:\n warning: \u2018cl_int clEnqueueTask(\n cl_command_queue, cl_kernel, cl_uint, _cl_event * const *, _cl_event\n **)\u2019 is deprecated [-Wdeprecated-declarations]\ncheckError(clEnqueueTask(cq, clkernel, 0, NULL, NULL));\n^\nIn file included from /usr/include/CL/opencl.h:42:0,\nfrom /home/dschultz/Documents/combo/trunk/src/ppc/private/ppc/ocl/ppc.cxx:16:\n/usr/include/CL/cl.h:1373:1: note: declared here\nclEnqueueTask(\n cl_command_queue /* command_queue */,\n^\n}}}",
"reporter": "david.schultz",
"cc": "olivas",
"resolution": "fixed",
"time": "2016-01-20T18:03:33",
"component": "combo simulation",
"summary": "[ppc] opencl compile warnings",
"priority": "major",
"keywords": "",
"milestone": "",
"owner": "dima",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
[ppc] opencl compile warnings (Trac #1516) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1516">https://code.icecube.wisc.edu/projects/icecube/ticket/1516</a>, reported by david.schultzand owned by dima</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:13:35",
"_ts": "1550067215093672",
"description": "Needs fixing:\n\n{{{\n/home/dschultz/Documents/combo/trunk/src/ppc/private/ppc/ocl/ppc.cxx: In\n constructor \u2018xppc::gpu::gpu(int)\u2019:\n/home/dschultz/Documents/combo/trunk/src/ppc/private/ppc/ocl/ppc.cxx:162:59:\n warning: passing NULL to non-pointer argument 3 of \u2018cl_int\n clGetDeviceInfo(cl_device_id, cl_device_info, size_t, void *, size_t *)\u2019\n [-Wconversion-null]\nclGetDeviceInfo(devID, CL_DEVICE_VENDOR, NULL, NULL, &siz);\n^\n}}}\n\nNeed something like `#define CL_USE_DEPRECATED_OPENCL_1_1_APIS` to get rid of the below warnings.\n\n{{{\n/home/dschultz/Documents/combo/trunk/src/ppc/private/ppc/ocl/ppc.cxx:190:7:\n warning: \u2018_cl_command_queue * clCreateCommandQueue(\n cl_context, cl_device_id, cl_command_queue_properties, cl_int *)\u2019 is\n deprecated [-Wdeprecated-declarations]\ncq = clCreateCommandQueue(ctx, devID, CL_QUEUE_PROFILING_ENABLE, &err);\n checkError(err);\n^\nIn file included from /usr/include/CL/opencl.h:42:0,\nfrom /home/dschultz/Documents/combo/trunk/src/ppc/private/ppc/ocl/ppc.cxx:16:\n/usr/include/CL/cl.h:1359:1: note: declared here\nclCreateCommandQueue(\n cl_context /* context */,\n^\n/home/dschultz/Documents/combo/trunk/src/ppc/private/ppc/ocl/ppc.cxx:190:7:\n warning: \u2018_cl_command_queue * clCreateCommandQueue(\n cl_context, cl_device_id, cl_command_queue_properties, cl_int *)\u2019 is\n deprecated [-Wdeprecated-declarations]\ncq = clCreateCommandQueue(ctx, devID, CL_QUEUE_PROFILING_ENABLE, &err);\n checkError(err);\n^\nIn file included from /usr/include/CL/opencl.h:42:0,\nfrom /home/dschultz/Documents/combo/trunk/src/ppc/private/ppc/ocl/ppc.cxx:16:\n/usr/include/CL/cl.h:1359:1: note: declared here\nclCreateCommandQueue(\n cl_context /* context */,\n^\n/home/dschultz/Documents/combo/trunk/src/ppc/private/ppc/ocl/ppc.cxx:190:71:\n warning: \u2018_cl_command_queue * clCreateCommandQueue(\n cl_context, cl_device_id, cl_command_queue_properties, cl_int *)\u2019 is\n deprecated [-Wdeprecated-declarations]\ncq = clCreateCommandQueue(ctx, devID, CL_QUEUE_PROFILING_ENABLE, &err);\n checkError(err);\n^\nIn file included from /usr/include/CL/opencl.h:42:0,\nfrom /home/dschultz/Documents/combo/trunk/src/ppc/private/ppc/ocl/ppc.cxx:16:\n/usr/include/CL/cl.h:1359:1: note: declared here\nclCreateCommandQueue(\n cl_context /* context */,\n^\n/home/dschultz/Documents/combo/trunk/src/ppc/private/ppc/ocl/ppc.cxx: In member\n function \u2018void xppc::gpu::kernel_f()\u2019:\n/home/dschultz/Documents/combo/trunk/src/ppc/private/ppc/ocl/ppc.cxx:381:13:\n warning: \u2018cl_int clEnqueueTask(\n cl_command_queue, cl_kernel, cl_uint, _cl_event * const *, _cl_event\n **)\u2019 is deprecated [-Wdeprecated-declarations]\ncheckError(clEnqueueTask(cq, clkernel, 0, NULL, NULL));\n^\nIn file included from /usr/include/CL/opencl.h:42:0,\nfrom /home/dschultz/Documents/combo/trunk/src/ppc/private/ppc/ocl/ppc.cxx:16:\n/usr/include/CL/cl.h:1373:1: note: declared here\nclEnqueueTask(\n cl_command_queue /* command_queue */,\n^\n/home/dschultz/Documents/combo/trunk/src/ppc/private/ppc/ocl/ppc.cxx:381:13:\n warning: \u2018cl_int clEnqueueTask(\n cl_command_queue, cl_kernel, cl_uint, _cl_event * const *, _cl_event\n **)\u2019 is deprecated [-Wdeprecated-declarations]\ncheckError(clEnqueueTask(cq, clkernel, 0, NULL, NULL));\n^\nIn file included from /usr/include/CL/opencl.h:42:0,\nfrom /home/dschultz/Documents/combo/trunk/src/ppc/private/ppc/ocl/ppc.cxx:16:\n/usr/include/CL/cl.h:1373:1: note: declared here\nclEnqueueTask(\n cl_command_queue /* command_queue */,\n^\n/home/dschultz/Documents/combo/trunk/src/ppc/private/ppc/ocl/ppc.cxx:381:54:\n warning: \u2018cl_int clEnqueueTask(\n cl_command_queue, cl_kernel, cl_uint, _cl_event * const *, _cl_event\n **)\u2019 is deprecated [-Wdeprecated-declarations]\ncheckError(clEnqueueTask(cq, clkernel, 0, NULL, NULL));\n^\nIn file included from /usr/include/CL/opencl.h:42:0,\nfrom /home/dschultz/Documents/combo/trunk/src/ppc/private/ppc/ocl/ppc.cxx:16:\n/usr/include/CL/cl.h:1373:1: note: declared here\nclEnqueueTask(\n cl_command_queue /* command_queue */,\n^\n}}}",
"reporter": "david.schultz",
"cc": "olivas",
"resolution": "fixed",
"time": "2016-01-20T18:03:33",
"component": "combo simulation",
"summary": "[ppc] opencl compile warnings",
"priority": "major",
"keywords": "",
"milestone": "",
"owner": "dima",
"type": "defect"
}
```
</p>
</details>
|
defect
|
opencl compile warnings trac migrated from json status closed changetime ts description needs fixing n n n home dschultz documents combo trunk src ppc private ppc ocl ppc cxx in n constructor gpu gpu int n home dschultz documents combo trunk src ppc private ppc ocl ppc cxx n warning passing null to non pointer argument of int n clgetdeviceinfo cl device id cl device info size t void size t n nclgetdeviceinfo devid cl device vendor null null siz n n n nneed something like define cl use deprecated opencl apis to get rid of the below warnings n n n home dschultz documents combo trunk src ppc private ppc ocl ppc cxx n warning cl command queue clcreatecommandqueue n cl context cl device id cl command queue properties cl int is n deprecated ncq clcreatecommandqueue ctx devid cl queue profiling enable err n checkerror err n nin file included from usr include cl opencl h nfrom home dschultz documents combo trunk src ppc private ppc ocl ppc cxx n usr include cl cl h note declared here nclcreatecommandqueue n cl context context n n home dschultz documents combo trunk src ppc private ppc ocl ppc cxx n warning cl command queue clcreatecommandqueue n cl context cl device id cl command queue properties cl int is n deprecated ncq clcreatecommandqueue ctx devid cl queue profiling enable err n checkerror err n nin file included from usr include cl opencl h nfrom home dschultz documents combo trunk src ppc private ppc ocl ppc cxx n usr include cl cl h note declared here nclcreatecommandqueue n cl context context n n home dschultz documents combo trunk src ppc private ppc ocl ppc cxx n warning cl command queue clcreatecommandqueue n cl context cl device id cl command queue properties cl int is n deprecated ncq clcreatecommandqueue ctx devid cl queue profiling enable err n checkerror err n nin file included from usr include cl opencl h nfrom home dschultz documents combo trunk src ppc private ppc ocl ppc cxx n usr include cl cl h note declared here nclcreatecommandqueue n cl context context n n home dschultz documents combo trunk src ppc private ppc ocl ppc cxx in member n function xppc gpu kernel f n home dschultz documents combo trunk src ppc private ppc ocl ppc cxx n warning int clenqueuetask n cl command queue cl kernel cl uint cl event const cl event n is deprecated ncheckerror clenqueuetask cq clkernel null null n nin file included from usr include cl opencl h nfrom home dschultz documents combo trunk src ppc private ppc ocl ppc cxx n usr include cl cl h note declared here nclenqueuetask n cl command queue command queue n n home dschultz documents combo trunk src ppc private ppc ocl ppc cxx n warning int clenqueuetask n cl command queue cl kernel cl uint cl event const cl event n is deprecated ncheckerror clenqueuetask cq clkernel null null n nin file included from usr include cl opencl h nfrom home dschultz documents combo trunk src ppc private ppc ocl ppc cxx n usr include cl cl h note declared here nclenqueuetask n cl command queue command queue n n home dschultz documents combo trunk src ppc private ppc ocl ppc cxx n warning int clenqueuetask n cl command queue cl kernel cl uint cl event const cl event n is deprecated ncheckerror clenqueuetask cq clkernel null null n nin file included from usr include cl opencl h nfrom home dschultz documents combo trunk src ppc private ppc ocl ppc cxx n usr include cl cl h note declared here nclenqueuetask n cl command queue command queue n n reporter david schultz cc olivas resolution fixed time component combo simulation summary opencl compile warnings priority major keywords milestone owner dima type defect
| 1
|
20,998
| 3,441,881,471
|
IssuesEvent
|
2015-12-14 20:16:43
|
wdg/blacktree-secrets
|
https://api.github.com/repos/wdg/blacktree-secrets
|
closed
|
Dock->Pinning options are incorrect
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. Select "Dock"
2. Scroll to "Pinning"
3. Observe choice of "End", "Middle", and "Start"
What is the expected output? What do you see instead?
These choices must be in lower case, "end", "middle", and "start".
What version of the product are you using? On what operating system?
I updated "Secrets" on July 20th, 2014, it is running on OS-X 10.9.4.
Please provide any additional information below.
These strings were case-insensitive until OS-X 10.8.x, it is only 10.8.x and
10.9.x that require the strings to be in all lower case.
```
Original issue reported on code.google.com by `michaelgalassi` on 20 Jul 2014 at 6:02
|
1.0
|
Dock->Pinning options are incorrect - ```
What steps will reproduce the problem?
1. Select "Dock"
2. Scroll to "Pinning"
3. Observe choice of "End", "Middle", and "Start"
What is the expected output? What do you see instead?
These choices must be in lower case, "end", "middle", and "start".
What version of the product are you using? On what operating system?
I updated "Secrets" on July 20th, 2014, it is running on OS-X 10.9.4.
Please provide any additional information below.
These strings were case-insensitive until OS-X 10.8.x, it is only 10.8.x and
10.9.x that require the strings to be in all lower case.
```
Original issue reported on code.google.com by `michaelgalassi` on 20 Jul 2014 at 6:02
|
defect
|
dock pinning options are incorrect what steps will reproduce the problem select dock scroll to pinning observe choice of end middle and start what is the expected output what do you see instead these choices must be in lower case end middle and start what version of the product are you using on what operating system i updated secrets on july it is running on os x please provide any additional information below these strings were case insensitive until os x x it is only x and x that require the strings to be in all lower case original issue reported on code google com by michaelgalassi on jul at
| 1
|
267,820
| 28,509,239,380
|
IssuesEvent
|
2023-04-19 01:47:35
|
dpteam/RK3188_TABLET
|
https://api.github.com/repos/dpteam/RK3188_TABLET
|
closed
|
CVE-2015-8550 (High) detected in linuxv3.0.70 - autoclosed
|
Mend: dependency security vulnerability
|
## CVE-2015-8550 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv3.0.70</b></p></summary>
<p>
<p>Development tree</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/keithp/linux.git>https://git.kernel.org/pub/scm/linux/kernel/git/keithp/linux.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/dpteam/RK3188_TABLET/commit/0c501f5a0fd72c7b2ac82904235363bd44fd8f9e">0c501f5a0fd72c7b2ac82904235363bd44fd8f9e</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/include/xen/interface/io/ring.h</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/include/xen/interface/io/ring.h</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/include/xen/interface/io/ring.h</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Xen, when used on a system providing PV backends, allows local guest OS administrators to cause a denial of service (host OS crash) or gain privileges by writing to memory shared between the frontend and backend, aka a double fetch vulnerability.
<p>Publish Date: 2016-04-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-8550>CVE-2015-8550</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2015-8550">https://www.linuxkernelcves.com/cves/CVE-2015-8550</a></p>
<p>Release Date: 2016-04-14</p>
<p>Fix Resolution: v4.4-rc6,v3.12.58,v3.16.35,v3.2.76</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2015-8550 (High) detected in linuxv3.0.70 - autoclosed - ## CVE-2015-8550 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv3.0.70</b></p></summary>
<p>
<p>Development tree</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/keithp/linux.git>https://git.kernel.org/pub/scm/linux/kernel/git/keithp/linux.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/dpteam/RK3188_TABLET/commit/0c501f5a0fd72c7b2ac82904235363bd44fd8f9e">0c501f5a0fd72c7b2ac82904235363bd44fd8f9e</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/include/xen/interface/io/ring.h</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/include/xen/interface/io/ring.h</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/include/xen/interface/io/ring.h</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Xen, when used on a system providing PV backends, allows local guest OS administrators to cause a denial of service (host OS crash) or gain privileges by writing to memory shared between the frontend and backend, aka a double fetch vulnerability.
<p>Publish Date: 2016-04-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-8550>CVE-2015-8550</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2015-8550">https://www.linuxkernelcves.com/cves/CVE-2015-8550</a></p>
<p>Release Date: 2016-04-14</p>
<p>Fix Resolution: v4.4-rc6,v3.12.58,v3.16.35,v3.2.76</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve high detected in autoclosed cve high severity vulnerability vulnerable library development tree library home page a href found in head commit a href found in base branch master vulnerable source files include xen interface io ring h include xen interface io ring h include xen interface io ring h vulnerability details xen when used on a system providing pv backends allows local guest os administrators to cause a denial of service host os crash or gain privileges by writing to memory shared between the frontend and backend aka a double fetch vulnerability publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required high user interaction none scope changed impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
19,750
| 3,252,144,454
|
IssuesEvent
|
2015-10-19 13:41:29
|
patric-r/jvmtop
|
https://api.github.com/repos/patric-r/jvmtop
|
closed
|
Doesn't work on OS X
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. Try and run with OS X
2. Does not work. Says requires real JDK.
tools.jar is not packaged up as a separate jar in the OS X JDK. You don't need
to add it to the classpath like on Windows/Unix. I wrote and tested the
following fix:
if [ `uname` != 'Darwin' ] ; then
if [ ! -f "$TOOLSJAR" ] ; then
echo "$JAVA_HOME seems to be no JDK!" >&2
exit 1
fi
fi
```
Original issue reported on code.google.com by `hughes.m...@gmail.com` on 10 Sep 2013 at 11:51
|
1.0
|
Doesn't work on OS X - ```
What steps will reproduce the problem?
1. Try and run with OS X
2. Does not work. Says requires real JDK.
tools.jar is not packaged up as a separate jar in the OS X JDK. You don't need
to add it to the classpath like on Windows/Unix. I wrote and tested the
following fix:
if [ `uname` != 'Darwin' ] ; then
if [ ! -f "$TOOLSJAR" ] ; then
echo "$JAVA_HOME seems to be no JDK!" >&2
exit 1
fi
fi
```
Original issue reported on code.google.com by `hughes.m...@gmail.com` on 10 Sep 2013 at 11:51
|
defect
|
doesn t work on os x what steps will reproduce the problem try and run with os x does not work says requires real jdk tools jar is not packaged up as a separate jar in the os x jdk you don t need to add it to the classpath like on windows unix i wrote and tested the following fix if then if then echo java home seems to be no jdk exit fi fi original issue reported on code google com by hughes m gmail com on sep at
| 1
|
204,490
| 15,932,846,549
|
IssuesEvent
|
2021-04-14 06:33:44
|
bisoncorps/waihona
|
https://api.github.com/repos/bisoncorps/waihona
|
opened
|
Write Documentation for library
|
documentation enhancement good first issue help wanted
|
Documentation must explain a few key things
> Examples for using the various api's e.g get_blob, copy_blob, read, e.t.c
> Show import of various traits in order to be able to use trait methods
|
1.0
|
Write Documentation for library - Documentation must explain a few key things
> Examples for using the various api's e.g get_blob, copy_blob, read, e.t.c
> Show import of various traits in order to be able to use trait methods
|
non_defect
|
write documentation for library documentation must explain a few key things examples for using the various api s e g get blob copy blob read e t c show import of various traits in order to be able to use trait methods
| 0
|
45,598
| 12,906,731,506
|
IssuesEvent
|
2020-07-15 02:37:38
|
STEllAR-GROUP/hpx
|
https://api.github.com/repos/STEllAR-GROUP/hpx
|
closed
|
Assertion failure in set_thread_state.hpp line 219
|
category: threadmanager tag: wontfix type: defect
|
## Expected Behavior
Expect Octo-Tiger rotating_star test problem to run without crashing
## Actual Behavior
During the initial grid setup Octo-Tiger crashes with an assertion failure on line 219 of set_thread_state.hpp
## Steps to Reproduce the Problem
Run the rotating_star test problem of Octo-Tiger on one locality. Most of the time the output ends on a seg fault without the actual assertion failure message printing.
## Specifications
HPX: Debug build of 2d244e1e27c6e014189a6cd59c474643b31fad4b
Octo-Tiger: Debug build of 187d2ab44e3888a5d141967b72ccf6c21b3fca1a
Ubuntu 18.04
GCC 7.3
Boost 1.65.1
Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz
|
1.0
|
Assertion failure in set_thread_state.hpp line 219 - ## Expected Behavior
Expect Octo-Tiger rotating_star test problem to run without crashing
## Actual Behavior
During the initial grid setup Octo-Tiger crashes with an assertion failure on line 219 of set_thread_state.hpp
## Steps to Reproduce the Problem
Run the rotating_star test problem of Octo-Tiger on one locality. Most of the time the output ends on a seg fault without the actual assertion failure message printing.
## Specifications
HPX: Debug build of 2d244e1e27c6e014189a6cd59c474643b31fad4b
Octo-Tiger: Debug build of 187d2ab44e3888a5d141967b72ccf6c21b3fca1a
Ubuntu 18.04
GCC 7.3
Boost 1.65.1
Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz
|
defect
|
assertion failure in set thread state hpp line expected behavior expect octo tiger rotating star test problem to run without crashing actual behavior during the initial grid setup octo tiger crashes with an assertion failure on line of set thread state hpp steps to reproduce the problem run the rotating star test problem of octo tiger on one locality most of the time the output ends on a seg fault without the actual assertion failure message printing specifications hpx debug build of octo tiger debug build of ubuntu gcc boost intel r core tm cpu
| 1
|
695,574
| 23,864,570,662
|
IssuesEvent
|
2022-09-07 09:54:02
|
canonical/charming-actions
|
https://api.github.com/repos/canonical/charming-actions
|
closed
|
[upload charm] Upload fails for "already_exists"
|
Type: Bug Priority: High Status: Triage
|
### Bug Description
Errors on upload-charm:
```
Error: Validation Failed: {"resource":"Release","code":"already_exists","field":"tag_name"}
Error: HttpError: Validation Failed: {"resource":"Release","code":"already_exists","field":"tag_name"}
```
### To Reproduce
Rerun this job:
https://github.com/canonical/alertmanager-k8s-operator/runs/7455137256?check_suite_focus=true
### Environment
GH runner.
### Relevant log output
```shell
Run canonical/charming-actions/upload-charm@1.0.0
/usr/bin/sudo snap install charmcraft --classic --channel latest/edge
charmcraft (edge) 2.0.0+6.g7563ce5 from Canonical** installed
/snap/bin/charmcraft pack --destructive-mode --quiet
/usr/bin/docker pull ubuntu/prometheus-alertmanager:0.23-22.04_beta
0.23-22.04_beta: Pulling from ubuntu/prometheus-alertmanager
bdc5a07a7dd2: Pulling fs layer
aa9fa10e633d: Pulling fs layer
e5dd500af9cc: Pulling fs layer
676141a7b9c0: Pulling fs layer
3c2910da3d19: Pulling fs layer
5fdd3cbd15aa: Pulling fs layer
78b2bc159136: Pulling fs layer
3c2910da3d19: Waiting
5fdd3cbd15aa: Waiting
78b2bc159136: Waiting
676141a7b9c0: Waiting
e5dd500af9cc: Verifying Checksum
e5dd500af9cc: Download complete
bdc5a07a7dd2: Verifying Checksum
bdc5a07a7dd2: Download complete
3c2910da3d19: Verifying Checksum
3c2910da3d19: Download complete
676141a7b9c0: Verifying Checksum
676141a7b9c0: Download complete
78b2bc159136: Verifying Checksum
78b2bc159136: Download complete
5fdd3cbd15aa: Verifying Checksum
5fdd3cbd15aa: Download complete
aa9fa10e633d: Verifying Checksum
aa9fa10e633d: Download complete
bdc5a07a7dd2: Pull complete
aa9fa10e633d: Pull complete
e5dd500af9cc: Pull complete
676141a7b9c0: Pull complete
3c2910da3d19: Pull complete
5fdd3cbd15aa: Pull complete
78b2bc159136: Pull complete
Digest: sha256:8f30b70d39053bf979ba646c1444287d71f7ece2342dea518106a6077197cd15
Status: Downloaded newer image for ubuntu/prometheus-alertmanager:0.23-22.04_beta
docker.io/ubuntu/prometheus-alertmanager:0.23-22.04_beta
/snap/bin/charmcraft upload-resource --quiet alertmanager-k8s alertmanager-image --image ubuntu/prometheus-alertmanager:0.23-22.04_beta
/snap/bin/charmcraft resource-revisions alertmanager-k8s alertmanager-image
Revision Created at Size
15 2022-07-21T18:08:50Z 515B
14 2022-07-20T15:27:02Z 515B
13 2022-07-15T11:48:59Z 515B
12 2022-07-06T15:54:06Z 515B
11 2022-07-05T23:27:14Z 515B
10 2022-07-03T20:54:42Z 515B
9 2022-06-30T09:06:59Z 515B
8 2022-06-28T03:41:22Z 515B
7 2022-05-30T21:35:04Z 515B
6 2022-04-28T20:38:02Z 515B
5 2022-03-24T14:04:40Z 515B
4 2022-03-22T15:37:00Z 515B
3 2022-03-17T20:09:33Z 515B
2 2022-03-16T21:47:46Z 515B
1 2021-07-21T18:28:17Z 515B
/snap/bin/charmcraft upload --quiet --release latest/edge /home/runner/work/alertmanager-k8s-operator/alertmanager-k8s-operator/alertmanager-k8s_ubuntu-20.04-amd64.charm --resource=alertmanager-image:15
Error: Validation Failed: {"resource":"Release","code":"already_exists","field":"tag_name"}
Error: HttpError: Validation Failed: {"resource":"Release","code":"already_exists","field":"tag_name"}
at /home/runner/work/_actions/canonical/charming-actions/1.0.0/dist/upload-charm/index.js:7440:21
at processTicksAndRejections (internal/process/task_queues.js:97:5)
Total size of all the files uploaded is 9165 bytes
Finished uploading artifact charmcraft-logs. Reported size is 9165 bytes. There were 0 items that failed to upload
Artifact upload result: {"artifactName":"charmcraft-logs","artifactItems":["/home/runner/snap/charmcraft/common/cache/charmcraft/log/charmcraft-20220721-175635.208482.log","/home/runner/snap/charmcraft/common/cache/charmcraft/log/charmcraft-20220721-180804.853589.log","/home/runner/snap/charmcraft/common/cache/charmcraft/log/charmcraft-20220721-180853.491465.log","/home/runner/snap/charmcraft/common/cache/charmcraft/log/charmcraft-20220721-180855.218908.log"],"size":9165,"failedItems":[]}
```
### Additional context
_No response_
|
1.0
|
[upload charm] Upload fails for "already_exists" - ### Bug Description
Errors on upload-charm:
```
Error: Validation Failed: {"resource":"Release","code":"already_exists","field":"tag_name"}
Error: HttpError: Validation Failed: {"resource":"Release","code":"already_exists","field":"tag_name"}
```
### To Reproduce
Rerun this job:
https://github.com/canonical/alertmanager-k8s-operator/runs/7455137256?check_suite_focus=true
### Environment
GH runner.
### Relevant log output
```shell
Run canonical/charming-actions/upload-charm@1.0.0
/usr/bin/sudo snap install charmcraft --classic --channel latest/edge
charmcraft (edge) 2.0.0+6.g7563ce5 from Canonical** installed
/snap/bin/charmcraft pack --destructive-mode --quiet
/usr/bin/docker pull ubuntu/prometheus-alertmanager:0.23-22.04_beta
0.23-22.04_beta: Pulling from ubuntu/prometheus-alertmanager
bdc5a07a7dd2: Pulling fs layer
aa9fa10e633d: Pulling fs layer
e5dd500af9cc: Pulling fs layer
676141a7b9c0: Pulling fs layer
3c2910da3d19: Pulling fs layer
5fdd3cbd15aa: Pulling fs layer
78b2bc159136: Pulling fs layer
3c2910da3d19: Waiting
5fdd3cbd15aa: Waiting
78b2bc159136: Waiting
676141a7b9c0: Waiting
e5dd500af9cc: Verifying Checksum
e5dd500af9cc: Download complete
bdc5a07a7dd2: Verifying Checksum
bdc5a07a7dd2: Download complete
3c2910da3d19: Verifying Checksum
3c2910da3d19: Download complete
676141a7b9c0: Verifying Checksum
676141a7b9c0: Download complete
78b2bc159136: Verifying Checksum
78b2bc159136: Download complete
5fdd3cbd15aa: Verifying Checksum
5fdd3cbd15aa: Download complete
aa9fa10e633d: Verifying Checksum
aa9fa10e633d: Download complete
bdc5a07a7dd2: Pull complete
aa9fa10e633d: Pull complete
e5dd500af9cc: Pull complete
676141a7b9c0: Pull complete
3c2910da3d19: Pull complete
5fdd3cbd15aa: Pull complete
78b2bc159136: Pull complete
Digest: sha256:8f30b70d39053bf979ba646c1444287d71f7ece2342dea518106a6077197cd15
Status: Downloaded newer image for ubuntu/prometheus-alertmanager:0.23-22.04_beta
docker.io/ubuntu/prometheus-alertmanager:0.23-22.04_beta
/snap/bin/charmcraft upload-resource --quiet alertmanager-k8s alertmanager-image --image ubuntu/prometheus-alertmanager:0.23-22.04_beta
/snap/bin/charmcraft resource-revisions alertmanager-k8s alertmanager-image
Revision Created at Size
15 2022-07-21T18:08:50Z 515B
14 2022-07-20T15:27:02Z 515B
13 2022-07-15T11:48:59Z 515B
12 2022-07-06T15:54:06Z 515B
11 2022-07-05T23:27:14Z 515B
10 2022-07-03T20:54:42Z 515B
9 2022-06-30T09:06:59Z 515B
8 2022-06-28T03:41:22Z 515B
7 2022-05-30T21:35:04Z 515B
6 2022-04-28T20:38:02Z 515B
5 2022-03-24T14:04:40Z 515B
4 2022-03-22T15:37:00Z 515B
3 2022-03-17T20:09:33Z 515B
2 2022-03-16T21:47:46Z 515B
1 2021-07-21T18:28:17Z 515B
/snap/bin/charmcraft upload --quiet --release latest/edge /home/runner/work/alertmanager-k8s-operator/alertmanager-k8s-operator/alertmanager-k8s_ubuntu-20.04-amd64.charm --resource=alertmanager-image:15
Error: Validation Failed: {"resource":"Release","code":"already_exists","field":"tag_name"}
Error: HttpError: Validation Failed: {"resource":"Release","code":"already_exists","field":"tag_name"}
at /home/runner/work/_actions/canonical/charming-actions/1.0.0/dist/upload-charm/index.js:7440:21
at processTicksAndRejections (internal/process/task_queues.js:97:5)
Total size of all the files uploaded is 9165 bytes
Finished uploading artifact charmcraft-logs. Reported size is 9165 bytes. There were 0 items that failed to upload
Artifact upload result: {"artifactName":"charmcraft-logs","artifactItems":["/home/runner/snap/charmcraft/common/cache/charmcraft/log/charmcraft-20220721-175635.208482.log","/home/runner/snap/charmcraft/common/cache/charmcraft/log/charmcraft-20220721-180804.853589.log","/home/runner/snap/charmcraft/common/cache/charmcraft/log/charmcraft-20220721-180853.491465.log","/home/runner/snap/charmcraft/common/cache/charmcraft/log/charmcraft-20220721-180855.218908.log"],"size":9165,"failedItems":[]}
```
### Additional context
_No response_
|
non_defect
|
upload fails for already exists bug description errors on upload charm error validation failed resource release code already exists field tag name error httperror validation failed resource release code already exists field tag name to reproduce rerun this job environment gh runner relevant log output shell run canonical charming actions upload charm usr bin sudo snap install charmcraft classic channel latest edge charmcraft edge from canonical installed snap bin charmcraft pack destructive mode quiet usr bin docker pull ubuntu prometheus alertmanager beta beta pulling from ubuntu prometheus alertmanager pulling fs layer pulling fs layer pulling fs layer pulling fs layer pulling fs layer pulling fs layer pulling fs layer waiting waiting waiting waiting verifying checksum download complete verifying checksum download complete verifying checksum download complete verifying checksum download complete verifying checksum download complete verifying checksum download complete verifying checksum download complete pull complete pull complete pull complete pull complete pull complete pull complete pull complete digest status downloaded newer image for ubuntu prometheus alertmanager beta docker io ubuntu prometheus alertmanager beta snap bin charmcraft upload resource quiet alertmanager alertmanager image image ubuntu prometheus alertmanager beta snap bin charmcraft resource revisions alertmanager alertmanager image revision created at size snap bin charmcraft upload quiet release latest edge home runner work alertmanager operator alertmanager operator alertmanager ubuntu charm resource alertmanager image error validation failed resource release code already exists field tag name error httperror validation failed resource release code already exists field tag name at home runner work actions canonical charming actions dist upload charm index js at processticksandrejections internal process task queues js total size of all the files uploaded is bytes finished uploading artifact charmcraft logs reported size is bytes there were items that failed to upload artifact upload result artifactname charmcraft logs artifactitems size faileditems additional context no response
| 0
|
23,069
| 3,756,128,568
|
IssuesEvent
|
2016-03-13 04:20:38
|
StarsOCV/eve-tf2hud
|
https://api.github.com/repos/StarsOCV/eve-tf2hud
|
closed
|
wont connect
|
auto-migrated OpSys-Windows Priority-Medium Type-Defect
|
```
Can you describe the problem?
the updater wont connect to my HUD and i cannot get the custom crosshairs
What's your installation path? D:\Steam\steamapps\common\Team Fortress 2
Where do you run the Updater from? i run it as an administrator from the
HUD Updater application
what system do you use? i use Windows 8 64 Bit
Please provide any additional information below.
```
Original issue reported on code.google.com by `abdullah...@gmail.com` on 16 Jan 2015 at 5:34
Attachments:
* [Untitled.png](https://storage.googleapis.com/google-code-attachments/eve-tf2hud/issue-120/comment-0/Untitled.png)
|
1.0
|
wont connect - ```
Can you describe the problem?
the updater wont connect to my HUD and i cannot get the custom crosshairs
What's your installation path? D:\Steam\steamapps\common\Team Fortress 2
Where do you run the Updater from? i run it as an administrator from the
HUD Updater application
what system do you use? i use Windows 8 64 Bit
Please provide any additional information below.
```
Original issue reported on code.google.com by `abdullah...@gmail.com` on 16 Jan 2015 at 5:34
Attachments:
* [Untitled.png](https://storage.googleapis.com/google-code-attachments/eve-tf2hud/issue-120/comment-0/Untitled.png)
|
defect
|
wont connect can you describe the problem the updater wont connect to my hud and i cannot get the custom crosshairs what s your installation path d steam steamapps common team fortress where do you run the updater from i run it as an administrator from the hud updater application what system do you use i use windows bit please provide any additional information below original issue reported on code google com by abdullah gmail com on jan at attachments
| 1
|
57,320
| 15,730,272,085
|
IssuesEvent
|
2021-03-29 15:43:58
|
danmar/testissues
|
https://api.github.com/repos/danmar/testissues
|
opened
|
Invalid multi-line comment produces cryptic internal error (Trac #306)
|
Incomplete Migration Migrated from Trac Other aggro80 defect
|
Migrated from https://trac.cppcheck.net/ticket/306
```json
{
"status": "closed",
"changetime": "2009-05-13T19:24:30",
"description": "Invalid multi-line comment produces an error which is not hugely informative:\n\n{{{### Error: Invalid number of character}}} \n\nIt would be nice if the error produced would be more like what {{{gcc -Wall}}} produces: \n\n{{{warning: multi-line comment}}}\n\nOffensive code snippet below:\n\n{{{\n#define PODFAIL(x) \\\n{ \\\n\t\t//printf(\"dtvpod| %s() %d |error| 0x%x \\r\\n\",__FUNCTION__,__LINE__,(x));\\\n\t\tL_ERROR(\"dtvpod| %s() %d |error| 0x%x \\r\\n\",__FUNCTION__,__LINE__,(x));\\\n} \n}}}\n",
"reporter": "skukkonen",
"cc": "",
"resolution": "fixed",
"_ts": "1242242670000000",
"component": "Other",
"summary": "Invalid multi-line comment produces cryptic internal error",
"priority": "",
"keywords": "",
"time": "2009-05-12T20:31:27",
"milestone": "1.33",
"owner": "aggro80",
"type": "defect"
}
```
|
1.0
|
Invalid multi-line comment produces cryptic internal error (Trac #306) - Migrated from https://trac.cppcheck.net/ticket/306
```json
{
"status": "closed",
"changetime": "2009-05-13T19:24:30",
"description": "Invalid multi-line comment produces an error which is not hugely informative:\n\n{{{### Error: Invalid number of character}}} \n\nIt would be nice if the error produced would be more like what {{{gcc -Wall}}} produces: \n\n{{{warning: multi-line comment}}}\n\nOffensive code snippet below:\n\n{{{\n#define PODFAIL(x) \\\n{ \\\n\t\t//printf(\"dtvpod| %s() %d |error| 0x%x \\r\\n\",__FUNCTION__,__LINE__,(x));\\\n\t\tL_ERROR(\"dtvpod| %s() %d |error| 0x%x \\r\\n\",__FUNCTION__,__LINE__,(x));\\\n} \n}}}\n",
"reporter": "skukkonen",
"cc": "",
"resolution": "fixed",
"_ts": "1242242670000000",
"component": "Other",
"summary": "Invalid multi-line comment produces cryptic internal error",
"priority": "",
"keywords": "",
"time": "2009-05-12T20:31:27",
"milestone": "1.33",
"owner": "aggro80",
"type": "defect"
}
```
|
defect
|
invalid multi line comment produces cryptic internal error trac migrated from json status closed changetime description invalid multi line comment produces an error which is not hugely informative n n error invalid number of character n nit would be nice if the error produced would be more like what gcc wall produces n n warning multi line comment n noffensive code snippet below n n n define podfail x n n t t printf dtvpod s d error x r n function line x n t tl error dtvpod s d error x r n function line x n n n reporter skukkonen cc resolution fixed ts component other summary invalid multi line comment produces cryptic internal error priority keywords time milestone owner type defect
| 1
|
70,659
| 23,279,752,682
|
IssuesEvent
|
2022-08-05 10:49:13
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
closed
|
matrix.to link for already joined room sends you to matrix.to only on Desktop
|
T-Defect
|
### Steps to reproduce
Only applies to Element Desktop. The same link works as expected on Element Web.
1. Share a matrix.to link to a room by ID. Be sure that you are already member of this room.
2. Click the link
### Outcome
#### What did you expect?
Go to the room on click
#### What happened instead?
- Send to matrix.to
- Click „Continue“ there
- Send back to the room in Element Desktop
### Operating system
Ubuntu 22.04 LTS
### Application version
Element version: 1.11.1 Olm version: 3.2.8
### How did you install the app?
deb
### Homeserver
_No response_
### Will you send logs?
No
|
1.0
|
matrix.to link for already joined room sends you to matrix.to only on Desktop - ### Steps to reproduce
Only applies to Element Desktop. The same link works as expected on Element Web.
1. Share a matrix.to link to a room by ID. Be sure that you are already member of this room.
2. Click the link
### Outcome
#### What did you expect?
Go to the room on click
#### What happened instead?
- Send to matrix.to
- Click „Continue“ there
- Send back to the room in Element Desktop
### Operating system
Ubuntu 22.04 LTS
### Application version
Element version: 1.11.1 Olm version: 3.2.8
### How did you install the app?
deb
### Homeserver
_No response_
### Will you send logs?
No
|
defect
|
matrix to link for already joined room sends you to matrix to only on desktop steps to reproduce only applies to element desktop the same link works as expected on element web share a matrix to link to a room by id be sure that you are already member of this room click the link outcome what did you expect go to the room on click what happened instead send to matrix to click „continue“ there send back to the room in element desktop operating system ubuntu lts application version element version olm version how did you install the app deb homeserver no response will you send logs no
| 1
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.