Unnamed: 0 int64 1 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 3 438 | labels stringlengths 4 308 | body stringlengths 7 254k | index stringclasses 7 values | text_combine stringlengths 96 254k | label stringclasses 2 values | text stringlengths 96 246k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2,586 | 8,793,842,774 | IssuesEvent | 2018-12-21 21:47:48 | facebookresearch/Detectron | https://api.github.com/repos/facebookresearch/Detectron | opened | FYI: Static content URLs will be changing soon (no more https://s3-us-west-2.amazonaws.com/detectron) | Maintainer-FYI-not-an-actual-issues | All static content URLs starting with
`https://s3-us-west-2.amazonaws.com/detectron`
will be changing to
`https://dl.fbaipublicfiles.com/detectron`
Soon after applying the change to this repo, we will permanently remove all downloads from `https://s3-us-west-2.amazonaws.com/detectron`. You will need to make the corresponding URL changes in your code if you have a fork that has diverged from this repo. | True | FYI: Static content URLs will be changing soon (no more https://s3-us-west-2.amazonaws.com/detectron) - All static content URLs starting with
`https://s3-us-west-2.amazonaws.com/detectron`
will be changing to
`https://dl.fbaipublicfiles.com/detectron`
Soon after applying the change to this repo, we will permanently remove all downloads from `https://s3-us-west-2.amazonaws.com/detectron`. You will need to make the corresponding URL changes in your code if you have a fork that has diverged from this repo. | main | fyi static content urls will be changing soon no more all static content urls starting with will be changing to soon after applying the change to this repo we will permanently remove all downloads from you will need to make the corresponding url changes in your code if you have a fork that has diverged from this repo | 1 |
673,795 | 23,031,539,799 | IssuesEvent | 2022-07-22 14:21:50 | containerbase/buildpack | https://api.github.com/repos/containerbase/buildpack | closed | Support erlang < 24 | type:bug priority-2-important status:ready | ```
> [2/3] RUN install-tool erlang:
#11 5.840 Setting up libltdl7:amd64 (2.4.6-14) ...
#11 5.845 Setting up libsctp1:amd64 (1.0.18+dfsg-1) ...
#11 5.855 Setting up libodbc1:amd64 (2.3.6-0.1build1) ...
#11 5.860 Processing triggers for libc-bin (2.31-0ubuntu9.7) ...
#11 12.18 linking tool erlang v22.3.4.25
#11 12.20 /opt/buildpack/tools/erlang/22.3.4.25/bin/erl: 29: exec: /usr/local/erlang/22.3.4.25/erts-10.7.2.17/bin/erlexec: not found
#11 12.20
#11 12.20 real 0m12.120s
#11 12.20 user 0m8.357s
#11 12.20 sys 0m1.717s
```
Elixir 1.7+ is still supported and needs erlang 19-22
https://hexdocs.pm/elixir/1.12.3/compatibility-and-deprecations.html#compatibility-between-elixir-and-erlang-otp | 1.0 | Support erlang < 24 - ```
> [2/3] RUN install-tool erlang:
#11 5.840 Setting up libltdl7:amd64 (2.4.6-14) ...
#11 5.845 Setting up libsctp1:amd64 (1.0.18+dfsg-1) ...
#11 5.855 Setting up libodbc1:amd64 (2.3.6-0.1build1) ...
#11 5.860 Processing triggers for libc-bin (2.31-0ubuntu9.7) ...
#11 12.18 linking tool erlang v22.3.4.25
#11 12.20 /opt/buildpack/tools/erlang/22.3.4.25/bin/erl: 29: exec: /usr/local/erlang/22.3.4.25/erts-10.7.2.17/bin/erlexec: not found
#11 12.20
#11 12.20 real 0m12.120s
#11 12.20 user 0m8.357s
#11 12.20 sys 0m1.717s
```
Elixir 1.7+ is still supported and needs erlang 19-22
https://hexdocs.pm/elixir/1.12.3/compatibility-and-deprecations.html#compatibility-between-elixir-and-erlang-otp | non_main | support erlang run install tool erlang setting up setting up dfsg setting up processing triggers for libc bin linking tool erlang opt buildpack tools erlang bin erl exec usr local erlang erts bin erlexec not found real user sys elixir is still supported and needs erlang | 0 |
4,416 | 22,716,892,618 | IssuesEvent | 2022-07-06 03:33:24 | usefulmove/comp | https://api.github.com/repos/usefulmove/comp | closed | Implement Interpreter::is_user_function() using std::option | good first issue maintainability | It could be cleaner and would be more idiomatic to implement the `Interpreter::is_user_function()` method using the `Option` return type from the [`std::option`](https://doc.rust-lang.org/std/option/) module. | True | Implement Interpreter::is_user_function() using std::option - It could be cleaner and would be more idiomatic to implement the `Interpreter::is_user_function()` method using the `Option` return type from the [`std::option`](https://doc.rust-lang.org/std/option/) module. | main | implement interpreter is user function using std option it could be cleaner and would be more idiomatic to implement the interpreter is user function method using the option return type from the module | 1 |
4,286 | 21,560,996,475 | IssuesEvent | 2022-05-01 06:34:43 | BioArchLinux/Packages | https://api.github.com/repos/BioArchLinux/Packages | closed | [MAINTAIN] r-cancerinsilico | maintain | <!--
Please report the error of one package in one issue! Use multi issues to report multi bugs.
Thanks!
-->
**Log of the bug**
<details>
```
g++ -std=gnu++14 -I"/usr/include/R/" -DNDEBUG -I'/usr/lib/R/library/Rcpp/include' -I'/usr/lib/R/library/BH/include' -I/usr/local/include -fpic -march=x86-64 -mtune=generic -O2 -pipe -fno-plt -fexceptions -Wp,-D_FORTIFY_SOURCE=2 -Wformat -Werror=format-security -fstack-clash-protection -fcf-protection -Wp,-D_GLIBCXX_ASSERTIONS -flto=auto -c Tests/Core/test-SquareLattice.cpp -o Tests/Core/test-SquareLattice.o
In file included from /usr/include/signal.h:328,
from Tests/catch.h:6450,
from test-runner.cpp:2:
Tests/catch.h:6473:33: error: size of array ‘altStackMem’ is not an integral constant-expression
6473 | static char altStackMem[SIGSTKSZ];
| ^~~~~~~~
Tests/catch.h:6524:45: error: size of array ‘altStackMem’ is not an integral constant-expression
6524 | char FatalConditionHandler::altStackMem[SIGSTKSZ] = {};
| ^~~~~~~~
make: *** [/usr/lib64/R/etc/Makeconf:177: test-runner.o] Error 1
ERROR: compilation failed for package ‘CancerInSilico’
* removing ‘/build/r-cancerinsilico/src/CancerInSilico’
* restoring previous ‘/build/r-cancerinsilico/src/CancerInSilico’
```
</details>
**Packages (please complete the following information):**
- Package Name: [e.g. iqtree]
**Description**
Add any other context about the problem here.
| True | [MAINTAIN] r-cancerinsilico - <!--
Please report the error of one package in one issue! Use multi issues to report multi bugs.
Thanks!
-->
**Log of the bug**
<details>
```
g++ -std=gnu++14 -I"/usr/include/R/" -DNDEBUG -I'/usr/lib/R/library/Rcpp/include' -I'/usr/lib/R/library/BH/include' -I/usr/local/include -fpic -march=x86-64 -mtune=generic -O2 -pipe -fno-plt -fexceptions -Wp,-D_FORTIFY_SOURCE=2 -Wformat -Werror=format-security -fstack-clash-protection -fcf-protection -Wp,-D_GLIBCXX_ASSERTIONS -flto=auto -c Tests/Core/test-SquareLattice.cpp -o Tests/Core/test-SquareLattice.o
In file included from /usr/include/signal.h:328,
from Tests/catch.h:6450,
from test-runner.cpp:2:
Tests/catch.h:6473:33: error: size of array ‘altStackMem’ is not an integral constant-expression
6473 | static char altStackMem[SIGSTKSZ];
| ^~~~~~~~
Tests/catch.h:6524:45: error: size of array ‘altStackMem’ is not an integral constant-expression
6524 | char FatalConditionHandler::altStackMem[SIGSTKSZ] = {};
| ^~~~~~~~
make: *** [/usr/lib64/R/etc/Makeconf:177: test-runner.o] Error 1
ERROR: compilation failed for package ‘CancerInSilico’
* removing ‘/build/r-cancerinsilico/src/CancerInSilico’
* restoring previous ‘/build/r-cancerinsilico/src/CancerInSilico’
```
</details>
**Packages (please complete the following information):**
- Package Name: [e.g. iqtree]
**Description**
Add any other context about the problem here.
| main | r cancerinsilico please report the error of one package in one issue use multi issues to report multi bugs thanks log of the bug g std gnu i usr include r dndebug i usr lib r library rcpp include i usr lib r library bh include i usr local include fpic march mtune generic pipe fno plt fexceptions wp d fortify source wformat werror format security fstack clash protection fcf protection wp d glibcxx assertions flto auto c tests core test squarelattice cpp o tests core test squarelattice o in file included from usr include signal h from tests catch h from test runner cpp tests catch h error size of array ‘altstackmem’ is not an integral constant expression static char altstackmem tests catch h error size of array ‘altstackmem’ is not an integral constant expression char fatalconditionhandler altstackmem make error error compilation failed for package ‘cancerinsilico’ removing ‘ build r cancerinsilico src cancerinsilico’ restoring previous ‘ build r cancerinsilico src cancerinsilico’ packages please complete the following information package name description add any other context about the problem here | 1 |
5,580 | 27,958,345,405 | IssuesEvent | 2023-03-24 13:59:38 | software-mansion/react-native-reanimated | https://api.github.com/repos/software-mansion/react-native-reanimated | closed | ☂️ Deadlock/ANR in performOperations | Platform: Android Platform: iOS Bug Maintainer issue | ### Description
This is an umbrella issue for ANRs/deadlocks on Android/iOS in NodesManager.performOperations.
The bug was introduced in #1215.
Android:
- #2251
- #3062
iOS:
- #3180
- #3862
- #3946
PRs trying to solve this issue:
- #3082
- #3194
### Repro
We don't have a repro yet but it needs to use modal or datetime picker as well as animate layout props using Reanimated.
### Reanimated version
\>= 2.0.0, >= 3.0.0
### Platforms
Android, iOS | True | ☂️ Deadlock/ANR in performOperations - ### Description
This is an umbrella issue for ANRs/deadlocks on Android/iOS in NodesManager.performOperations.
The bug was introduced in #1215.
Android:
- #2251
- #3062
iOS:
- #3180
- #3862
- #3946
PRs trying to solve this issue:
- #3082
- #3194
### Repro
We don't have a repro yet but it needs to use modal or datetime picker as well as animate layout props using Reanimated.
### Reanimated version
\>= 2.0.0, >= 3.0.0
### Platforms
Android, iOS | main | ☂️ deadlock anr in performoperations description this is an umbrella issue for anrs deadlocks on android ios in nodesmanager performoperations the bug was introduced in android ios prs trying to solve this issue repro we don t have a repro yet but it needs to use modal or datetime picker as well as animate layout props using reanimated reanimated version platforms android ios | 1 |
715,714 | 24,607,525,308 | IssuesEvent | 2022-10-14 17:43:36 | duckduckgo/p5-app-duckpan | https://api.github.com/repos/duckduckgo/p5-app-duckpan | closed | Add support for new 404 Fallback in DDG::Rewrite | Improvement Priority: Medium | See https://github.com/duckduckgo/duckduckgo/pull/157 for details.
We need to check if `error_fallback` is defined, and if so, create and wrap an appropriate response in Web.pm.
/cc @zachthompson
| 1.0 | Add support for new 404 Fallback in DDG::Rewrite - See https://github.com/duckduckgo/duckduckgo/pull/157 for details.
We need to check if `error_fallback` is defined, and if so, create and wrap an appropriate response in Web.pm.
/cc @zachthompson
| non_main | add support for new fallback in ddg rewrite see for details we need to check if error fallback is defined and if so create and wrap an appropriate response in web pm cc zachthompson | 0 |
5,122 | 26,111,106,561 | IssuesEvent | 2022-12-27 20:03:10 | microsoft/DirectXTex | https://api.github.com/repos/microsoft/DirectXTex | closed | dec2022 causes _ITERATOR_DEBUG_LEVEL errors on ninja builds | maintainence | Using `dec2022`, this will break non `MSVC` builds with an `_ITERATOR_DEBUG_LEVEL` error.
```
lld-link: error: /failifmismatch: mismatch detected for '_ITERATOR_DEBUG_LEVEL':
>>> XXXX.lib(XXXX.cxx.obj) has value 0
>>> DirectXTex.lib(DirectXTexCompress.cpp.obj) has value 2
clang++: error: linker command failed with exit code 1 (use -v to see invocation)
```
Got this error using `ninja` and VS 17.4.3 using the `clang` toolchain
This change to `CMakeList.txt` did trigger it:
```
foreach(t IN LISTS TOOL_EXES ITEMS ${PROJECT_NAME})
target_compile_definitions(${t} PRIVATE $<IF:$<CONFIG:DEBUG>,_DEBUG,NDEBUG>)
endforeach()
```
| True | dec2022 causes _ITERATOR_DEBUG_LEVEL errors on ninja builds - Using `dec2022`, this will break non `MSVC` builds with an `_ITERATOR_DEBUG_LEVEL` error.
```
lld-link: error: /failifmismatch: mismatch detected for '_ITERATOR_DEBUG_LEVEL':
>>> XXXX.lib(XXXX.cxx.obj) has value 0
>>> DirectXTex.lib(DirectXTexCompress.cpp.obj) has value 2
clang++: error: linker command failed with exit code 1 (use -v to see invocation)
```
Got this error using `ninja` and VS 17.4.3 using the `clang` toolchain
This change to `CMakeList.txt` did trigger it:
```
foreach(t IN LISTS TOOL_EXES ITEMS ${PROJECT_NAME})
target_compile_definitions(${t} PRIVATE $<IF:$<CONFIG:DEBUG>,_DEBUG,NDEBUG>)
endforeach()
```
| main | causes iterator debug level errors on ninja builds using this will break non msvc builds with an iterator debug level error lld link error failifmismatch mismatch detected for iterator debug level xxxx lib xxxx cxx obj has value directxtex lib directxtexcompress cpp obj has value clang error linker command failed with exit code use v to see invocation got this error using ninja and vs using the clang toolchain this change to cmakelist txt did trigger it foreach t in lists tool exes items project name target compile definitions t private debug ndebug endforeach | 1 |
9,116 | 2,615,132,188 | IssuesEvent | 2015-03-01 06:02:21 | chrsmith/google-api-java-client | https://api.github.com/repos/chrsmith/google-api-java-client | closed | NoSuchMethodError com.google.api.client.util.Lists.newArrayList | auto-migrated Priority-Medium Type-Defect | ```
Version of google-api-java-client (e.g. 1.5.0-beta)?
1.14.1-beta
Java environment (e.g. Java 6, Android 2.3, App Engine)?
App Engine SDK 1.7.2
Describe the problem.
When deployed I see this error. Works fine on local dev version.
I have added google-http-client-1.14.1-beta.jar (and other dependencies) to the
lib folder and added to build path.
java.lang.NoSuchMethodError:
com.google.api.client.util.Lists.newArrayList(Ljava/lang/Iterable;)Ljava/util/Ar
rayList;
How would you expect it to be fixed?
```
Original issue reported on code.google.com by `dam...@aeromac.com` on 10 Apr 2013 at 1:44 | 1.0 | NoSuchMethodError com.google.api.client.util.Lists.newArrayList - ```
Version of google-api-java-client (e.g. 1.5.0-beta)?
1.14.1-beta
Java environment (e.g. Java 6, Android 2.3, App Engine)?
App Engine SDK 1.7.2
Describe the problem.
When deployed I see this error. Works fine on local dev version.
I have added google-http-client-1.14.1-beta.jar (and other dependencies) to the
lib folder and added to build path.
java.lang.NoSuchMethodError:
com.google.api.client.util.Lists.newArrayList(Ljava/lang/Iterable;)Ljava/util/Ar
rayList;
How would you expect it to be fixed?
```
Original issue reported on code.google.com by `dam...@aeromac.com` on 10 Apr 2013 at 1:44 | non_main | nosuchmethoderror com google api client util lists newarraylist version of google api java client e g beta beta java environment e g java android app engine app engine sdk describe the problem when deployed i see this error works fine on local dev version i have added google http client beta jar and other dependencies to the lib folder and added to build path java lang nosuchmethoderror com google api client util lists newarraylist ljava lang iterable ljava util ar raylist how would you expect it to be fixed original issue reported on code google com by dam aeromac com on apr at | 0 |
782,290 | 27,492,503,322 | IssuesEvent | 2023-03-04 19:55:06 | concretecms/concretecms | https://api.github.com/repos/concretecms/concretecms | closed | Tabs of areas may overlap layout column tabs (8.4.0RC1) | Type:Bug Status:Available Bug Priority:Low Product Areas:In-Page Editing | Let's say i have a layout with only one column, located at the bottom of the main area. Now when i want to hover the label (tab) of that column in order to have the context menu for editing the layout appearing, this is not possible, because the label (tab) of the layout column is immediately getting overlapped by the area's label as soon as the mouse pointer is hovering. This is because when moving the pointer to the bottom of the column it also reaches the bottom of the area, which makes the area's label appear instead of the column's label.
So in this special case there is no way to edit that layout because both labels/tabs (area and column) are located exactly on the same place.
A solution would be some offset between the two. Or, why not have all labels of layout columns aligned right, while keeping the area labels left, as they are now?


| 1.0 | Tabs of areas may overlap layout column tabs (8.4.0RC1) - Let's say i have a layout with only one column, located at the bottom of the main area. Now when i want to hover the label (tab) of that column in order to have the context menu for editing the layout appearing, this is not possible, because the label (tab) of the layout column is immediately getting overlapped by the area's label as soon as the mouse pointer is hovering. This is because when moving the pointer to the bottom of the column it also reaches the bottom of the area, which makes the area's label appear instead of the column's label.
So in this special case there is no way to edit that layout because both labels/tabs (area and column) are located exactly on the same place.
A solution would be some offset between the two. Or, why not have all labels of layout columns aligned right, while keeping the area labels left, as they are now?


| non_main | tabs of areas may overlap layout column tabs let s say i have a layout with only one column located at the bottom of the main area now when i want to hover the label tab of that column in order to have the context menu for editing the layout appearing this is not possible because the label tab of the layout column is immediately getting overlapped by the area s label as soon as the mouse pointer is hovering this is because when moving the pointer to the bottom of the column it also reaches the bottom of the area which makes the area s label appear instead of the column s label so in this special case there is no way to edit that layout because both labels tabs area and column are located exactly on the same place a solution would be some offset between the two or why not have all labels of layout columns aligned right while keeping the area labels left as they are now | 0 |
5,563 | 27,825,392,462 | IssuesEvent | 2023-03-19 17:59:01 | p-j-smith/lipyphilic | https://api.github.com/repos/p-j-smith/lipyphilic | closed | Use the new MDAnalysis Results class to store analysis results | maintainance | **Is your feature request related to a problem? Please describe.**
MDAnalysis has a new `Results` class that is used to store the output each of the analysis classes. This common API is designed to make it easier for the [MDA CLI](https://github.com/PicoCentauri/mda_cli) to extract the results of an analysis.
**Describe the solution you'd like**
Update all `lipyphilic` analysis classes to store results in a `.results` attribute using the `Results` class.
**Describe alternatives you've considered**
Don't update to the new `Results` class. However, if we update, it looks like it would be relatively straightforward to piggyback on the MDA CLI to create one for `lipyphilic`.
**Additional context**
See https://github.com/MDAnalysis/mdanalysis/pull/3261
| True | Use the new MDAnalysis Results class to store analysis results - **Is your feature request related to a problem? Please describe.**
MDAnalysis has a new `Results` class that is used to store the output each of the analysis classes. This common API is designed to make it easier for the [MDA CLI](https://github.com/PicoCentauri/mda_cli) to extract the results of an analysis.
**Describe the solution you'd like**
Update all `lipyphilic` analysis classes to store results in a `.results` attribute using the `Results` class.
**Describe alternatives you've considered**
Don't update to the new `Results` class. However, if we update, it looks like it would be relatively straightforward to piggyback on the MDA CLI to create one for `lipyphilic`.
**Additional context**
See https://github.com/MDAnalysis/mdanalysis/pull/3261
| main | use the new mdanalysis results class to store analysis results is your feature request related to a problem please describe mdanalysis has a new results class that is used to store the output each of the analysis classes this common api is designed to make it easier for the to extract the results of an analysis describe the solution you d like update all lipyphilic analysis classes to store results in a results attribute using the results class describe alternatives you ve considered don t update to the new results class however if we update it looks like it would be relatively straightforward to piggyback on the mda cli to create one for lipyphilic additional context see | 1 |
5,591 | 28,013,926,080 | IssuesEvent | 2023-03-27 20:51:36 | centerofci/mathesar-website | https://api.github.com/repos/centerofci/mathesar-website | closed | Create Sponsors page on website | restricted: maintainers status: ready type: enhancement work: frontend | We need to create a "Sponsors" page on the website.
- [x] Add a top level item called "Sponsors" to the nav.
- [x] Remove "Home" from the nav to create space – we don't need it, people can click on the logo to go to the homepage.
- [x] Show current sponsors (in the website tier)
- [x] Thingylabs GmbH, https://www.thingylabs.io/, https://www.thingylabs.io/logo.png
- [ ] Show links for people to sponsor us on GitHub and Open Collective - make these visually interesting.
### Reference
- See this README PR for inspiration https://github.com/centerofci/mathesar/pull/2710 | True | Create Sponsors page on website - We need to create a "Sponsors" page on the website.
- [x] Add a top level item called "Sponsors" to the nav.
- [x] Remove "Home" from the nav to create space – we don't need it, people can click on the logo to go to the homepage.
- [x] Show current sponsors (in the website tier)
- [x] Thingylabs GmbH, https://www.thingylabs.io/, https://www.thingylabs.io/logo.png
- [ ] Show links for people to sponsor us on GitHub and Open Collective - make these visually interesting.
### Reference
- See this README PR for inspiration https://github.com/centerofci/mathesar/pull/2710 | main | create sponsors page on website we need to create a sponsors page on the website add a top level item called sponsors to the nav remove home from the nav to create space – we don t need it people can click on the logo to go to the homepage show current sponsors in the website tier thingylabs gmbh show links for people to sponsor us on github and open collective make these visually interesting reference see this readme pr for inspiration | 1 |
1,206 | 5,146,081,372 | IssuesEvent | 2017-01-12 23:38:59 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Failure while using htpasswd module | affects_2.2 bug_report waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
htpasswd
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
Default
##### OS / ENVIRONMENT
ArchLinux
##### SUMMARY
htpasswd module fails with message: `invalid version number '1.7.0.post20161124160753`
Looks like it's related to `python2-passlib` package (installed from archlinux repository).
##### STEPS TO REPRODUCE
Using a role with a task like below
```
htpasswd:
path=/etc/app/auth/htpasswd
name=someuser
crypt_scheme=bcrypt
password={{ password }}
owner=root
mode=0640
```
##### EXPECTED RESULTS
User entry added to htpasswd file.
##### ACTUAL RESULTS
Task failure.
<!--- Paste verbatim command output between quotes below -->
```
fatal: [host]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"backup": null,
"content": null,
"create": true,
"crypt_scheme": "bcrypt",
"delimiter": null,
"directory_mode": null,
"follow": false,
"force": null,
"group": null,
"mode": "0640",
"name": "someuser",
"owner": "root",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"path": "/etc/app/auth/htpasswd",
"regexp": null,
"remote_src": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": null,
"state": "present",
"unsafe_writes": null
},
"module_name": "htpasswd"
},
"msg": "invalid version number '1.7.0.post20161124160753'"
}
```
| True | Failure while using htpasswd module - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
htpasswd
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
Default
##### OS / ENVIRONMENT
ArchLinux
##### SUMMARY
htpasswd module fails with message: `invalid version number '1.7.0.post20161124160753`
Looks like it's related to `python2-passlib` package (installed from archlinux repository).
##### STEPS TO REPRODUCE
Using a role with a task like below
```
htpasswd:
path=/etc/app/auth/htpasswd
name=someuser
crypt_scheme=bcrypt
password={{ password }}
owner=root
mode=0640
```
##### EXPECTED RESULTS
User entry added to htpasswd file.
##### ACTUAL RESULTS
Task failure.
<!--- Paste verbatim command output between quotes below -->
```
fatal: [host]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"backup": null,
"content": null,
"create": true,
"crypt_scheme": "bcrypt",
"delimiter": null,
"directory_mode": null,
"follow": false,
"force": null,
"group": null,
"mode": "0640",
"name": "someuser",
"owner": "root",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"path": "/etc/app/auth/htpasswd",
"regexp": null,
"remote_src": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": null,
"state": "present",
"unsafe_writes": null
},
"module_name": "htpasswd"
},
"msg": "invalid version number '1.7.0.post20161124160753'"
}
```
| main | failure while using htpasswd module issue type bug report component name htpasswd ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration default os environment archlinux summary htpasswd module fails with message invalid version number looks like it s related to passlib package installed from archlinux repository steps to reproduce using a role with a task like below htpasswd path etc app auth htpasswd name someuser crypt scheme bcrypt password password owner root mode expected results user entry added to htpasswd file actual results task failure fatal failed changed false failed true invocation module args backup null content null create true crypt scheme bcrypt delimiter null directory mode null follow false force null group null mode name someuser owner root password value specified in no log parameter path etc app auth htpasswd regexp null remote src null selevel null serole null setype null seuser null src null state present unsafe writes null module name htpasswd msg invalid version number | 1 |
3,285 | 12,541,383,596 | IssuesEvent | 2020-06-05 12:14:47 | laminas/laminas-servicemanager | https://api.github.com/repos/laminas/laminas-servicemanager | closed | remove redundant isset(s) | Awaiting Maintainer Response Enhancement | when we are also testing with `! empty()` I believe we can skip `isset()` checks, as the execution speed is nearly the same (but it will double for positive isset)
---
Originally posted by @pine3ree at https://github.com/zendframework/zend-servicemanager/pull/262 | True | remove redundant isset(s) - when we are also testing with `! empty()` I believe we can skip `isset()` checks, as the execution speed is nearly the same (but it will double for positive isset)
---
Originally posted by @pine3ree at https://github.com/zendframework/zend-servicemanager/pull/262 | main | remove redundant isset s when we are also testing with empty i believe we can skip isset checks as the execution speed is nearly the same but it will double for positive isset originally posted by at | 1 |
1,192 | 5,104,027,914 | IssuesEvent | 2017-01-04 23:23:27 | tgstation/tgstation | https://api.github.com/repos/tgstation/tgstation | closed | Maps use var edited variations of the same turfs to change their appearance | Maintainability/Hinders improvements Not a bug | Maps use var edited variations of the same turfs to change their appearance instead of the proper turf types.
Huge maintainability issue.
| True | Maps use var edited variations of the same turfs to change their appearance - Maps use var edited variations of the same turfs to change their appearance instead of the proper turf types.
Huge maintainability issue.
| main | maps use var edited variations of the same turfs to change their appearance maps use var edited variations of the same turfs to change their appearance instead of the proper turf types huge maintainability issue | 1 |
4,306 | 21,699,125,248 | IssuesEvent | 2022-05-10 00:40:25 | BioArchLinux/Packages | https://api.github.com/repos/BioArchLinux/Packages | closed | [MAINTAIN] r-affypara | maintain | <!--
Please report the error of one package in one issue! Use multi issues to report multi bugs.
Thanks!
-->
**Log of the bug**
<details>
```
* installing *source* package ‘affyPara’ ...
** using staged installation
** R
** byte-compile and prepare package for lazy loading
Warning message:
no DISPLAY variable so Tk is not available
** help
*** installing help indices
** building package indices
** installing vignettes
** testing if installed package can be loaded from temporary location
Warning: no DISPLAY variable so Tk is not available
Error: package or namespace load failed for ‘affyPara’:
.onLoad failed in loadNamespace() for 'affyPara', details:
call: assign(".affyParaInternalEnv", .affyParaInternalEnv, envir = topenv(parent.frame()))
error: cannot add binding of '.affyParaInternalEnv' to the base environment
Error: loading failed
Execution halted
ERROR: loading failed
```
</details>
**Packages (please complete the following information):**
- Package Name: r-affypara
**Description**
Add any other context about the problem here.
| True | [MAINTAIN] r-affypara - <!--
Please report the error of one package in one issue! Use multi issues to report multi bugs.
Thanks!
-->
**Log of the bug**
<details>
```
* installing *source* package ‘affyPara’ ...
** using staged installation
** R
** byte-compile and prepare package for lazy loading
Warning message:
no DISPLAY variable so Tk is not available
** help
*** installing help indices
** building package indices
** installing vignettes
** testing if installed package can be loaded from temporary location
Warning: no DISPLAY variable so Tk is not available
Error: package or namespace load failed for ‘affyPara’:
.onLoad failed in loadNamespace() for 'affyPara', details:
call: assign(".affyParaInternalEnv", .affyParaInternalEnv, envir = topenv(parent.frame()))
error: cannot add binding of '.affyParaInternalEnv' to the base environment
Error: loading failed
Execution halted
ERROR: loading failed
```
</details>
**Packages (please complete the following information):**
- Package Name: r-affypara
**Description**
Add any other context about the problem here.
| main | r affypara please report the error of one package in one issue use multi issues to report multi bugs thanks log of the bug installing source package ‘affypara’ using staged installation r byte compile and prepare package for lazy loading warning message no display variable so tk is not available help installing help indices building package indices installing vignettes testing if installed package can be loaded from temporary location warning no display variable so tk is not available error package or namespace load failed for ‘affypara’ onload failed in loadnamespace for affypara details call assign affyparainternalenv affyparainternalenv envir topenv parent frame error cannot add binding of affyparainternalenv to the base environment error loading failed execution halted error loading failed packages please complete the following information package name r affypara description add any other context about the problem here | 1 |
365 | 3,343,870,404 | IssuesEvent | 2015-11-15 20:58:48 | jenkinsci/slack-plugin | https://api.github.com/repos/jenkinsci/slack-plugin | opened | Release slack plugin 1.8.1 | maintainer communication | This issue is to track progress of releasing slack plugin 1.8.1.
TODO:
- [ ] Configure your credentials in `~/.m2/settings.xml`. (outlined in [making a
new release][plugin-release] doc)
- [ ] Create a new issue to track the release and give it the label `maintainer communication`.
- [ ] Create a release branch. `git checkout origin/master -b prepare_release`
- [ ] Update the release notes in `CHANGELOG.md`.
- [ ] Open a pull request from `prepare_release` branch to `master` branch. Merge it.
- [ ] Fetch the latest `master`.
- [ ] Execute the release plugin.
```
mvn org.apache.maven.plugins:maven-release-plugin:2.5:prepare org.apache.maven.plugins:maven-release-plugin:2.5:perform
```
I pin which version of the release plugin to use because of the working around common issues section of the [release document][plugin-release].
[plugin-release]: https://wiki.jenkins-ci.org/display/JENKINS/Hosting+Plugins
| True | Release slack plugin 1.8.1 - This issue is to track progress of releasing slack plugin 1.8.1.
TODO:
- [ ] Configure your credentials in `~/.m2/settings.xml`. (outlined in [making a
new release][plugin-release] doc)
- [ ] Create a new issue to track the release and give it the label `maintainer communication`.
- [ ] Create a release branch. `git checkout origin/master -b prepare_release`
- [ ] Update the release notes in `CHANGELOG.md`.
- [ ] Open a pull request from `prepare_release` branch to `master` branch. Merge it.
- [ ] Fetch the latest `master`.
- [ ] Execute the release plugin.
```
mvn org.apache.maven.plugins:maven-release-plugin:2.5:prepare org.apache.maven.plugins:maven-release-plugin:2.5:perform
```
I pin which version of the release plugin to use because of the working around common issues section of the [release document][plugin-release].
[plugin-release]: https://wiki.jenkins-ci.org/display/JENKINS/Hosting+Plugins
| main | release slack plugin this issue is to track progress of releasing slack plugin todo configure your credentials in settings xml outlined in making a new release doc create a new issue to track the release and give it the label maintainer communication create a release branch git checkout origin master b prepare release update the release notes in changelog md open a pull request from prepare release branch to master branch merge it fetch the latest master execute the release plugin mvn org apache maven plugins maven release plugin prepare org apache maven plugins maven release plugin perform i pin which version of the release plugin to use because of the working around common issues section of the | 1 |
2,957 | 10,616,404,401 | IssuesEvent | 2019-10-12 11:27:39 | arcticicestudio/snowsaw | https://api.github.com/repos/arcticicestudio/snowsaw | closed | Prevent `go.mod` file pollution with development dependencies | context-workflow scope-compatibility scope-maintainability scope-stability type-improvement | Currently when installing development dependencies through _mage_, the `go.mod` file will be updated to include the installed packages since this the default behavior of the `go get` command when running in _module_ mode.
To prevent the pollution of the project's Go module definition the _module_ mode should be disabled when installing the dev/build packages.
This is a necessary workaround until the Go toolchain is able to install packages globally without updating the module file when the `go get` command is run from within the project root directory.
See https://github.com/golang/go/issues/30515 for more details and proposed solutions that might be added to Go's build tools in future versions.
| True | Prevent `go.mod` file pollution with development dependencies - Currently when installing development dependencies through _mage_, the `go.mod` file will be updated to include the installed packages since this the default behavior of the `go get` command when running in _module_ mode.
To prevent the pollution of the project's Go module definition the _module_ mode should be disabled when installing the dev/build packages.
This is a necessary workaround until the Go toolchain is able to install packages globally without updating the module file when the `go get` command is run from within the project root directory.
See https://github.com/golang/go/issues/30515 for more details and proposed solutions that might be added to Go's build tools in future versions.
| main | prevent go mod file pollution with development dependencies currently when installing development dependencies through mage the go mod file will be updated to include the installed packages since this the default behavior of the go get command when running in module mode to prevent the pollution of the project s go module definition the module mode should be disabled when installing the dev build packages this is a necessary workaround until the go toolchain is able to install packages globally without updating the module file when the go get command is run from within the project root directory see for more details and proposed solutions that might be added to go s build tools in future versions | 1 |
539,776 | 15,794,738,232 | IssuesEvent | 2021-04-02 11:40:38 | Edgeryders-Participio/realities | https://api.github.com/repos/Edgeryders-Participio/realities | closed | Add simple authentication on an instance level | Priority: 3 (later) api ui | An instance admin should be able to configure so that only someone with an account can view and edit the instance
A much simpler version of https://github.com/Edgeryders-Participio/realities/issues/182 (we should still do that one, but this one has a bigger effect since it makes it easier for external orgs to try it out and feel safe) | 1.0 | Add simple authentication on an instance level - An instance admin should be able to configure so that only someone with an account can view and edit the instance
A much simpler version of https://github.com/Edgeryders-Participio/realities/issues/182 (we should still do that one, but this one has a bigger effect since it makes it easier for external orgs to try it out and feel safe) | non_main | add simple authentication on an instance level an instance admin should be able to configure so that only someone with an account can view and edit the instance a much simpler version of we should still do that one but this one has a bigger effect since it makes it easier for external orgs to try it out and feel safe | 0 |
96,279 | 19,978,216,004 | IssuesEvent | 2022-01-29 13:01:57 | Gdsc-Lbce/GDSC_LBCE_Website | https://api.github.com/repos/Gdsc-Lbce/GDSC_LBCE_Website | closed | Create an Error 404 page for the website | enhancement Winter Of Code 2.0 | Hey @ashish-ad , I feel we should add an error 404 page for the website. I you feel the issue is beneficial for the project, please Assign this issue to me under WoC 2.0. Thank You. | 1.0 | Create an Error 404 page for the website - Hey @ashish-ad , I feel we should add an error 404 page for the website. I you feel the issue is beneficial for the project, please Assign this issue to me under WoC 2.0. Thank You. | non_main | create an error page for the website hey ashish ad i feel we should add an error page for the website i you feel the issue is beneficial for the project please assign this issue to me under woc thank you | 0 |
121,089 | 12,104,215,158 | IssuesEvent | 2020-04-20 19:48:21 | UGS-GIO/geochron | https://api.github.com/repos/UGS-GIO/geochron | opened | Create database design/schema document | documentation | Product Manager (Steve B. & Gordon) and Martha need to set up database together.
Things to include in document:
- [ ] Database design
- [ ] Database name
- [ ] Relates/Joins
- [ ] Field names
- [ ] Field types
- [ ] Domains (if any)
- [ ] Permissions
- [ ] Update schedule
Get final sign-off by Martha & post document to github.
| 1.0 | Create database design/schema document - Product Manager (Steve B. & Gordon) and Martha need to set up database together.
Things to include in document:
- [ ] Database design
- [ ] Database name
- [ ] Relates/Joins
- [ ] Field names
- [ ] Field types
- [ ] Domains (if any)
- [ ] Permissions
- [ ] Update schedule
Get final sign-off by Martha & post document to github.
| non_main | create database design schema document product manager steve b gordon and martha need to set up database together things to include in document database design database name relates joins field names field types domains if any permissions update schedule get final sign off by martha post document to github | 0 |
5,801 | 30,719,754,656 | IssuesEvent | 2023-07-27 15:09:41 | ipfs/helia | https://api.github.com/repos/ipfs/helia | closed | Efficient directory import | dif/easy effort/hours kind/enhancement need/maintainers-input | The js-IPFS [glob source](https://github.com/ipfs/js-ipfs-utils/blob/master/src/files/glob-source.js) should be split out of js-ipfs-utils and published for use with Helia. | True | Efficient directory import - The js-IPFS [glob source](https://github.com/ipfs/js-ipfs-utils/blob/master/src/files/glob-source.js) should be split out of js-ipfs-utils and published for use with Helia. | main | efficient directory import the js ipfs should be split out of js ipfs utils and published for use with helia | 1 |
1,343 | 5,721,601,970 | IssuesEvent | 2017-04-20 07:10:53 | tomchentw/react-google-maps | https://api.github.com/repos/tomchentw/react-google-maps | closed | Release new version on NPM | CALL_FOR_MAINTAINERS | The latest version in NPM is 4.11.0, which seems to be a more than a few versions behind.
@tomchentw could you publish the latest stable version to NPM?
I believe either v5.1.1 or v6.0.1 is the latest stable version. | True | Release new version on NPM - The latest version in NPM is 4.11.0, which seems to be a more than a few versions behind.
@tomchentw could you publish the latest stable version to NPM?
I believe either v5.1.1 or v6.0.1 is the latest stable version. | main | release new version on npm the latest version in npm is which seems to be a more than a few versions behind tomchentw could you publish the latest stable version to npm i believe either or is the latest stable version | 1 |
3,131 | 12,015,058,247 | IssuesEvent | 2020-04-10 13:07:01 | MDAnalysis/mdanalysis | https://api.github.com/repos/MDAnalysis/mdanalysis | closed | MAINT: unreachable code path in align.py | Component-Analysis maintainability | There's an unreachable code path in `analysis/align.py`. For details, see #2598 and in particular the extended discussion in #2652, concerning `get_matching_atoms` strict diagnostics (approximate lines: 1298-1323). I'll paste the block below. There are a few AST parsers that can help catch these things sometimes. NumPy uses [`vulture`](https://github.com/jendrikseipp/vulture) to catch dead code paths in CI for example, though I think this one somehow sneaks past vulture because I checked last night.
```python
if np.any(mismatch_mask):
if strict:
# diagnostics
mismatch_resindex = np.arange(ag1.n_residues)[mismatch_mask]
def log_mismatch(
number,
ag,
rsize,
mismatch_resindex=mismatch_resindex):
logger.error("Offending residues: group {0}: {1}".format(
number,
", ".join(["{0[0]}{0[1]} ({0[2]})".format(r) for r in
zip(ag.resnames[mismatch_resindex],
ag.resids[mismatch_resindex],
rsize[mismatch_resindex]
)])))
logger.error("Found {0} residues with non-matching numbers of atoms (#)".format(
mismatch_mask.sum()))
log_mismatch(1, ag1, rsize1)
log_mismatch(2, ag2, rsize2)
errmsg = ("Different number of atoms in some residues. "
"(Use strict=False to attempt using matching atoms only.)")
logger.error(errmsg)
raise SelectionError(errmsg)
``` | True | MAINT: unreachable code path in align.py - There's an unreachable code path in `analysis/align.py`. For details, see #2598 and in particular the extended discussion in #2652, concerning `get_matching_atoms` strict diagnostics (approximate lines: 1298-1323). I'll paste the block below. There are a few AST parsers that can help catch these things sometimes. NumPy uses [`vulture`](https://github.com/jendrikseipp/vulture) to catch dead code paths in CI for example, though I think this one somehow sneaks past vulture because I checked last night.
```python
if np.any(mismatch_mask):
if strict:
# diagnostics
mismatch_resindex = np.arange(ag1.n_residues)[mismatch_mask]
def log_mismatch(
number,
ag,
rsize,
mismatch_resindex=mismatch_resindex):
logger.error("Offending residues: group {0}: {1}".format(
number,
", ".join(["{0[0]}{0[1]} ({0[2]})".format(r) for r in
zip(ag.resnames[mismatch_resindex],
ag.resids[mismatch_resindex],
rsize[mismatch_resindex]
)])))
logger.error("Found {0} residues with non-matching numbers of atoms (#)".format(
mismatch_mask.sum()))
log_mismatch(1, ag1, rsize1)
log_mismatch(2, ag2, rsize2)
errmsg = ("Different number of atoms in some residues. "
"(Use strict=False to attempt using matching atoms only.)")
logger.error(errmsg)
raise SelectionError(errmsg)
``` | main | maint unreachable code path in align py there s an unreachable code path in analysis align py for details see and in particular the extended discussion in concerning get matching atoms strict diagnostics approximate lines i ll paste the block below there are a few ast parsers that can help catch these things sometimes numpy uses to catch dead code paths in ci for example though i think this one somehow sneaks past vulture because i checked last night python if np any mismatch mask if strict diagnostics mismatch resindex np arange n residues def log mismatch number ag rsize mismatch resindex mismatch resindex logger error offending residues group format number join format r for r in zip ag resnames ag resids rsize logger error found residues with non matching numbers of atoms format mismatch mask sum log mismatch log mismatch errmsg different number of atoms in some residues use strict false to attempt using matching atoms only logger error errmsg raise selectionerror errmsg | 1 |
111,469 | 14,100,179,210 | IssuesEvent | 2020-11-06 03:25:40 | jaypeasee/overlook-hotel | https://api.github.com/repos/jaypeasee/overlook-hotel | closed | Display available rooms for Guest | design/accessibility new feature | ### As a user, when I pick a date in the calendar on the nav:
1. I should be able to see all available bookings for that day. If there are none.
1. I should be able to pick an available room.
1. I should see the main section title update.
1. If there are no availabilities, the Main section should show that.
1. If I pick a date in the past, the Main section should show that. | 1.0 | Display available rooms for Guest - ### As a user, when I pick a date in the calendar on the nav:
1. I should be able to see all available bookings for that day. If there are none.
1. I should be able to pick an available room.
1. I should see the main section title update.
1. If there are no availabilities, the Main section should show that.
1. If I pick a date in the past, the Main section should show that. | non_main | display available rooms for guest as a user when i pick a date in the calendar on the nav i should be able to see all available bookings for that day if there are none i should be able to pick an available room i should see the main section title update if there are no availabilities the main section should show that if i pick a date in the past the main section should show that | 0 |
4,913 | 25,259,633,578 | IssuesEvent | 2022-11-15 21:26:07 | mozilla/foundation.mozilla.org | https://api.github.com/repos/mozilla/foundation.mozilla.org | closed | Format Python code with the `black` formatter | engineering Maintain | # Description
Reading inconsistently formatted code requires more mental energy and can even be confusing.
Formatting code consistently can be laborious.
Checking code formatting manually is even worse.
Discussions about code formatting are a time sink.
Linter can check formatting rules and formatters can format code automatically.
`black` is an opinionated, PEP8 compliant code formatter and linter for Python.
We can set up `black` to check and format all our Python code consistently.
# Acceptance criteria
- [x] `black` is added as a dependency and configured in the repo.
- [x] There is an `inv` command to check the code base for formatting issues.
- [x] There is an `inv` command to fix the formatting issues in the code base.
- [x] CI checks that all Python code in the code base is correctly formatted with `black`.
- [x] All Python code is formatted with `black`. | True | Format Python code with the `black` formatter - # Description
Reading inconsistently formatted code requires more mental energy and can even be confusing.
Formatting code consistently can be laborious.
Checking code formatting manually is even worse.
Discussions about code formatting are a time sink.
Linter can check formatting rules and formatters can format code automatically.
`black` is an opinionated, PEP8 compliant code formatter and linter for Python.
We can set up `black` to check and format all our Python code consistently.
# Acceptance criteria
- [x] `black` is added as a dependency and configured in the repo.
- [x] There is an `inv` command to check the code base for formatting issues.
- [x] There is an `inv` command to fix the formatting issues in the code base.
- [x] CI checks that all Python code in the code base is correctly formatted with `black`.
- [x] All Python code is formatted with `black`. | main | format python code with the black formatter description reading inconsistently formatted code requires more mental energy and can even be confusing formatting code consistently can be laborious checking code formatting manually is even worse discussions about code formatting are a time sink linter can check formatting rules and formatters can format code automatically black is an opinionated compliant code formatter and linter for python we can set up black to check and format all our python code consistently acceptance criteria black is added as a dependency and configured in the repo there is an inv command to check the code base for formatting issues there is an inv command to fix the formatting issues in the code base ci checks that all python code in the code base is correctly formatted with black all python code is formatted with black | 1 |
1,724 | 6,574,505,992 | IssuesEvent | 2017-09-11 13:08:36 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | yum list installed doesn't show source repo | affects_2.1 bug_report waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- yum
##### ANSIBLE VERSION
```
ansible 2.1.2.0
config file = /home/ak/ansible/webservers/ansible.cfg
configured module search path = Default w/o overrides
```
##### OS / ENVIRONMENT
Ansible Host: CentOS7
Managed: CentOS7
##### SUMMARY
no repo name is provided for yum list=installed
##### STEPS TO REPRODUCE
```
tasks:
- name: yum list
yum: list=installed
register: output
- name: show all
debug: "msg={{ output.results }}"
```
##### EXPECTED RESULTS
```
},
{
"arch": "noarch",
"epoch": "0",
"name": "yum-utils",
"nevra": "0:yum-utils-1.1.31-34.el7.noarch",
"release": "34.el7",
"repo": "@base",
"version": "1.1.31",
"yumstate": "installed"
},
{
"arch": "x86_64",
"epoch": "0",
"name": "zlib",
"nevra": "0:zlib-1.2.7-15.el7.x86_64",
"release": "15.el7",
"repo": "@anaconda",
"version": "1.2.7",
"yumstate": "installed"
}
```
##### ACTUAL RESULTS
```
},
{
"arch": "noarch",
"epoch": "0",
"name": "yum-utils",
"nevra": "0:yum-utils-1.1.31-34.el7.noarch",
"release": "34.el7",
"repo": "installed",
"version": "1.1.31",
"yumstate": "installed"
},
{
"arch": "x86_64",
"epoch": "0",
"name": "zlib",
"nevra": "0:zlib-1.2.7-15.el7.x86_64",
"release": "15.el7",
"repo": "installed",
"version": "1.2.7",
"yumstate": "installed"
}
```
Also: compare with `yum list installed` in command-line:
```
yum-utils.noarch 1.1.31-34.el7 @base
zlib.x86_64 1.2.7-15.el7 @anaconda
```
| True | yum list installed doesn't show source repo - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- yum
##### ANSIBLE VERSION
```
ansible 2.1.2.0
config file = /home/ak/ansible/webservers/ansible.cfg
configured module search path = Default w/o overrides
```
##### OS / ENVIRONMENT
Ansible Host: CentOS7
Managed: CentOS7
##### SUMMARY
no repo name is provided for yum list=installed
##### STEPS TO REPRODUCE
```
tasks:
- name: yum list
yum: list=installed
register: output
- name: show all
debug: "msg={{ output.results }}"
```
##### EXPECTED RESULTS
```
},
{
"arch": "noarch",
"epoch": "0",
"name": "yum-utils",
"nevra": "0:yum-utils-1.1.31-34.el7.noarch",
"release": "34.el7",
"repo": "@base",
"version": "1.1.31",
"yumstate": "installed"
},
{
"arch": "x86_64",
"epoch": "0",
"name": "zlib",
"nevra": "0:zlib-1.2.7-15.el7.x86_64",
"release": "15.el7",
"repo": "@anaconda",
"version": "1.2.7",
"yumstate": "installed"
}
```
##### ACTUAL RESULTS
```
},
{
"arch": "noarch",
"epoch": "0",
"name": "yum-utils",
"nevra": "0:yum-utils-1.1.31-34.el7.noarch",
"release": "34.el7",
"repo": "installed",
"version": "1.1.31",
"yumstate": "installed"
},
{
"arch": "x86_64",
"epoch": "0",
"name": "zlib",
"nevra": "0:zlib-1.2.7-15.el7.x86_64",
"release": "15.el7",
"repo": "installed",
"version": "1.2.7",
"yumstate": "installed"
}
```
Also: compare with `yum list installed` in command-line:
```
yum-utils.noarch 1.1.31-34.el7 @base
zlib.x86_64 1.2.7-15.el7 @anaconda
```
| main | yum list installed doesn t show source repo issue type bug report component name yum ansible version ansible config file home ak ansible webservers ansible cfg configured module search path default w o overrides os environment ansible host managed summary no repo name is provided for yum list installed steps to reproduce tasks name yum list yum list installed register output name show all debug msg output results expected results arch noarch epoch name yum utils nevra yum utils noarch release repo base version yumstate installed arch epoch name zlib nevra zlib release repo anaconda version yumstate installed actual results arch noarch epoch name yum utils nevra yum utils noarch release repo installed version yumstate installed arch epoch name zlib nevra zlib release repo installed version yumstate installed also compare with yum list installed in command line yum utils noarch base zlib anaconda | 1 |
381,899 | 11,297,505,099 | IssuesEvent | 2020-01-17 06:15:56 | wso2/product-is | https://api.github.com/repos/wso2/product-is | opened | Wrong behavior on SCIM2 delete a user from the read only Ldap. | Affected/5.10.0-Alpha2 Component/SCIM Priority/High Severity/Major | When an admin user attempt to delete a user who is not existing the unique id read only Ldap user store, gets 404 error. This behavior is acceptable if we have write access to the user store.
Therefore this scenario behavior should change and response should be 406 since the delete methods is not allowed in read only user store.
**Request:** (user is invalid)
curl -v -k --user admin:admin -X DELETE https://localhost:9443/scim2/Users/2acbee42-3561-4ae6-bf89-23fc43da6353 -H "Accept: application/scim+json"
**Response:**
curl -v -k --user admin:admin -X DELETE https://localhost:9443/scim2/Users/b228b59d-db19-4064-b637-d33c31209fae -H "Accept: application/scim+json"
{"schemas":["urn:ietf:params:scim:api:messages:2.0:Error"],"detail":"Specified resource (e.g., User) or endpoint does not exist.","status":"404"}
| 1.0 | Wrong behavior on SCIM2 delete a user from the read only Ldap. - When an admin user attempt to delete a user who is not existing the unique id read only Ldap user store, gets 404 error. This behavior is acceptable if we have write access to the user store.
Therefore this scenario behavior should change and response should be 406 since the delete methods is not allowed in read only user store.
**Request:** (user is invalid)
curl -v -k --user admin:admin -X DELETE https://localhost:9443/scim2/Users/2acbee42-3561-4ae6-bf89-23fc43da6353 -H "Accept: application/scim+json"
**Response:**
curl -v -k --user admin:admin -X DELETE https://localhost:9443/scim2/Users/b228b59d-db19-4064-b637-d33c31209fae -H "Accept: application/scim+json"
{"schemas":["urn:ietf:params:scim:api:messages:2.0:Error"],"detail":"Specified resource (e.g., User) or endpoint does not exist.","status":"404"}
| non_main | wrong behavior on delete a user from the read only ldap when an admin user attempt to delete a user who is not existing the unique id read only ldap user store gets error this behavior is acceptable if we have write access to the user store therefore this scenario behavior should change and response should be since the delete methods is not allowed in read only user store request user is invalid curl v k user admin admin x delete h accept application scim json response curl v k user admin admin x delete h accept application scim json schemas detail specified resource e g user or endpoint does not exist status | 0 |
1,645 | 6,572,668,802 | IssuesEvent | 2017-09-11 04:15:13 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | apt module: Pass "--no-download" to apt-get | affects_2.0 feature_idea waiting_on_maintainer | ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
apt
##### ANSIBLE VERSION
```
ansible 2.0.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
##### OS / ENVIRONMENT
Ubuntu nodes
##### SUMMARY
Would it be possible to allow the "--no-download" option to be passed to apt-get?
##### STEPS TO REPRODUCE
For applications where the nodes do not have access to the Internet, i.e. large commercial, it would be useful to download the deb packages first, copy them to /var/apt/cache on each node and then run apt-get install --no-download.
Creating an internal Ubuntu mirror would be prohibitive because of the file transfer required (several hundreds of GB).
This can currently be accomplished using a shell command, but would be more elegant using the apt module.
##### EXPECTED RESULTS
##### ACTUAL RESULTS
| True | apt module: Pass "--no-download" to apt-get - ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
apt
##### ANSIBLE VERSION
```
ansible 2.0.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
##### OS / ENVIRONMENT
Ubuntu nodes
##### SUMMARY
Would it be possible to allow the "--no-download" option to be passed to apt-get?
##### STEPS TO REPRODUCE
For applications where the nodes do not have access to the Internet, i.e. large commercial, it would be useful to download the deb packages first, copy them to /var/apt/cache on each node and then run apt-get install --no-download.
Creating an internal Ubuntu mirror would be prohibitive because of the file transfer required (several hundreds of GB).
This can currently be accomplished using a shell command, but would be more elegant using the apt module.
##### EXPECTED RESULTS
##### ACTUAL RESULTS
| main | apt module pass no download to apt get issue type feature idea component name apt ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration os environment ubuntu nodes summary would it be possible to allow the no download option to be passed to apt get steps to reproduce for applications where the nodes do not have access to the internet i e large commercial it would be useful to download the deb packages first copy them to var apt cache on each node and then run apt get install no download creating an internal ubuntu mirror would be prohibitive because of the file transfer required several hundreds of gb this can currently be accomplished using a shell command but would be more elegant using the apt module expected results actual results | 1 |
421,774 | 12,261,238,003 | IssuesEvent | 2020-05-06 19:43:47 | grpc/grpc | https://api.github.com/repos/grpc/grpc | closed | When a keepalive ping is triggered, its watchdog is always fired | disposition/stale kind/bug lang/c++ priority/P2 | ### What version of gRPC and what language are you using?
We're using grpc 1.19.0, in C++.
### What operating system (Linux, Windows,...) and version?
We're using it on various versions of Windows. More specifically, i do reproduce it on Windows 10.
### What runtime / compiler are you using (e.g. python version or version of gcc)
MSVC Toolchain v141 (Visual Studio 2017)
### What did you do?
If possible, provide a recipe for reproducing the error. Try being specific and include code snippets if helpful.
The simplest reproduction case is a client attempting to join two different servers. One of the server is not launched, meaning we expect the calls toward this server to fail. This is relevant because we did not reproduce when only attempting to connect to the one server that is up.
both server adresses are local to the client (localhost).
We set the GRPC_ARG_KEEPALIVE_TIME_MS to 10s.
GRPC_ARG_KEEPALIVE_TIMEOUT_MS to 5s, On both the client and the server.
So, after 10 second of inactivity, each of them initate a ping toward the other.
At the beginning of the scenario, we make quick calls to both server. The first server successfully processes the call, and the second call, with the server being down, returns in error.
The problem arises when we wait 15 seconds after this call. Then the client logs a GRPC error
Please not that line offsets might be off, due to adding some additional logging in grpc files.
> (chttp2_transport.cc:2782) ipv4:127.0.0.1:10300: Keepalive watchdog fired. Closing transport.
Which is unexpected, given that the server is up and running.
When we traces packets using Wireshark, and coupling it with full GRPC trace, we can notice that:
* This is actually the first instance of the keepalive timer starting. Each other ping we see before have been triggered by the BDP ping.
* In the case of the Keepalive ping, we observe that the client does not react to the server ping.
* We do also observe that the server reacts to the client ping, send a PING ACK, but that is obviously not handled by the client.
Even though Wireshark capture shows a PING being sent, there is no trace of
> (tcp_windows.cc:189) TCP:000001D2CD40C610 on_read
in the client, that leads me to thinking that for whatever reason it stops to even try to read on the socket at some point ? This is where my knowledge of GRPC internals stops.
### What did you expect to see?
No watchdog fired when the server is up and running
### What did you see instead?
Watchdog firing every time.
### Anything else we should know about your project / environment?
[watchdog.log](https://github.com/grpc/grpc/files/3417669/watchdog.log)
127.0.0.1:10300 is the Server that is running, 127.0.0.1:10333 is the server that is down.
The watchdog fires at 10:56:07, the ping is sent at 10:56:02. At this point we should see a READ on :
* The ACK of the client's ping.
* The ping from the server
but we don't see any of those. The only activity we see is at 10:56:05, when we attempt to reconnect to 10333. | 1.0 | When a keepalive ping is triggered, its watchdog is always fired - ### What version of gRPC and what language are you using?
We're using grpc 1.19.0, in C++.
### What operating system (Linux, Windows,...) and version?
We're using it on various versions of Windows. More specifically, i do reproduce it on Windows 10.
### What runtime / compiler are you using (e.g. python version or version of gcc)
MSVC Toolchain v141 (Visual Studio 2017)
### What did you do?
If possible, provide a recipe for reproducing the error. Try being specific and include code snippets if helpful.
The simplest reproduction case is a client attempting to join two different servers. One of the server is not launched, meaning we expect the calls toward this server to fail. This is relevant because we did not reproduce when only attempting to connect to the one server that is up.
both server adresses are local to the client (localhost).
We set the GRPC_ARG_KEEPALIVE_TIME_MS to 10s.
GRPC_ARG_KEEPALIVE_TIMEOUT_MS to 5s, On both the client and the server.
So, after 10 second of inactivity, each of them initate a ping toward the other.
At the beginning of the scenario, we make quick calls to both server. The first server successfully processes the call, and the second call, with the server being down, returns in error.
The problem arises when we wait 15 seconds after this call. Then the client logs a GRPC error
Please not that line offsets might be off, due to adding some additional logging in grpc files.
> (chttp2_transport.cc:2782) ipv4:127.0.0.1:10300: Keepalive watchdog fired. Closing transport.
Which is unexpected, given that the server is up and running.
When we traces packets using Wireshark, and coupling it with full GRPC trace, we can notice that:
* This is actually the first instance of the keepalive timer starting. Each other ping we see before have been triggered by the BDP ping.
* In the case of the Keepalive ping, we observe that the client does not react to the server ping.
* We do also observe that the server reacts to the client ping, send a PING ACK, but that is obviously not handled by the client.
Even though Wireshark capture shows a PING being sent, there is no trace of
> (tcp_windows.cc:189) TCP:000001D2CD40C610 on_read
in the client, that leads me to thinking that for whatever reason it stops to even try to read on the socket at some point ? This is where my knowledge of GRPC internals stops.
### What did you expect to see?
No watchdog fired when the server is up and running
### What did you see instead?
Watchdog firing every time.
### Anything else we should know about your project / environment?
[watchdog.log](https://github.com/grpc/grpc/files/3417669/watchdog.log)
127.0.0.1:10300 is the Server that is running, 127.0.0.1:10333 is the server that is down.
The watchdog fires at 10:56:07, the ping is sent at 10:56:02. At this point we should see a READ on :
* The ACK of the client's ping.
* The ping from the server
but we don't see any of those. The only activity we see is at 10:56:05, when we attempt to reconnect to 10333. | non_main | when a keepalive ping is triggered its watchdog is always fired what version of grpc and what language are you using we re using grpc in c what operating system linux windows and version we re using it on various versions of windows more specifically i do reproduce it on windows what runtime compiler are you using e g python version or version of gcc msvc toolchain visual studio what did you do if possible provide a recipe for reproducing the error try being specific and include code snippets if helpful the simplest reproduction case is a client attempting to join two different servers one of the server is not launched meaning we expect the calls toward this server to fail this is relevant because we did not reproduce when only attempting to connect to the one server that is up both server adresses are local to the client localhost we set the grpc arg keepalive time ms to grpc arg keepalive timeout ms to on both the client and the server so after second of inactivity each of them initate a ping toward the other at the beginning of the scenario we make quick calls to both server the first server successfully processes the call and the second call with the server being down returns in error the problem arises when we wait seconds after this call then the client logs a grpc error please not that line offsets might be off due to adding some additional logging in grpc files transport cc keepalive watchdog fired closing transport which is unexpected given that the server is up and running when we traces packets using wireshark and coupling it with full grpc trace we can notice that this is actually the first instance of the keepalive timer starting each other ping we see before have been triggered by the bdp ping in the case of the keepalive ping we observe that the client does not react to the server ping we do also observe that the server reacts to the client ping send a ping ack but that is obviously not handled by the client even though wireshark capture shows a ping being sent there is no trace of tcp windows cc tcp on read in the client that leads me to thinking that for whatever reason it stops to even try to read on the socket at some point this is where my knowledge of grpc internals stops what did you expect to see no watchdog fired when the server is up and running what did you see instead watchdog firing every time anything else we should know about your project environment is the server that is running is the server that is down the watchdog fires at the ping is sent at at this point we should see a read on the ack of the client s ping the ping from the server but we don t see any of those the only activity we see is at when we attempt to reconnect to | 0 |
4,935 | 25,359,681,267 | IssuesEvent | 2022-11-20 18:56:30 | deislabs/spiderlightning | https://api.github.com/repos/deislabs/spiderlightning | closed | Install script fails on macOS | 🐛 bug 🚧 maintainer issue | **Description of the bug**
When running `/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/deislabs/spiderlightning/main/install.sh)"` the final step fails with `install: ./release/slight: No such file or directory`
**To Reproduce**
Execute `/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/deislabs/spiderlightning/main/install.sh)"` on macOS 13
**Additional context**
The `slight` file is created on the current working dir and not on `/usr/local/bin` as intended.
| True | Install script fails on macOS - **Description of the bug**
When running `/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/deislabs/spiderlightning/main/install.sh)"` the final step fails with `install: ./release/slight: No such file or directory`
**To Reproduce**
Execute `/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/deislabs/spiderlightning/main/install.sh)"` on macOS 13
**Additional context**
The `slight` file is created on the current working dir and not on `/usr/local/bin` as intended.
| main | install script fails on macos description of the bug when running bin bash c curl fssl the final step fails with install release slight no such file or directory to reproduce execute bin bash c curl fssl on macos additional context the slight file is created on the current working dir and not on usr local bin as intended | 1 |
820 | 4,442,284,669 | IssuesEvent | 2016-08-19 12:55:33 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Unarchive fails on MacOS: "Unexpected error when accessing exploded file: [Errno 2]" | bug_report waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
Unarchive
##### ANSIBLE VERSION
Tested with three versions (devel is using up-to-date checkout of modules-core)
```
ansible 2.2.0 (devel 947877dcce) last updated 2016/08/11 11:34:15 (GMT -400)
lib/ansible/modules/core: (devel 23ebb98570) last updated 2016/08/11 11:46:27 (GMT -400)
lib/ansible/modules/extras: (detached HEAD 39153ea154) last updated 2016/08/10 23:59:44 (GMT -400)
config file =
configured module search path = Default w/o overrides
```
Also:
```
ansible 2.1.1.0
config file =
configured module search path = Default w/o overrides
```
*Works correctly with:*
```
ansible 2.1.0.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
default config
##### OS / ENVIRONMENT
Mac OS X 10.11.6
##### SUMMARY
This seems to be Mac specific. (tested on/against 10.11.6)
Unarchive fails on Mac OS X when playbook is run locally or when targeting a remote Mac. Included playbook works correctly under Ansible v2.1.0.0.
Errors are identical running remote or locally.
The local playbook works correctly when run on Ubuntu 14.04.4 LTS from the most recent Git checkout. The command also succeeds when targeting an Ubuntu box.
Various `copy` and `remote_src` options had no effect.
##### STEPS TO REPRODUCE
Run the following playbook on a Mac (all my available test machines are running 10.11.6)
```
---
- hosts: localhost
become: no
connection: local
tasks:
- name: download archive
get_url:
url: https://github.com/ideasonpurpose/basic-wordpress-vagrant/archive/master.zip
# url: https://wordpress.org/latest.zip
dest: /tmp/
force: yes
register: archive
- name: Unpack downloaded archive
unarchive:
src: '{{ archive.dest }}'
dest: /tmp/
# copy: no
remote_src: no
list_files: yes
```
##### EXPECTED RESULTS
Zip archive is uncompressed
##### ACTUAL RESULTS
Module failure. Output is from latest checkout.
```
[WARNING]: Host file not found: /etc/ansible/hosts
[WARNING]: provided hosts list is empty, only localhost is available
PLAYBOOK: ansible-local-test.yaml **********************************************
1 plays in /Users/joe/Desktop/ansible-local-test.yaml
PLAY [localhost] ***************************************************************
TASK [setup] *******************************************************************
Using module file /Users/joe/ansible-dev/lib/ansible/modules/core/system/setup.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: joe
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1470935578.63-176735032320360 `" && echo ansible-tmp-1470935578.63-176735032320360="` echo $HOME/.ansible/tmp/ansible-tmp-1470935578.63-176735032320360 `" ) && sleep 0'
<127.0.0.1> PUT /var/folders/f2/lskp80tm8xj5230059b53f580000gp/T/tmp3C9ALR TO /Users/joe/.ansible/tmp/ansible-tmp-1470935578.63-176735032320360/setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/joe/.ansible/tmp/ansible-tmp-1470935578.63-176735032320360/ /Users/joe/.ansible/tmp/ansible-tmp-1470935578.63-176735032320360/setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/local/opt/python/bin/python2.7 /Users/joe/.ansible/tmp/ansible-tmp-1470935578.63-176735032320360/setup.py; rm -rf "/Users/joe/.ansible/tmp/ansible-tmp-1470935578.63-176735032320360/" > /dev/null 2>&1 && sleep 0'
ok: [localhost]
TASK [download archive] ********************************************************
task path: /Users/joe/Desktop/ansible-local-test.yaml:9
Using module file /Users/joe/ansible-dev/lib/ansible/modules/core/network/basics/get_url.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: joe
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1470935579.12-83816721022550 `" && echo ansible-tmp-1470935579.12-83816721022550="` echo $HOME/.ansible/tmp/ansible-tmp-1470935579.12-83816721022550 `" ) && sleep 0'
<127.0.0.1> PUT /var/folders/f2/lskp80tm8xj5230059b53f580000gp/T/tmprUh8qy TO /Users/joe/.ansible/tmp/ansible-tmp-1470935579.12-83816721022550/get_url.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/joe/.ansible/tmp/ansible-tmp-1470935579.12-83816721022550/ /Users/joe/.ansible/tmp/ansible-tmp-1470935579.12-83816721022550/get_url.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/local/opt/python/bin/python2.7 /Users/joe/.ansible/tmp/ansible-tmp-1470935579.12-83816721022550/get_url.py; rm -rf "/Users/joe/.ansible/tmp/ansible-tmp-1470935579.12-83816721022550/" > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
"changed": true,
"checksum_dest": null,
"checksum_src": "c06bd7b6da48f8e1bafb5f09d3058f39ee77b5d8",
"dest": "/tmp/basic-wordpress-vagrant-master.zip",
"gid": 0,
"group": "wheel",
"invocation": {
"module_args": {
"backup": false,
"checksum": "",
"content": null,
"delimiter": null,
"dest": "/tmp/",
"directory_mode": null,
"follow": false,
"force": true,
"force_basic_auth": false,
"group": null,
"headers": null,
"http_agent": "ansible-httpget",
"mode": null,
"owner": null,
"path": "/tmp/basic-wordpress-vagrant-master.zip",
"regexp": null,
"remote_src": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"sha256sum": "",
"src": null,
"timeout": 10,
"tmp_dest": "",
"unsafe_writes": null,
"url": "https://github.com/ideasonpurpose/basic-wordpress-vagrant/archive/master.zip",
"url_password": null,
"url_username": null,
"use_proxy": true,
"validate_certs": true
},
"module_name": "get_url"
},
"md5sum": "76e57a9d81852954b3cb1ae66c2649c7",
"mode": "0644",
"msg": "OK (14631 bytes)",
"owner": "joe",
"size": 14631,
"src": "/var/folders/f2/lskp80tm8xj5230059b53f580000gp/T/tmpNVd4BN",
"state": "file",
"uid": 502,
"url": "https://github.com/ideasonpurpose/basic-wordpress-vagrant/archive/master.zip"
}
TASK [Unpack downloaded archive] ***********************************************
task path: /Users/joe/Desktop/ansible-local-test.yaml:23
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: joe
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1470935580.87-202761726912153 `" && echo ansible-tmp-1470935580.87-202761726912153="` echo $HOME/.ansible/tmp/ansible-tmp-1470935580.87-202761726912153 `" ) && sleep 0'
Using module file /Users/joe/ansible-dev/lib/ansible/modules/core/files/stat.py
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1470935580.99-37733200780272 `" && echo ansible-tmp-1470935580.99-37733200780272="` echo $HOME/.ansible/tmp/ansible-tmp-1470935580.99-37733200780272 `" ) && sleep 0'
<127.0.0.1> PUT /var/folders/f2/lskp80tm8xj5230059b53f580000gp/T/tmpnwez_h TO /Users/joe/.ansible/tmp/ansible-tmp-1470935580.99-37733200780272/stat.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/joe/.ansible/tmp/ansible-tmp-1470935580.99-37733200780272/ /Users/joe/.ansible/tmp/ansible-tmp-1470935580.99-37733200780272/stat.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/local/opt/python/bin/python2.7 /Users/joe/.ansible/tmp/ansible-tmp-1470935580.99-37733200780272/stat.py; rm -rf "/Users/joe/.ansible/tmp/ansible-tmp-1470935580.99-37733200780272/" > /dev/null 2>&1 && sleep 0'
<127.0.0.1> PUT /private/tmp/basic-wordpress-vagrant-master.zip TO /Users/joe/.ansible/tmp/ansible-tmp-1470935580.87-202761726912153/source
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/joe/.ansible/tmp/ansible-tmp-1470935580.87-202761726912153/ /Users/joe/.ansible/tmp/ansible-tmp-1470935580.87-202761726912153/source && sleep 0'
Using module file /Users/joe/ansible-dev/lib/ansible/modules/core/files/unarchive.py
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1470935581.39-252312604650389 `" && echo ansible-tmp-1470935581.39-252312604650389="` echo $HOME/.ansible/tmp/ansible-tmp-1470935581.39-252312604650389 `" ) && sleep 0'
<127.0.0.1> PUT /var/folders/f2/lskp80tm8xj5230059b53f580000gp/T/tmpGmuhUh TO /Users/joe/.ansible/tmp/ansible-tmp-1470935581.39-252312604650389/unarchive.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/joe/.ansible/tmp/ansible-tmp-1470935581.39-252312604650389/ /Users/joe/.ansible/tmp/ansible-tmp-1470935581.39-252312604650389/unarchive.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/local/opt/python/bin/python2.7 /Users/joe/.ansible/tmp/ansible-tmp-1470935581.39-252312604650389/unarchive.py; rm -rf "/Users/joe/.ansible/tmp/ansible-tmp-1470935581.39-252312604650389/" > /dev/null 2>&1 && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/joe/.ansible/tmp/ansible-tmp-1470935580.87-202761726912153/ > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": false,
"dest": "/tmp/",
"failed": true,
"gid": 0,
"group": "wheel",
"handler": "TgzArchive",
"invocation": {
"module_args": {
"backup": null,
"content": null,
"copy": true,
"creates": null,
"delimiter": null,
"dest": "/tmp/",
"directory_mode": null,
"exclude": [],
"extra_opts": [],
"follow": false,
"force": null,
"group": null,
"keep_newer": false,
"list_files": true,
"mode": null,
"original_basename": "basic-wordpress-vagrant-master.zip",
"owner": null,
"regexp": null,
"remote_src": false,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": "/Users/joe/.ansible/tmp/ansible-tmp-1470935580.87-202761726912153/source",
"unsafe_writes": null,
"validate_certs": true
}
},
"mode": "01777",
"msg": "Unexpected error when accessing exploded file: [Errno 2] No such file or directory: '/tmp/basic-wordpress-vagrant-master/'",
"owner": "root",
"size": 1428,
"src": "/Users/joe/.ansible/tmp/ansible-tmp-1470935580.87-202761726912153/source",
"state": "directory",
"uid": 0
}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @/Users/joe/Desktop/ansible-local-test.retry
PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=1
```
| True | Unarchive fails on MacOS: "Unexpected error when accessing exploded file: [Errno 2]" - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
Unarchive
##### ANSIBLE VERSION
Tested with three versions (devel is using up-to-date checkout of modules-core)
```
ansible 2.2.0 (devel 947877dcce) last updated 2016/08/11 11:34:15 (GMT -400)
lib/ansible/modules/core: (devel 23ebb98570) last updated 2016/08/11 11:46:27 (GMT -400)
lib/ansible/modules/extras: (detached HEAD 39153ea154) last updated 2016/08/10 23:59:44 (GMT -400)
config file =
configured module search path = Default w/o overrides
```
Also:
```
ansible 2.1.1.0
config file =
configured module search path = Default w/o overrides
```
*Works correctly with:*
```
ansible 2.1.0.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
default config
##### OS / ENVIRONMENT
Mac OS X 10.11.6
##### SUMMARY
This seems to be Mac specific. (tested on/against 10.11.6)
Unarchive fails on Mac OS X when playbook is run locally or when targeting a remote Mac. Included playbook works correctly under Ansible v2.1.0.0.
Errors are identical running remote or locally.
The local playbook works correctly when run on Ubuntu 14.04.4 LTS from the most recent Git checkout. The command also succeeds when targeting an Ubuntu box.
Various `copy` and `remote_src` options had no effect.
##### STEPS TO REPRODUCE
Run the following playbook on a Mac (all my available test machines are running 10.11.6)
```
---
- hosts: localhost
become: no
connection: local
tasks:
- name: download archive
get_url:
url: https://github.com/ideasonpurpose/basic-wordpress-vagrant/archive/master.zip
# url: https://wordpress.org/latest.zip
dest: /tmp/
force: yes
register: archive
- name: Unpack downloaded archive
unarchive:
src: '{{ archive.dest }}'
dest: /tmp/
# copy: no
remote_src: no
list_files: yes
```
##### EXPECTED RESULTS
Zip archive is uncompressed
##### ACTUAL RESULTS
Module failure. Output is from latest checkout.
```
[WARNING]: Host file not found: /etc/ansible/hosts
[WARNING]: provided hosts list is empty, only localhost is available
PLAYBOOK: ansible-local-test.yaml **********************************************
1 plays in /Users/joe/Desktop/ansible-local-test.yaml
PLAY [localhost] ***************************************************************
TASK [setup] *******************************************************************
Using module file /Users/joe/ansible-dev/lib/ansible/modules/core/system/setup.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: joe
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1470935578.63-176735032320360 `" && echo ansible-tmp-1470935578.63-176735032320360="` echo $HOME/.ansible/tmp/ansible-tmp-1470935578.63-176735032320360 `" ) && sleep 0'
<127.0.0.1> PUT /var/folders/f2/lskp80tm8xj5230059b53f580000gp/T/tmp3C9ALR TO /Users/joe/.ansible/tmp/ansible-tmp-1470935578.63-176735032320360/setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/joe/.ansible/tmp/ansible-tmp-1470935578.63-176735032320360/ /Users/joe/.ansible/tmp/ansible-tmp-1470935578.63-176735032320360/setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/local/opt/python/bin/python2.7 /Users/joe/.ansible/tmp/ansible-tmp-1470935578.63-176735032320360/setup.py; rm -rf "/Users/joe/.ansible/tmp/ansible-tmp-1470935578.63-176735032320360/" > /dev/null 2>&1 && sleep 0'
ok: [localhost]
TASK [download archive] ********************************************************
task path: /Users/joe/Desktop/ansible-local-test.yaml:9
Using module file /Users/joe/ansible-dev/lib/ansible/modules/core/network/basics/get_url.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: joe
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1470935579.12-83816721022550 `" && echo ansible-tmp-1470935579.12-83816721022550="` echo $HOME/.ansible/tmp/ansible-tmp-1470935579.12-83816721022550 `" ) && sleep 0'
<127.0.0.1> PUT /var/folders/f2/lskp80tm8xj5230059b53f580000gp/T/tmprUh8qy TO /Users/joe/.ansible/tmp/ansible-tmp-1470935579.12-83816721022550/get_url.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/joe/.ansible/tmp/ansible-tmp-1470935579.12-83816721022550/ /Users/joe/.ansible/tmp/ansible-tmp-1470935579.12-83816721022550/get_url.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/local/opt/python/bin/python2.7 /Users/joe/.ansible/tmp/ansible-tmp-1470935579.12-83816721022550/get_url.py; rm -rf "/Users/joe/.ansible/tmp/ansible-tmp-1470935579.12-83816721022550/" > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
"changed": true,
"checksum_dest": null,
"checksum_src": "c06bd7b6da48f8e1bafb5f09d3058f39ee77b5d8",
"dest": "/tmp/basic-wordpress-vagrant-master.zip",
"gid": 0,
"group": "wheel",
"invocation": {
"module_args": {
"backup": false,
"checksum": "",
"content": null,
"delimiter": null,
"dest": "/tmp/",
"directory_mode": null,
"follow": false,
"force": true,
"force_basic_auth": false,
"group": null,
"headers": null,
"http_agent": "ansible-httpget",
"mode": null,
"owner": null,
"path": "/tmp/basic-wordpress-vagrant-master.zip",
"regexp": null,
"remote_src": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"sha256sum": "",
"src": null,
"timeout": 10,
"tmp_dest": "",
"unsafe_writes": null,
"url": "https://github.com/ideasonpurpose/basic-wordpress-vagrant/archive/master.zip",
"url_password": null,
"url_username": null,
"use_proxy": true,
"validate_certs": true
},
"module_name": "get_url"
},
"md5sum": "76e57a9d81852954b3cb1ae66c2649c7",
"mode": "0644",
"msg": "OK (14631 bytes)",
"owner": "joe",
"size": 14631,
"src": "/var/folders/f2/lskp80tm8xj5230059b53f580000gp/T/tmpNVd4BN",
"state": "file",
"uid": 502,
"url": "https://github.com/ideasonpurpose/basic-wordpress-vagrant/archive/master.zip"
}
TASK [Unpack downloaded archive] ***********************************************
task path: /Users/joe/Desktop/ansible-local-test.yaml:23
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: joe
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1470935580.87-202761726912153 `" && echo ansible-tmp-1470935580.87-202761726912153="` echo $HOME/.ansible/tmp/ansible-tmp-1470935580.87-202761726912153 `" ) && sleep 0'
Using module file /Users/joe/ansible-dev/lib/ansible/modules/core/files/stat.py
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1470935580.99-37733200780272 `" && echo ansible-tmp-1470935580.99-37733200780272="` echo $HOME/.ansible/tmp/ansible-tmp-1470935580.99-37733200780272 `" ) && sleep 0'
<127.0.0.1> PUT /var/folders/f2/lskp80tm8xj5230059b53f580000gp/T/tmpnwez_h TO /Users/joe/.ansible/tmp/ansible-tmp-1470935580.99-37733200780272/stat.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/joe/.ansible/tmp/ansible-tmp-1470935580.99-37733200780272/ /Users/joe/.ansible/tmp/ansible-tmp-1470935580.99-37733200780272/stat.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/local/opt/python/bin/python2.7 /Users/joe/.ansible/tmp/ansible-tmp-1470935580.99-37733200780272/stat.py; rm -rf "/Users/joe/.ansible/tmp/ansible-tmp-1470935580.99-37733200780272/" > /dev/null 2>&1 && sleep 0'
<127.0.0.1> PUT /private/tmp/basic-wordpress-vagrant-master.zip TO /Users/joe/.ansible/tmp/ansible-tmp-1470935580.87-202761726912153/source
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/joe/.ansible/tmp/ansible-tmp-1470935580.87-202761726912153/ /Users/joe/.ansible/tmp/ansible-tmp-1470935580.87-202761726912153/source && sleep 0'
Using module file /Users/joe/ansible-dev/lib/ansible/modules/core/files/unarchive.py
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1470935581.39-252312604650389 `" && echo ansible-tmp-1470935581.39-252312604650389="` echo $HOME/.ansible/tmp/ansible-tmp-1470935581.39-252312604650389 `" ) && sleep 0'
<127.0.0.1> PUT /var/folders/f2/lskp80tm8xj5230059b53f580000gp/T/tmpGmuhUh TO /Users/joe/.ansible/tmp/ansible-tmp-1470935581.39-252312604650389/unarchive.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/joe/.ansible/tmp/ansible-tmp-1470935581.39-252312604650389/ /Users/joe/.ansible/tmp/ansible-tmp-1470935581.39-252312604650389/unarchive.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/local/opt/python/bin/python2.7 /Users/joe/.ansible/tmp/ansible-tmp-1470935581.39-252312604650389/unarchive.py; rm -rf "/Users/joe/.ansible/tmp/ansible-tmp-1470935581.39-252312604650389/" > /dev/null 2>&1 && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/joe/.ansible/tmp/ansible-tmp-1470935580.87-202761726912153/ > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": false,
"dest": "/tmp/",
"failed": true,
"gid": 0,
"group": "wheel",
"handler": "TgzArchive",
"invocation": {
"module_args": {
"backup": null,
"content": null,
"copy": true,
"creates": null,
"delimiter": null,
"dest": "/tmp/",
"directory_mode": null,
"exclude": [],
"extra_opts": [],
"follow": false,
"force": null,
"group": null,
"keep_newer": false,
"list_files": true,
"mode": null,
"original_basename": "basic-wordpress-vagrant-master.zip",
"owner": null,
"regexp": null,
"remote_src": false,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": "/Users/joe/.ansible/tmp/ansible-tmp-1470935580.87-202761726912153/source",
"unsafe_writes": null,
"validate_certs": true
}
},
"mode": "01777",
"msg": "Unexpected error when accessing exploded file: [Errno 2] No such file or directory: '/tmp/basic-wordpress-vagrant-master/'",
"owner": "root",
"size": 1428,
"src": "/Users/joe/.ansible/tmp/ansible-tmp-1470935580.87-202761726912153/source",
"state": "directory",
"uid": 0
}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @/Users/joe/Desktop/ansible-local-test.retry
PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=1
```
| main | unarchive fails on macos unexpected error when accessing exploded file issue type bug report component name unarchive ansible version tested with three versions devel is using up to date checkout of modules core ansible devel last updated gmt lib ansible modules core devel last updated gmt lib ansible modules extras detached head last updated gmt config file configured module search path default w o overrides also ansible config file configured module search path default w o overrides works correctly with ansible config file configured module search path default w o overrides configuration default config os environment mac os x summary this seems to be mac specific tested on against unarchive fails on mac os x when playbook is run locally or when targeting a remote mac included playbook works correctly under ansible errors are identical running remote or locally the local playbook works correctly when run on ubuntu lts from the most recent git checkout the command also succeeds when targeting an ubuntu box various copy and remote src options had no effect steps to reproduce run the following playbook on a mac all my available test machines are running hosts localhost become no connection local tasks name download archive get url url url dest tmp force yes register archive name unpack downloaded archive unarchive src archive dest dest tmp copy no remote src no list files yes expected results zip archive is uncompressed actual results module failure output is from latest checkout host file not found etc ansible hosts provided hosts list is empty only localhost is available playbook ansible local test yaml plays in users joe desktop ansible local test yaml play task using module file users joe ansible dev lib ansible modules core system setup py establish local connection for user joe exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put var folders t to users joe ansible tmp ansible tmp setup py exec bin sh c chmod u x users joe ansible tmp ansible tmp users joe ansible tmp ansible tmp setup py sleep exec bin sh c usr local opt python bin users joe ansible tmp ansible tmp setup py rm rf users joe ansible tmp ansible tmp dev null sleep ok task task path users joe desktop ansible local test yaml using module file users joe ansible dev lib ansible modules core network basics get url py establish local connection for user joe exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put var folders t to users joe ansible tmp ansible tmp get url py exec bin sh c chmod u x users joe ansible tmp ansible tmp users joe ansible tmp ansible tmp get url py sleep exec bin sh c usr local opt python bin users joe ansible tmp ansible tmp get url py rm rf users joe ansible tmp ansible tmp dev null sleep changed changed true checksum dest null checksum src dest tmp basic wordpress vagrant master zip gid group wheel invocation module args backup false checksum content null delimiter null dest tmp directory mode null follow false force true force basic auth false group null headers null http agent ansible httpget mode null owner null path tmp basic wordpress vagrant master zip regexp null remote src null selevel null serole null setype null seuser null src null timeout tmp dest unsafe writes null url url password null url username null use proxy true validate certs true module name get url mode msg ok bytes owner joe size src var folders t state file uid url task task path users joe desktop ansible local test yaml establish local connection for user joe exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep using module file users joe ansible dev lib ansible modules core files stat py exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put var folders t tmpnwez h to users joe ansible tmp ansible tmp stat py exec bin sh c chmod u x users joe ansible tmp ansible tmp users joe ansible tmp ansible tmp stat py sleep exec bin sh c usr local opt python bin users joe ansible tmp ansible tmp stat py rm rf users joe ansible tmp ansible tmp dev null sleep put private tmp basic wordpress vagrant master zip to users joe ansible tmp ansible tmp source exec bin sh c chmod u x users joe ansible tmp ansible tmp users joe ansible tmp ansible tmp source sleep using module file users joe ansible dev lib ansible modules core files unarchive py exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put var folders t tmpgmuhuh to users joe ansible tmp ansible tmp unarchive py exec bin sh c chmod u x users joe ansible tmp ansible tmp users joe ansible tmp ansible tmp unarchive py sleep exec bin sh c usr local opt python bin users joe ansible tmp ansible tmp unarchive py rm rf users joe ansible tmp ansible tmp dev null sleep exec bin sh c rm f r users joe ansible tmp ansible tmp dev null sleep fatal failed changed false dest tmp failed true gid group wheel handler tgzarchive invocation module args backup null content null copy true creates null delimiter null dest tmp directory mode null exclude extra opts follow false force null group null keep newer false list files true mode null original basename basic wordpress vagrant master zip owner null regexp null remote src false selevel null serole null setype null seuser null src users joe ansible tmp ansible tmp source unsafe writes null validate certs true mode msg unexpected error when accessing exploded file no such file or directory tmp basic wordpress vagrant master owner root size src users joe ansible tmp ansible tmp source state directory uid no more hosts left to retry use limit users joe desktop ansible local test retry play recap localhost ok changed unreachable failed | 1 |
19,604 | 25,959,130,910 | IssuesEvent | 2022-12-18 16:54:11 | tokio-rs/tokio | https://api.github.com/repos/tokio-rs/tokio | closed | Possible bug in tokio::process::Command | C-bug A-tokio M-process | **Version**
tokio = { version = "1.23.0", features = ["full"] }
**Platform**
Linux 160R 5.19.17-2-MANJARO
**Description**
From documentation of `tokio::process::Command::wait`:
> The stdin handle to the child process, if any, will be closed
> before waiting. This helps avoid deadlock: it ensures that the
> child does not block waiting for input from the parent, while
> the parent waits for the child to exit.
> If the caller wishes to explicitly control when the child's stdin
> handle is closed, they may `.take()` it before calling `.wait()`:
>```rust
> use tokio::io::AsyncWriteExt;
> use tokio::process::Command;
> use std::process::Stdio;
>
> #[tokio::main]
> async fn main() {
> let mut child = Command::new("cat")
> .stdin(Stdio::piped())
> .spawn()
> .unwrap();
>
> let mut stdin = child.stdin.take().unwrap();
> tokio::spawn(async move {
> // do something with stdin here...
> stdin.write_all(b"hello world\n").await.unwrap();
>
> // then drop when finished
> drop(stdin);
> });
>
> // wait for the process to complete
> let _ = child.wait().await;
>}
>```
But there seems to be something wrong.
I tried this code:
```rust
#[tokio::main]
async fn main() {
let mut p = tokio::process::Command::new("cat")
.stdin(std::fs::OpenOptions::new()
.read(true)
.open("/dev/tty")
.expect("cannot open tty")
)
.spawn()
.expect("cannot spawn");
p.stdin
.take()
.expect("cannot get stdin");
}
```
I expected it to execute without panics.
Instead, it panicked with `cannot get stdin` message.
Is it okay for `p.stdin` to be `None` in this case?
| 1.0 | Possible bug in tokio::process::Command - **Version**
tokio = { version = "1.23.0", features = ["full"] }
**Platform**
Linux 160R 5.19.17-2-MANJARO
**Description**
From documentation of `tokio::process::Command::wait`:
> The stdin handle to the child process, if any, will be closed
> before waiting. This helps avoid deadlock: it ensures that the
> child does not block waiting for input from the parent, while
> the parent waits for the child to exit.
> If the caller wishes to explicitly control when the child's stdin
> handle is closed, they may `.take()` it before calling `.wait()`:
>```rust
> use tokio::io::AsyncWriteExt;
> use tokio::process::Command;
> use std::process::Stdio;
>
> #[tokio::main]
> async fn main() {
> let mut child = Command::new("cat")
> .stdin(Stdio::piped())
> .spawn()
> .unwrap();
>
> let mut stdin = child.stdin.take().unwrap();
> tokio::spawn(async move {
> // do something with stdin here...
> stdin.write_all(b"hello world\n").await.unwrap();
>
> // then drop when finished
> drop(stdin);
> });
>
> // wait for the process to complete
> let _ = child.wait().await;
>}
>```
But there seems to be something wrong.
I tried this code:
```rust
#[tokio::main]
async fn main() {
let mut p = tokio::process::Command::new("cat")
.stdin(std::fs::OpenOptions::new()
.read(true)
.open("/dev/tty")
.expect("cannot open tty")
)
.spawn()
.expect("cannot spawn");
p.stdin
.take()
.expect("cannot get stdin");
}
```
I expected it to execute without panics.
Instead, it panicked with `cannot get stdin` message.
Is it okay for `p.stdin` to be `None` in this case?
| non_main | possible bug in tokio process command version tokio version features platform linux manjaro description from documentation of tokio process command wait the stdin handle to the child process if any will be closed before waiting this helps avoid deadlock it ensures that the child does not block waiting for input from the parent while the parent waits for the child to exit if the caller wishes to explicitly control when the child s stdin handle is closed they may take it before calling wait rust use tokio io asyncwriteext use tokio process command use std process stdio async fn main let mut child command new cat stdin stdio piped spawn unwrap let mut stdin child stdin take unwrap tokio spawn async move do something with stdin here stdin write all b hello world n await unwrap then drop when finished drop stdin wait for the process to complete let child wait await but there seems to be something wrong i tried this code rust async fn main let mut p tokio process command new cat stdin std fs openoptions new read true open dev tty expect cannot open tty spawn expect cannot spawn p stdin take expect cannot get stdin i expected it to execute without panics instead it panicked with cannot get stdin message is it okay for p stdin to be none in this case | 0 |
1,866 | 6,577,487,371 | IssuesEvent | 2017-09-12 01:15:41 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | ansible subversion module silently fails on network problems | affects_2.0 bug_report waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
subversion
##### ANSIBLE VERSION
```
ansible 2.0.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
nothing special
##### OS / ENVIRONMENT
ubuntu 14.04.4 trusty fully upgraded
##### SUMMARY
sometimes - because of network errors - subversion can't finish the checkout, but this should raise an error, ansible though moves on believing evertyhing is okay
I could also say, that if I do an "svn update" after a successful ansible checkout zero changes should happen (if the server side did not change of course).
##### STEPS TO REPRODUCE
create or find a large svn repo
checkout with subversion module
stop your network interface while downloading
##### EXPECTED RESULTS
ansible should exit with an error
##### ACTUAL RESULTS
ansible moves on like if checkout was a success
| True | ansible subversion module silently fails on network problems - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
subversion
##### ANSIBLE VERSION
```
ansible 2.0.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
nothing special
##### OS / ENVIRONMENT
ubuntu 14.04.4 trusty fully upgraded
##### SUMMARY
sometimes - because of network errors - subversion can't finish the checkout, but this should raise an error, ansible though moves on believing evertyhing is okay
I could also say, that if I do an "svn update" after a successful ansible checkout zero changes should happen (if the server side did not change of course).
##### STEPS TO REPRODUCE
create or find a large svn repo
checkout with subversion module
stop your network interface while downloading
##### EXPECTED RESULTS
ansible should exit with an error
##### ACTUAL RESULTS
ansible moves on like if checkout was a success
| main | ansible subversion module silently fails on network problems issue type bug report component name subversion ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration nothing special os environment ubuntu trusty fully upgraded summary sometimes because of network errors subversion can t finish the checkout but this should raise an error ansible though moves on believing evertyhing is okay i could also say that if i do an svn update after a successful ansible checkout zero changes should happen if the server side did not change of course steps to reproduce create or find a large svn repo checkout with subversion module stop your network interface while downloading expected results ansible should exit with an error actual results ansible moves on like if checkout was a success | 1 |
1,924 | 6,588,331,931 | IssuesEvent | 2017-09-14 02:25:07 | tomchentw/react-google-maps | https://api.github.com/repos/tomchentw/react-google-maps | closed | Considering switching to airbnb styleguide | Maintainers_please_review | https://github.com/airbnb/javascript
Right now we extend `react-app`'s eslint configuration with a few additions such as quotes, jsx-quotes and comma dangle. It's great that we have a styleguide in place but it would be helpful for future contributors to take on a canonical and idiomatic way of writing modern javascript.
This issue is more of a discussion piece before I do a quick refactor. Input appreciated 👍
For reference, our styles:
https://github.com/tomchentw/react-google-maps/blob/master/.eslintrc
``` javascript
{
"extends": "react-app",
"rules": {
// Possible Errors
"comma-dangle": ["error", "always-multiline"],
// Stylistic Issues
"jsx-quotes": ["error", "prefer-double"],
"quotes": ["error", "backtick"]
}
}
```
| True | Considering switching to airbnb styleguide - https://github.com/airbnb/javascript
Right now we extend `react-app`'s eslint configuration with a few additions such as quotes, jsx-quotes and comma dangle. It's great that we have a styleguide in place but it would be helpful for future contributors to take on a canonical and idiomatic way of writing modern javascript.
This issue is more of a discussion piece before I do a quick refactor. Input appreciated 👍
For reference, our styles:
https://github.com/tomchentw/react-google-maps/blob/master/.eslintrc
``` javascript
{
"extends": "react-app",
"rules": {
// Possible Errors
"comma-dangle": ["error", "always-multiline"],
// Stylistic Issues
"jsx-quotes": ["error", "prefer-double"],
"quotes": ["error", "backtick"]
}
}
```
| main | considering switching to airbnb styleguide right now we extend react app s eslint configuration with a few additions such as quotes jsx quotes and comma dangle it s great that we have a styleguide in place but it would be helpful for future contributors to take on a canonical and idiomatic way of writing modern javascript this issue is more of a discussion piece before i do a quick refactor input appreciated 👍 for reference our styles javascript extends react app rules possible errors comma dangle stylistic issues jsx quotes quotes | 1 |
56,844 | 8,128,795,303 | IssuesEvent | 2018-08-17 13:09:04 | rucio/rucio | https://api.github.com/repos/rucio/rucio | closed | Mention upload with PFN should also be with no_register | Documentation enhancement | Motivation
----------
if we use PFN, we should explain that also no_register is required
https://rucio.readthedocs.io/en/latest/api/upload.html?highlight=pfn
Modification
------------
| 1.0 | Mention upload with PFN should also be with no_register - Motivation
----------
if we use PFN, we should explain that also no_register is required
https://rucio.readthedocs.io/en/latest/api/upload.html?highlight=pfn
Modification
------------
| non_main | mention upload with pfn should also be with no register motivation if we use pfn we should explain that also no register is required modification | 0 |
1,521 | 6,572,215,681 | IssuesEvent | 2017-09-11 00:09:24 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | nmcli reports changed status even if nothing needs to change | affects_2.0 bug_report networking waiting_on_maintainer | ##### Issue Type:
- Bug Report
##### Plugin Name:
nmcli
##### Ansible Version:
```
ansible 2.0.1.0
config file = /root/ansible-boulder/ansible.cfg
configured module search path = /usr/share/ansible/
```
Using code from #1840
##### Ansible Configuration:
```
inventory = inventory
```
##### Environment:
EL7
##### Summary:
nmcli does not know the previous state of an interface so it always reports a status of changed even when nothing does. I would propose parsing the output of "nmcl con show <NAME>", then applying the changed and seeing if there are any differences.
##### Steps To Reproduce:
```
- name: Configure dhcp-client-id for CORA network
nmcli: state=present type=ethernet conn_name=CORA dhcp_client_id={{ ansible_fqdn }}
tags: network
```
<!-- You can also paste gist.github.com links for larger files. -->
##### Expected Results:
changed=0
##### Actual Results:
```
TASK [laptop : Configure dhcp-client-id for CORA network] **********************
task path: /root/ansible-boulder/roles/laptop/tasks/main.yml:8
ESTABLISH LOCAL CONNECTION FOR USER: root
barry.cora.nwra.com EXEC /bin/sh -c '( umask 22 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1457734801.48-158802433202882 `" && echo "` echo $HOME/.ansible/tmp/ansible-tmp-1457734801.48-158802433202882 `" )'
barry.cora.nwra.com PUT /tmp/tmp1156fd TO /root/.ansible/tmp/ansible-tmp-1457734801.48-158802433202882/nmcli
barry.cora.nwra.com EXEC /bin/sh -c 'LANG=C LC_ALL=C LC_MESSAGES=C /usr/bin/python /root/.ansible/tmp/ansible-tmp-1457734801.48-158802433202882/nmcli; rm -rf "/root/.ansible/tmp/ansible-tmp-1457734801.48-158802433202882/" > /dev/null 2>&1'
changed: [barry.cora.nwra.com] => {"Exists": "Connections do exist so we are modifying them", "changed": true, "conn_name": "CORA", "invocation": {"module_args": {"ageingtime": "300", "arp_interval": null, "arp_ip_target": null, "autoconnect": null, "conn_name": "CORA", "dhcp_client_id": "barry.cora.nwra.com", "dns4": null, "dns6": null, "downdelay": null, "egress": null, "flags": null, "forwarddelay": "15", "gw4": null, "gw6": null, "hellotime": "2", "ifname": null, "ingress": null, "ip4": null, "ip6": null, "mac": null, "master": null, "maxage": "20", "miimon": null, "mode": "balance-rr", "mtu": null, "priority": "128", "slavepriority": "32", "state": "present", "stp": "yes", "type": "ethernet", "updelay": null, "vlandev": null, "vlanid": null}, "module_name": "nmcli"}, "state": "present"}
```
| True | nmcli reports changed status even if nothing needs to change - ##### Issue Type:
- Bug Report
##### Plugin Name:
nmcli
##### Ansible Version:
```
ansible 2.0.1.0
config file = /root/ansible-boulder/ansible.cfg
configured module search path = /usr/share/ansible/
```
Using code from #1840
##### Ansible Configuration:
```
inventory = inventory
```
##### Environment:
EL7
##### Summary:
nmcli does not know the previous state of an interface so it always reports a status of changed even when nothing does. I would propose parsing the output of "nmcl con show <NAME>", then applying the changed and seeing if there are any differences.
##### Steps To Reproduce:
```
- name: Configure dhcp-client-id for CORA network
nmcli: state=present type=ethernet conn_name=CORA dhcp_client_id={{ ansible_fqdn }}
tags: network
```
<!-- You can also paste gist.github.com links for larger files. -->
##### Expected Results:
changed=0
##### Actual Results:
```
TASK [laptop : Configure dhcp-client-id for CORA network] **********************
task path: /root/ansible-boulder/roles/laptop/tasks/main.yml:8
ESTABLISH LOCAL CONNECTION FOR USER: root
barry.cora.nwra.com EXEC /bin/sh -c '( umask 22 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1457734801.48-158802433202882 `" && echo "` echo $HOME/.ansible/tmp/ansible-tmp-1457734801.48-158802433202882 `" )'
barry.cora.nwra.com PUT /tmp/tmp1156fd TO /root/.ansible/tmp/ansible-tmp-1457734801.48-158802433202882/nmcli
barry.cora.nwra.com EXEC /bin/sh -c 'LANG=C LC_ALL=C LC_MESSAGES=C /usr/bin/python /root/.ansible/tmp/ansible-tmp-1457734801.48-158802433202882/nmcli; rm -rf "/root/.ansible/tmp/ansible-tmp-1457734801.48-158802433202882/" > /dev/null 2>&1'
changed: [barry.cora.nwra.com] => {"Exists": "Connections do exist so we are modifying them", "changed": true, "conn_name": "CORA", "invocation": {"module_args": {"ageingtime": "300", "arp_interval": null, "arp_ip_target": null, "autoconnect": null, "conn_name": "CORA", "dhcp_client_id": "barry.cora.nwra.com", "dns4": null, "dns6": null, "downdelay": null, "egress": null, "flags": null, "forwarddelay": "15", "gw4": null, "gw6": null, "hellotime": "2", "ifname": null, "ingress": null, "ip4": null, "ip6": null, "mac": null, "master": null, "maxage": "20", "miimon": null, "mode": "balance-rr", "mtu": null, "priority": "128", "slavepriority": "32", "state": "present", "stp": "yes", "type": "ethernet", "updelay": null, "vlandev": null, "vlanid": null}, "module_name": "nmcli"}, "state": "present"}
```
| main | nmcli reports changed status even if nothing needs to change issue type bug report plugin name nmcli ansible version ansible config file root ansible boulder ansible cfg configured module search path usr share ansible using code from ansible configuration inventory inventory environment summary nmcli does not know the previous state of an interface so it always reports a status of changed even when nothing does i would propose parsing the output of nmcl con show then applying the changed and seeing if there are any differences steps to reproduce name configure dhcp client id for cora network nmcli state present type ethernet conn name cora dhcp client id ansible fqdn tags network expected results changed actual results task task path root ansible boulder roles laptop tasks main yml establish local connection for user root barry cora nwra com exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo echo home ansible tmp ansible tmp barry cora nwra com put tmp to root ansible tmp ansible tmp nmcli barry cora nwra com exec bin sh c lang c lc all c lc messages c usr bin python root ansible tmp ansible tmp nmcli rm rf root ansible tmp ansible tmp dev null changed exists connections do exist so we are modifying them changed true conn name cora invocation module args ageingtime arp interval null arp ip target null autoconnect null conn name cora dhcp client id barry cora nwra com null null downdelay null egress null flags null forwarddelay null null hellotime ifname null ingress null null null mac null master null maxage miimon null mode balance rr mtu null priority slavepriority state present stp yes type ethernet updelay null vlandev null vlanid null module name nmcli state present | 1 |
268,414 | 8,406,680,819 | IssuesEvent | 2018-10-11 18:38:15 | trufflesuite/ganache-cli | https://api.github.com/repos/trufflesuite/ganache-cli | closed | Calling getBalance on contract results in 3 * sent amount | needs validation priority-high | I send x wei to a contract address with
`web3.eth.sendTransaction({from: accounts[0], to: contract.address, value: x})`
then I check the balance
`web3.eth.getBalance(contract.address)`
It returns 3 * x...
This is consistent if I change x. It also deducts accounts[0] balance by 3 * x. Regardless of what my contract does, this should not be happening. Oddly enough, the token distribution amounts are changed by the correct amounts, without the 3x factor.
Truffle v3.4.4
OSX 10.12.4
TestRPC v4.0.1
| 1.0 | Calling getBalance on contract results in 3 * sent amount - I send x wei to a contract address with
`web3.eth.sendTransaction({from: accounts[0], to: contract.address, value: x})`
then I check the balance
`web3.eth.getBalance(contract.address)`
It returns 3 * x...
This is consistent if I change x. It also deducts accounts[0] balance by 3 * x. Regardless of what my contract does, this should not be happening. Oddly enough, the token distribution amounts are changed by the correct amounts, without the 3x factor.
Truffle v3.4.4
OSX 10.12.4
TestRPC v4.0.1
| non_main | calling getbalance on contract results in sent amount i send x wei to a contract address with eth sendtransaction from accounts to contract address value x then i check the balance eth getbalance contract address it returns x this is consistent if i change x it also deducts accounts balance by x regardless of what my contract does this should not be happening oddly enough the token distribution amounts are changed by the correct amounts without the factor truffle osx testrpc | 0 |
4,470 | 23,304,851,426 | IssuesEvent | 2022-08-07 21:35:37 | cncf/glossary | https://api.github.com/repos/cncf/glossary | closed | Add tags | maintainers | Activate the tagging feature in docsy. Ideally these tags will go on top of the categories. | True | Add tags - Activate the tagging feature in docsy. Ideally these tags will go on top of the categories. | main | add tags activate the tagging feature in docsy ideally these tags will go on top of the categories | 1 |
134,686 | 10,927,007,030 | IssuesEvent | 2019-11-22 15:49:26 | elastic/elasticsearch | https://api.github.com/repos/elastic/elasticsearch | closed | JvmErgonomicsTests.testExtractValidHeapSizeNoOptionPresent fails on 7.4 | :Core/Infra/Core >test-failure | It fails with:
```
org.elasticsearch.tools.launchers.JvmErgonomicsTests > testExtractValidHeapSizeNoOptionPresent FAILED
java.lang.AssertionError:
Expected: a value greater than <0L>
but: <0L> was equal to <0L>
at __randomizedtesting.SeedInfo.seed([22C99F519C4644E0:535C63ED1A8DEF9E]:0)
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18)
at org.junit.Assert.assertThat(Assert.java:956)
at org.junit.Assert.assertThat(Assert.java:923)
at org.elasticsearch.tools.launchers.JvmErgonomicsTests.testExtractValidHeapSizeNoOptionPresent(JvmErgonomicsTests.java:59)
```
What's strange is that this did not print any reproduction line, so I wasn't able to try the exact test.
https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+7.4+multijob-windows-compatibility/os=windows-2019/64/consoleFull
https://gradle-enterprise.elastic.co/s/mm3dqwj56d4aa/console-log | 1.0 | JvmErgonomicsTests.testExtractValidHeapSizeNoOptionPresent fails on 7.4 - It fails with:
```
org.elasticsearch.tools.launchers.JvmErgonomicsTests > testExtractValidHeapSizeNoOptionPresent FAILED
java.lang.AssertionError:
Expected: a value greater than <0L>
but: <0L> was equal to <0L>
at __randomizedtesting.SeedInfo.seed([22C99F519C4644E0:535C63ED1A8DEF9E]:0)
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18)
at org.junit.Assert.assertThat(Assert.java:956)
at org.junit.Assert.assertThat(Assert.java:923)
at org.elasticsearch.tools.launchers.JvmErgonomicsTests.testExtractValidHeapSizeNoOptionPresent(JvmErgonomicsTests.java:59)
```
What's strange is that this did not print any reproduction line, so I wasn't able to try the exact test.
https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+7.4+multijob-windows-compatibility/os=windows-2019/64/consoleFull
https://gradle-enterprise.elastic.co/s/mm3dqwj56d4aa/console-log | non_main | jvmergonomicstests testextractvalidheapsizenooptionpresent fails on it fails with org elasticsearch tools launchers jvmergonomicstests testextractvalidheapsizenooptionpresent failed java lang assertionerror expected a value greater than but was equal to at randomizedtesting seedinfo seed at org hamcrest matcherassert assertthat matcherassert java at org junit assert assertthat assert java at org junit assert assertthat assert java at org elasticsearch tools launchers jvmergonomicstests testextractvalidheapsizenooptionpresent jvmergonomicstests java what s strange is that this did not print any reproduction line so i wasn t able to try the exact test | 0 |
4,991 | 25,602,654,860 | IssuesEvent | 2022-12-01 21:45:08 | Lunatic-Labs/pointless-project | https://api.github.com/repos/Lunatic-Labs/pointless-project | opened | Website URL | future maintainers | We want to shorten the website URL to ensure that any QR code can be represented in at most 25 characters. | True | Website URL - We want to shorten the website URL to ensure that any QR code can be represented in at most 25 characters. | main | website url we want to shorten the website url to ensure that any qr code can be represented in at most characters | 1 |
5,065 | 25,944,585,509 | IssuesEvent | 2022-12-16 22:27:38 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | Show the record summary in the linked record input when setting a default value for a column | type: enhancement work: frontend status: ready restricted: maintainers | When setting a default value for an FK column, the LinkedRecordInput component should display the record summary.
## Status
- Blocked by #1660 because we don't have the UI to set the default value
| True | Show the record summary in the linked record input when setting a default value for a column - When setting a default value for an FK column, the LinkedRecordInput component should display the record summary.
## Status
- Blocked by #1660 because we don't have the UI to set the default value
| main | show the record summary in the linked record input when setting a default value for a column when setting a default value for an fk column the linkedrecordinput component should display the record summary status blocked by because we don t have the ui to set the default value | 1 |
1,367 | 5,895,610,382 | IssuesEvent | 2017-05-18 07:30:02 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Ansible calls "file" module instead of "copy" module if hashes match, gets confused if dest is a symlink | affects_1.9 bug_report waiting_on_maintainer | ##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
copy module
##### ANSIBLE VERSION
ansible 1.9.1
##### SUMMARY
If you create a task item with copy where follow=true and dest is currently a symlink, Ansible gets confused if the target for follow and the name of src aren't the same.
Sample playbook:
```
- hosts: localhost
tasks:
- copy: src=source dest=/tmp/dest follow=true
```
Setup commands:
```
ln -nsf realdest /tmp/dest
touch /tmp/realdest
echo asdf > source
```
First run:
```
PLAY [localhost] **************************************************************
GATHERING FACTS ***************************************************************
ok: [localhost]
TASK: [copy src=source dest=/tmp/dest follow=true] ****************************
changed: [localhost]
PLAY RECAP ********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0
```
Second run (and onwards):
```
PLAY [localhost] **************************************************************
GATHERING FACTS ***************************************************************
ok: [localhost]
TASK: [copy src=source dest=/tmp/dest follow=true] ****************************
failed: [localhost] => {"checksum": "7d97e98f8af710c7e7fe703abc8f639e0ee507c4", "failed": true, "gid": 0, "group": "root", "mode": "0777", "owner": "root", "path": "/tmp/dest", "secontext": "unconfined_u:object_r:user_tmp_t:s0", "size": 8, "src": "source", "state": "link", "uid": 0}
msg: src file does not exist, use "force=yes" if you really want to create the link: /tmp/source
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/root/test.retry
localhost : ok=1 changed=0 unreachable=0 failed=1
```
| True | Ansible calls "file" module instead of "copy" module if hashes match, gets confused if dest is a symlink - ##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
copy module
##### ANSIBLE VERSION
ansible 1.9.1
##### SUMMARY
If you create a task item with copy where follow=true and dest is currently a symlink, Ansible gets confused if the target for follow and the name of src aren't the same.
Sample playbook:
```
- hosts: localhost
tasks:
- copy: src=source dest=/tmp/dest follow=true
```
Setup commands:
```
ln -nsf realdest /tmp/dest
touch /tmp/realdest
echo asdf > source
```
First run:
```
PLAY [localhost] **************************************************************
GATHERING FACTS ***************************************************************
ok: [localhost]
TASK: [copy src=source dest=/tmp/dest follow=true] ****************************
changed: [localhost]
PLAY RECAP ********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0
```
Second run (and onwards):
```
PLAY [localhost] **************************************************************
GATHERING FACTS ***************************************************************
ok: [localhost]
TASK: [copy src=source dest=/tmp/dest follow=true] ****************************
failed: [localhost] => {"checksum": "7d97e98f8af710c7e7fe703abc8f639e0ee507c4", "failed": true, "gid": 0, "group": "root", "mode": "0777", "owner": "root", "path": "/tmp/dest", "secontext": "unconfined_u:object_r:user_tmp_t:s0", "size": 8, "src": "source", "state": "link", "uid": 0}
msg: src file does not exist, use "force=yes" if you really want to create the link: /tmp/source
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/root/test.retry
localhost : ok=1 changed=0 unreachable=0 failed=1
```
| main | ansible calls file module instead of copy module if hashes match gets confused if dest is a symlink issue type bug report component name copy module ansible version ansible summary if you create a task item with copy where follow true and dest is currently a symlink ansible gets confused if the target for follow and the name of src aren t the same sample playbook hosts localhost tasks copy src source dest tmp dest follow true setup commands ln nsf realdest tmp dest touch tmp realdest echo asdf source first run play gathering facts ok task changed play recap localhost ok changed unreachable failed second run and onwards play gathering facts ok task failed checksum failed true gid group root mode owner root path tmp dest secontext unconfined u object r user tmp t size src source state link uid msg src file does not exist use force yes if you really want to create the link tmp source fatal all hosts have already failed aborting play recap to retry use limit root test retry localhost ok changed unreachable failed | 1 |
11,891 | 3,025,349,543 | IssuesEvent | 2015-08-03 07:55:53 | achiu8/natandp | https://api.github.com/repos/achiu8/natandp | opened | colors......... | design | @ingachen @johnlin1214 @achiu8 -
nat said she wanted less colors and have it be mostly grey so i played around and came up with something like this, which she likes better:
<img width="1423" alt="screen shot 2015-08-03 at 12 49 28 am" src="https://cloud.githubusercontent.com/assets/1305229/9033140/935c99b4-3979-11e5-93ce-a2dd14811834.png">
<img width="1388" alt="screen shot 2015-08-03 at 12 49 36 am" src="https://cloud.githubusercontent.com/assets/1305229/9033139/935af686-3979-11e5-8286-e944a362b755.png">
<img width="1348" alt="screen shot 2015-08-03 at 12 49 42 am" src="https://cloud.githubusercontent.com/assets/1305229/9033138/935ad7d2-3979-11e5-98ef-34962db35ff0.png">
I'll track these changes locally on a branch for now.
Now there is not visual divide between each section, so does anyone have any idea on how to accomplish this?
| 1.0 | colors......... - @ingachen @johnlin1214 @achiu8 -
nat said she wanted less colors and have it be mostly grey so i played around and came up with something like this, which she likes better:
<img width="1423" alt="screen shot 2015-08-03 at 12 49 28 am" src="https://cloud.githubusercontent.com/assets/1305229/9033140/935c99b4-3979-11e5-93ce-a2dd14811834.png">
<img width="1388" alt="screen shot 2015-08-03 at 12 49 36 am" src="https://cloud.githubusercontent.com/assets/1305229/9033139/935af686-3979-11e5-8286-e944a362b755.png">
<img width="1348" alt="screen shot 2015-08-03 at 12 49 42 am" src="https://cloud.githubusercontent.com/assets/1305229/9033138/935ad7d2-3979-11e5-98ef-34962db35ff0.png">
I'll track these changes locally on a branch for now.
Now there is not visual divide between each section, so does anyone have any idea on how to accomplish this?
| non_main | colors ingachen nat said she wanted less colors and have it be mostly grey so i played around and came up with something like this which she likes better img width alt screen shot at am src img width alt screen shot at am src img width alt screen shot at am src i ll track these changes locally on a branch for now now there is not visual divide between each section so does anyone have any idea on how to accomplish this | 0 |
2,448 | 8,639,862,133 | IssuesEvent | 2018-11-23 22:10:15 | F5OEO/rpitx | https://api.github.com/repos/F5OEO/rpitx | closed | pifm modulation makes audio too quiet | V1 related (not maintained) | So, I'm trying to make a short-range FM audio transmitter to play music in my car. I've gotten everything set up and working, and it can broadcast a pretty powerful FM signal. I'm using an RTL-SDR on my PC to test it out, and there's one problem. Despite the signal being strong, the volume of audio is really low compared to normal FM stations. I can't find much information on Google about it, but I'm pretty sure it's a problem with the modulation. Comparing the signal from the Pi (left) and a local radio channel (right), you can see how the bandwidth of the Pi's signal is much smaller.

Mind, I can still hear it fine, I just have to turn the volume way up. But that's a problem when I then change to a different channel and blow my ears out. Just another example, I took audio clips from both stations at my SDR's "normal" volume level, and while I can hear the local channel fine, the Pi's is barely audible.

I've tried it with a few different music files, and editing them with Audacity to increase the volume, but it doesn't seem to make a difference. Do you have any suggestions on how I could fix this? Thanks!
Edit: Using NFM instead of WFM makes it slightly better, but the bandwidth is actually more than 32khz, so the audio sounds pretty bad. | True | pifm modulation makes audio too quiet - So, I'm trying to make a short-range FM audio transmitter to play music in my car. I've gotten everything set up and working, and it can broadcast a pretty powerful FM signal. I'm using an RTL-SDR on my PC to test it out, and there's one problem. Despite the signal being strong, the volume of audio is really low compared to normal FM stations. I can't find much information on Google about it, but I'm pretty sure it's a problem with the modulation. Comparing the signal from the Pi (left) and a local radio channel (right), you can see how the bandwidth of the Pi's signal is much smaller.

Mind, I can still hear it fine, I just have to turn the volume way up. But that's a problem when I then change to a different channel and blow my ears out. Just another example, I took audio clips from both stations at my SDR's "normal" volume level, and while I can hear the local channel fine, the Pi's is barely audible.

I've tried it with a few different music files, and editing them with Audacity to increase the volume, but it doesn't seem to make a difference. Do you have any suggestions on how I could fix this? Thanks!
Edit: Using NFM instead of WFM makes it slightly better, but the bandwidth is actually more than 32khz, so the audio sounds pretty bad. | main | pifm modulation makes audio too quiet so i m trying to make a short range fm audio transmitter to play music in my car i ve gotten everything set up and working and it can broadcast a pretty powerful fm signal i m using an rtl sdr on my pc to test it out and there s one problem despite the signal being strong the volume of audio is really low compared to normal fm stations i can t find much information on google about it but i m pretty sure it s a problem with the modulation comparing the signal from the pi left and a local radio channel right you can see how the bandwidth of the pi s signal is much smaller mind i can still hear it fine i just have to turn the volume way up but that s a problem when i then change to a different channel and blow my ears out just another example i took audio clips from both stations at my sdr s normal volume level and while i can hear the local channel fine the pi s is barely audible i ve tried it with a few different music files and editing them with audacity to increase the volume but it doesn t seem to make a difference do you have any suggestions on how i could fix this thanks edit using nfm instead of wfm makes it slightly better but the bandwidth is actually more than so the audio sounds pretty bad | 1 |
60,238 | 25,044,946,707 | IssuesEvent | 2022-11-05 05:28:54 | MicrosoftDocs/azure-docs | https://api.github.com/repos/MicrosoftDocs/azure-docs | closed | Some links are not linking to expected content | cognitive-services/svc triaged cxp product-issue Pri2 awaiting-customer-response | The links on this page is expected to take the user to a page with specific information on what languages are supported by the service.
Clicking on the "Language service" link navigates to this page:
https://learn.microsoft.com/en-us/azure/cognitive-services/language-service/language-detection/overview
This is a very general page and does not contain details on the languages supported.
I think there are a couple of more links on the page with a similar problem.
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 1dde985d-e35f-83a6-2765-252a0fa4fa66
* Version Independent ID: 4cd65ebc-55c0-79bf-dd13-401df5cd2aef
* Content: [Language support - Azure Cognitive Services](https://learn.microsoft.com/en-us/azure/cognitive-services/language-support)
* Content Source: [articles/cognitive-services/language-support.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/cognitive-services/language-support.md)
* Service: **cognitive-services**
* GitHub Login: @PatrickFarley
* Microsoft Alias: **pafarley** | 1.0 | Some links are not linking to expected content - The links on this page is expected to take the user to a page with specific information on what languages are supported by the service.
Clicking on the "Language service" link navigates to this page:
https://learn.microsoft.com/en-us/azure/cognitive-services/language-service/language-detection/overview
This is a very general page and does not contain details on the languages supported.
I think there are a couple of more links on the page with a similar problem.
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 1dde985d-e35f-83a6-2765-252a0fa4fa66
* Version Independent ID: 4cd65ebc-55c0-79bf-dd13-401df5cd2aef
* Content: [Language support - Azure Cognitive Services](https://learn.microsoft.com/en-us/azure/cognitive-services/language-support)
* Content Source: [articles/cognitive-services/language-support.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/cognitive-services/language-support.md)
* Service: **cognitive-services**
* GitHub Login: @PatrickFarley
* Microsoft Alias: **pafarley** | non_main | some links are not linking to expected content the links on this page is expected to take the user to a page with specific information on what languages are supported by the service clicking on the language service link navigates to this page this is a very general page and does not contain details on the languages supported i think there are a couple of more links on the page with a similar problem document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source service cognitive services github login patrickfarley microsoft alias pafarley | 0 |
710,852 | 24,439,239,196 | IssuesEvent | 2022-10-06 13:35:58 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.domestika.org - video or audio doesn't play | priority-normal browser-fenix engine-gecko | <!-- @browser: Firefox Mobile 106.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:106.0) Gecko/106.0 Firefox/106.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/111923 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.domestika.org/en/courses/3439-course-3-make-smudge-and-erase-marks/units/10501-brush-studio
**Browser / Version**: Firefox Mobile 106.0
**Operating System**: Android 10
**Tested Another Browser**: Yes Other
**Problem type**: Video or audio doesn't play
**Description**: The video or audio does not play
**Steps to Reproduce**:
The site loaded completely and I accepted the cookies but the video won't play. Refreshing the page also doesn't help. Even opening in a new tab doesn't work either.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/10/88e68c37-86a5-405f-8ea5-4f90afa9b4eb.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20221002185807</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/10/33b0aab7-07ea-4a3e-b803-7776e2486e22)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.domestika.org - video or audio doesn't play - <!-- @browser: Firefox Mobile 106.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:106.0) Gecko/106.0 Firefox/106.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/111923 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.domestika.org/en/courses/3439-course-3-make-smudge-and-erase-marks/units/10501-brush-studio
**Browser / Version**: Firefox Mobile 106.0
**Operating System**: Android 10
**Tested Another Browser**: Yes Other
**Problem type**: Video or audio doesn't play
**Description**: The video or audio does not play
**Steps to Reproduce**:
The site loaded completely and I accepted the cookies but the video won't play. Refreshing the page also doesn't help. Even opening in a new tab doesn't work either.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/10/88e68c37-86a5-405f-8ea5-4f90afa9b4eb.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20221002185807</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/10/33b0aab7-07ea-4a3e-b803-7776e2486e22)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_main | video or audio doesn t play url browser version firefox mobile operating system android tested another browser yes other problem type video or audio doesn t play description the video or audio does not play steps to reproduce the site loaded completely and i accepted the cookies but the video won t play refreshing the page also doesn t help even opening in a new tab doesn t work either view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️ | 0 |
5,652 | 28,808,056,708 | IssuesEvent | 2023-05-03 00:35:13 | googleforgames/agones | https://api.github.com/repos/googleforgames/agones | closed | When NodeIp only has InternalIp, is it possible to provide a third way to expose services, such as through LB (VIP: VPort)? | help wanted good first issue question awaiting-maintainer | **Is your feature request related to a problem? Please describe.**
When an Agones is deployed privately and NodeIp only has InternalIp, the existing port exposure method cannot be accessed outside the cluster
**Describe the solution you'd like**
When an Agones is deployed privately and NodeIp only has InternalIp, is it possible to configure an LB according to yaml to access through LB (VIP: VIP: VPort) -> GameServerPod
**Describe alternatives you've considered**
When an Agones is deployed privately and the NodeIp only has InternalIp, GameServerAllocation can only get the InternalIp: Port of a GameServer, but it is still inaccessible outside the cluster. Therefore, consider mapping this GameServerPod: LB VIP: VPort to DS Container Port, so that the Client can access through VIP: VPort.
**Additional context**
clinet->LB->GameServerPod
| True | When NodeIp only has InternalIp, is it possible to provide a third way to expose services, such as through LB (VIP: VPort)? - **Is your feature request related to a problem? Please describe.**
When an Agones is deployed privately and NodeIp only has InternalIp, the existing port exposure method cannot be accessed outside the cluster
**Describe the solution you'd like**
When an Agones is deployed privately and NodeIp only has InternalIp, is it possible to configure an LB according to yaml to access through LB (VIP: VIP: VPort) -> GameServerPod
**Describe alternatives you've considered**
When an Agones is deployed privately and the NodeIp only has InternalIp, GameServerAllocation can only get the InternalIp: Port of a GameServer, but it is still inaccessible outside the cluster. Therefore, consider mapping this GameServerPod: LB VIP: VPort to DS Container Port, so that the Client can access through VIP: VPort.
**Additional context**
clinet->LB->GameServerPod
| main | when nodeip only has internalip is it possible to provide a third way to expose services such as through lb vip vport is your feature request related to a problem please describe when an agones is deployed privately and nodeip only has internalip the existing port exposure method cannot be accessed outside the cluster describe the solution you d like when an agones is deployed privately and nodeip only has internalip is it possible to configure an lb according to yaml to access through lb vip vip vport gameserverpod describe alternatives you ve considered when an agones is deployed privately and the nodeip only has internalip gameserverallocation can only get the internalip port of a gameserver but it is still inaccessible outside the cluster therefore consider mapping this gameserverpod lb vip vport to ds container port so that the client can access through vip vport additional context clinet lb gameserverpod | 1 |
4,799 | 24,734,089,976 | IssuesEvent | 2022-10-20 20:14:22 | bazelbuild/intellij | https://api.github.com/repos/bazelbuild/intellij | closed | Absolute main repo target expressions aren't accepted | type: bug product: IntelliJ awaiting-maintainer | ### Description of the bug:
Since https://github.com/bazelbuild/bazel/commit/b1113f801da87ed2e089ff4122aa13400c177804, the labels Bazel emits for targets in the main repo often start with `@//`. However, such targets aren't accepted by the IntelliJ plugin with the following error:
```
Error: Invalid target expression `@//foo:test`: Couldn't find package path
```
### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
Try to create a run configuration for `@//base:unit_tests` in the intellij repo.
### Which Intellij IDE are you using? Please provide the specific version.
IntelliJ Ultimate 2022.2.3
### What programming languages and tools are you using? Please provide specific versions.
Java
### What Bazel plugin version are you using?
2022.09.20.0.1-api-version-222
### Have you found anything relevant by searching the web?
_No response_
### Any other information, logs, or outputs that you want to share?
_No response_ | True | Absolute main repo target expressions aren't accepted - ### Description of the bug:
Since https://github.com/bazelbuild/bazel/commit/b1113f801da87ed2e089ff4122aa13400c177804, the labels Bazel emits for targets in the main repo often start with `@//`. However, such targets aren't accepted by the IntelliJ plugin with the following error:
```
Error: Invalid target expression `@//foo:test`: Couldn't find package path
```
### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
Try to create a run configuration for `@//base:unit_tests` in the intellij repo.
### Which Intellij IDE are you using? Please provide the specific version.
IntelliJ Ultimate 2022.2.3
### What programming languages and tools are you using? Please provide specific versions.
Java
### What Bazel plugin version are you using?
2022.09.20.0.1-api-version-222
### Have you found anything relevant by searching the web?
_No response_
### Any other information, logs, or outputs that you want to share?
_No response_ | main | absolute main repo target expressions aren t accepted description of the bug since the labels bazel emits for targets in the main repo often start with however such targets aren t accepted by the intellij plugin with the following error error invalid target expression foo test couldn t find package path what s the simplest easiest way to reproduce this bug please provide a minimal example if possible try to create a run configuration for base unit tests in the intellij repo which intellij ide are you using please provide the specific version intellij ultimate what programming languages and tools are you using please provide specific versions java what bazel plugin version are you using api version have you found anything relevant by searching the web no response any other information logs or outputs that you want to share no response | 1 |
681,599 | 23,317,256,027 | IssuesEvent | 2022-08-08 13:32:17 | wasmerio/wasmer | https://api.github.com/repos/wasmerio/wasmer | closed | Add support for Zig cc when compiling with `create-exe` | priority-high create-exe | We should add support for `zig cc` when compiling (if zig is available) instead of using clang, as it's better for cross-compilation. | 1.0 | Add support for Zig cc when compiling with `create-exe` - We should add support for `zig cc` when compiling (if zig is available) instead of using clang, as it's better for cross-compilation. | non_main | add support for zig cc when compiling with create exe we should add support for zig cc when compiling if zig is available instead of using clang as it s better for cross compilation | 0 |
17,356 | 12,308,617,938 | IssuesEvent | 2020-05-12 07:33:36 | Altinn/altinn-studio | https://api.github.com/repos/Altinn/altinn-studio | closed | Enable DeamonSet for Platform Apps | kind/user-story ops/infrastructure | Need to enable DeamonSet for Platform components, first in AT
Ensure an instance of each component on each node in the cluster
The DeamonSet must be configured with the defaults, no restrictions to run on spesific nodes.
## Considerations
- First start with a single component and check that there is one component pod on each node.
- Then test with the remaining platform component.
## Acceptance criteria
- [x] if a cluster has multiple nodes, an instance of each deployment should be present on each node in AT24.
## Tasks
- [x] Create template for daemonset.yaml in Altinn Studio Ops : https://dev.azure.com/brreg/_git/altinn-studio-ops?path=%2Fdeploy%2Faltinn-platform%2Faltinn-storage%2Ftemplates
- [x] Manual test
- [x] Documentation
Kubernetes documentation:
https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
Example:
https://www.magalix.com/blog/kubernetes-daemonsets-101
Kured is also deployed as an DeamonSet. | 1.0 | Enable DeamonSet for Platform Apps - Need to enable DeamonSet for Platform components, first in AT
Ensure an instance of each component on each node in the cluster
The DeamonSet must be configured with the defaults, no restrictions to run on spesific nodes.
## Considerations
- First start with a single component and check that there is one component pod on each node.
- Then test with the remaining platform component.
## Acceptance criteria
- [x] if a cluster has multiple nodes, an instance of each deployment should be present on each node in AT24.
## Tasks
- [x] Create template for daemonset.yaml in Altinn Studio Ops : https://dev.azure.com/brreg/_git/altinn-studio-ops?path=%2Fdeploy%2Faltinn-platform%2Faltinn-storage%2Ftemplates
- [x] Manual test
- [x] Documentation
Kubernetes documentation:
https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
Example:
https://www.magalix.com/blog/kubernetes-daemonsets-101
Kured is also deployed as an DeamonSet. | non_main | enable deamonset for platform apps need to enable deamonset for platform components first in at ensure an instance of each component on each node in the cluster the deamonset must be configured with the defaults no restrictions to run on spesific nodes considerations first start with a single component and check that there is one component pod on each node then test with the remaining platform component acceptance criteria if a cluster has multiple nodes an instance of each deployment should be present on each node in tasks create template for daemonset yaml in altinn studio ops manual test documentation kubernetes documentation example kured is also deployed as an deamonset | 0 |
2,711 | 9,531,849,487 | IssuesEvent | 2019-04-29 17:01:47 | codestation/qcma | https://api.github.com/repos/codestation/qcma | closed | [request] Add tickbox to disable notifications. | unmaintained | when using auto update database notifications happen quite often id like to be able to turn them off | True | [request] Add tickbox to disable notifications. - when using auto update database notifications happen quite often id like to be able to turn them off | main | add tickbox to disable notifications when using auto update database notifications happen quite often id like to be able to turn them off | 1 |
51,775 | 10,723,770,774 | IssuesEvent | 2019-10-27 21:01:10 | comphack/comp_hack | https://api.github.com/repos/comphack/comp_hack | closed | Parallel Boss Defeat Bug | bug code | (On the Re:Imagine server if this matters at all)
At seemingly random times since the latest content release, every single one of my/my demons abilities will just seem to stop working. I can click a skill, click a demon to summon/desummon, anything. It will look like it worked but it does absolutely nothing except let me incant the skill over and over to no effect. I just got a report in that this was happening to another player as well. The only similar thing I seem to get is "it happens after lag" or "it happens after zoning" and I'm having difficulty recreating it. The only way to fix it is to log off.
Sorry for just a blob of ranting instead of a real way to get this to happen, but that's all I got so far. Hopefully I'll have it happen again or get more reports of it soon. | 1.0 | Parallel Boss Defeat Bug - (On the Re:Imagine server if this matters at all)
At seemingly random times since the latest content release, every single one of my/my demons abilities will just seem to stop working. I can click a skill, click a demon to summon/desummon, anything. It will look like it worked but it does absolutely nothing except let me incant the skill over and over to no effect. I just got a report in that this was happening to another player as well. The only similar thing I seem to get is "it happens after lag" or "it happens after zoning" and I'm having difficulty recreating it. The only way to fix it is to log off.
Sorry for just a blob of ranting instead of a real way to get this to happen, but that's all I got so far. Hopefully I'll have it happen again or get more reports of it soon. | non_main | parallel boss defeat bug on the re imagine server if this matters at all at seemingly random times since the latest content release every single one of my my demons abilities will just seem to stop working i can click a skill click a demon to summon desummon anything it will look like it worked but it does absolutely nothing except let me incant the skill over and over to no effect i just got a report in that this was happening to another player as well the only similar thing i seem to get is it happens after lag or it happens after zoning and i m having difficulty recreating it the only way to fix it is to log off sorry for just a blob of ranting instead of a real way to get this to happen but that s all i got so far hopefully i ll have it happen again or get more reports of it soon | 0 |
59,570 | 7,261,489,605 | IssuesEvent | 2018-02-18 21:15:58 | PalouseRobosub/robosub | https://api.github.com/repos/PalouseRobosub/robosub | closed | Create Pull Request for MVP Particle Filter Code | Localization (Senior Design) | Create a pull request that moves the current particle filter code to master | 1.0 | Create Pull Request for MVP Particle Filter Code - Create a pull request that moves the current particle filter code to master | non_main | create pull request for mvp particle filter code create a pull request that moves the current particle filter code to master | 0 |
4,387 | 22,330,604,956 | IssuesEvent | 2022-06-14 14:15:08 | carbon-design-system/carbon | https://api.github.com/repos/carbon-design-system/carbon | closed | Add tooltip (title) to Clear icon button in Search component | type: enhancement 💡 role: ux 🍿 proposal: open status: waiting for maintainer response 💬 | _(note for maintainers, this was opened as a bug and converted to an enhancement proposal)_
### Package
carbon-components-react
### Browser
Chrome, Safari, Firefox, Edge
### Package version
v10.57.1
### React version
v18.2.0
### Description
There should be a tooltip for clear icon in the Search component as describe at:
https://github.com/carbon-design-system/carbon/issues/10233
### CodeSandbox example
Use the sandbox from live demo https://carbondesignsystem.com/components/search/usage#live-demo
### Steps to reproduce
1. Launch Sandbox from live demo (or use live demo).
2. Enter something in search input.
3. Hover over X (or clear).
When you hover over x (or clear), you should see a tooltip (title).
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
- [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems | True | Add tooltip (title) to Clear icon button in Search component - _(note for maintainers, this was opened as a bug and converted to an enhancement proposal)_
### Package
carbon-components-react
### Browser
Chrome, Safari, Firefox, Edge
### Package version
v10.57.1
### React version
v18.2.0
### Description
There should be a tooltip for clear icon in the Search component as describe at:
https://github.com/carbon-design-system/carbon/issues/10233
### CodeSandbox example
Use the sandbox from live demo https://carbondesignsystem.com/components/search/usage#live-demo
### Steps to reproduce
1. Launch Sandbox from live demo (or use live demo).
2. Enter something in search input.
3. Hover over X (or clear).
When you hover over x (or clear), you should see a tooltip (title).
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
- [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems | main | add tooltip title to clear icon button in search component note for maintainers this was opened as a bug and converted to an enhancement proposal package carbon components react browser chrome safari firefox edge package version react version description there should be a tooltip for clear icon in the search component as describe at codesandbox example use the sandbox from live demo steps to reproduce launch sandbox from live demo or use live demo enter something in search input hover over x or clear when you hover over x or clear you should see a tooltip title code of conduct i agree to follow this project s i checked the for duplicate problems | 1 |
3,524 | 13,850,811,885 | IssuesEvent | 2020-10-15 02:16:46 | backdrop-ops/contrib | https://api.github.com/repos/backdrop-ops/contrib | opened | Maintainer change request | Maintainer change request | I'd like to request admin access on the `voting_api` project. It looks like I am the maintainer and I was going to merge a PR but it says I don't have access:

https://github.com/backdrop-contrib/votingapi | True | Maintainer change request - I'd like to request admin access on the `voting_api` project. It looks like I am the maintainer and I was going to merge a PR but it says I don't have access:

https://github.com/backdrop-contrib/votingapi | main | maintainer change request i d like to request admin access on the voting api project it looks like i am the maintainer and i was going to merge a pr but it says i don t have access | 1 |
4,454 | 23,170,286,680 | IssuesEvent | 2022-07-30 15:51:15 | JENOT-ANT/ENIGMA | https://api.github.com/repos/JENOT-ANT/ENIGMA | closed | Wczytywanie 'krążków' z pliku | dev task(s) waiting-for-maintainers | Dopisać do ***[libraries > enigma.h](https://github.com/JENOT-ANT/ENIGMA/blob/main/libraries/enigma.h) > Disk > load*** metodę (funkcję) odpowiedzialną za wczytywanie opisu krążka z pliku tekstowego.
Funkcja powinna:
- przyjmować jako argument nazwę/ścieżkę pliku,
- generować tablicę opisującą krążek ze wczytanych danych,
- obliczać wielkość krążka,
- wywoływać metodę ***init*** krążka dla wygenerowanej tablicy i obliczonej wielkości krążka. | True | Wczytywanie 'krążków' z pliku - Dopisać do ***[libraries > enigma.h](https://github.com/JENOT-ANT/ENIGMA/blob/main/libraries/enigma.h) > Disk > load*** metodę (funkcję) odpowiedzialną za wczytywanie opisu krążka z pliku tekstowego.
Funkcja powinna:
- przyjmować jako argument nazwę/ścieżkę pliku,
- generować tablicę opisującą krążek ze wczytanych danych,
- obliczać wielkość krążka,
- wywoływać metodę ***init*** krążka dla wygenerowanej tablicy i obliczonej wielkości krążka. | main | wczytywanie krążków z pliku dopisać do disk load metodę funkcję odpowiedzialną za wczytywanie opisu krążka z pliku tekstowego funkcja powinna przyjmować jako argument nazwę ścieżkę pliku generować tablicę opisującą krążek ze wczytanych danych obliczać wielkość krążka wywoływać metodę init krążka dla wygenerowanej tablicy i obliczonej wielkości krążka | 1 |
461,533 | 13,231,843,646 | IssuesEvent | 2020-08-18 12:24:29 | eclipse/codewind | https://api.github.com/repos/eclipse/codewind | closed | Unable to update .cw-settings for non-debugable project | area/portal kind/bug priority/hot | <!-- Please fill out the following form to report a bug. If some fields do not apply to your situation, feel free to skip them.-->
**Codewind version:** 0.13.0. (But 0.12.0 is broken as well)
**OS:** macOS
**IDE extension version:** 0.13.0
**IDE version:** VS Code 1.46.0
**Description:**
I tried importing the following microservice into Codewind: https://github.com/sqshq/piggymetrics/tree/master/statistics-service
Since the project is required to be built manually, outside of the Docker build, I ran a `mvn package` before importing it into Codewind. Once I imported the project into Codewind (**Generic Docker, Java**), I tried to update the `.cw-settings` file so that the `target/` folder could be synced (so that the built jar could be picked up by the docker build in Codewind).
However, when I saved the file, I was shown the following error:
<img width="458" alt="Screen Shot 2020-06-10 at 4 28 35 PM" src="https://user-images.githubusercontent.com/6880023/84315302-799b6700-ab37-11ea-8d5a-ae777d20a0f9.png">
**Steps to reproduce:**
1. Git clone https://github.com/sqshq/piggymetrics
2. cd into `statistics-service` and run `mvn package`
3. Import the `statistics-service` folder into Codewind as a Generic Docker, Java project **not** Spring)
4. Build will fail
5. Modify `.cw-settings` to remove the `target/` folder from the list of ignored paths. You will get the error I screenshotted above
**Workaround:**
Importing the project as a Codewind spring project didn't show that error, but then I hit other issues, since this microservice isn't formatted to work with the Codewind spring template.
| 1.0 | Unable to update .cw-settings for non-debugable project - <!-- Please fill out the following form to report a bug. If some fields do not apply to your situation, feel free to skip them.-->
**Codewind version:** 0.13.0. (But 0.12.0 is broken as well)
**OS:** macOS
**IDE extension version:** 0.13.0
**IDE version:** VS Code 1.46.0
**Description:**
I tried importing the following microservice into Codewind: https://github.com/sqshq/piggymetrics/tree/master/statistics-service
Since the project is required to be built manually, outside of the Docker build, I ran a `mvn package` before importing it into Codewind. Once I imported the project into Codewind (**Generic Docker, Java**), I tried to update the `.cw-settings` file so that the `target/` folder could be synced (so that the built jar could be picked up by the docker build in Codewind).
However, when I saved the file, I was shown the following error:
<img width="458" alt="Screen Shot 2020-06-10 at 4 28 35 PM" src="https://user-images.githubusercontent.com/6880023/84315302-799b6700-ab37-11ea-8d5a-ae777d20a0f9.png">
**Steps to reproduce:**
1. Git clone https://github.com/sqshq/piggymetrics
2. cd into `statistics-service` and run `mvn package`
3. Import the `statistics-service` folder into Codewind as a Generic Docker, Java project **not** Spring)
4. Build will fail
5. Modify `.cw-settings` to remove the `target/` folder from the list of ignored paths. You will get the error I screenshotted above
**Workaround:**
Importing the project as a Codewind spring project didn't show that error, but then I hit other issues, since this microservice isn't formatted to work with the Codewind spring template.
| non_main | unable to update cw settings for non debugable project codewind version but is broken as well os macos ide extension version ide version vs code description i tried importing the following microservice into codewind since the project is required to be built manually outside of the docker build i ran a mvn package before importing it into codewind once i imported the project into codewind generic docker java i tried to update the cw settings file so that the target folder could be synced so that the built jar could be picked up by the docker build in codewind however when i saved the file i was shown the following error img width alt screen shot at pm src steps to reproduce git clone cd into statistics service and run mvn package import the statistics service folder into codewind as a generic docker java project not spring build will fail modify cw settings to remove the target folder from the list of ignored paths you will get the error i screenshotted above workaround importing the project as a codewind spring project didn t show that error but then i hit other issues since this microservice isn t formatted to work with the codewind spring template | 0 |
64,932 | 3,220,022,459 | IssuesEvent | 2015-10-08 13:06:38 | awesome-raccoons/gqt | https://api.github.com/repos/awesome-raccoons/gqt | closed | Extra lines in polygons with holes | bug medium priority | There's a line connecting the inner and outer ring in polygon((0 0, 10 0, 10 10, 0 10, 0 0),(4 4, 4 6, 6 6, 4 4)) | 1.0 | Extra lines in polygons with holes - There's a line connecting the inner and outer ring in polygon((0 0, 10 0, 10 10, 0 10, 0 0),(4 4, 4 6, 6 6, 4 4)) | non_main | extra lines in polygons with holes there s a line connecting the inner and outer ring in polygon | 0 |
9,299 | 8,583,602,416 | IssuesEvent | 2018-11-13 20:12:44 | terraform-providers/terraform-provider-aws | https://api.github.com/repos/terraform-providers/terraform-provider-aws | closed | aws_eip data source missing support for tags | enhancement good first issue help wanted service/ec2 | _This issue was originally opened by @MaximF as hashicorp/terraform#17361. It was migrated here as a result of the [provider split](https://www.hashicorp.com/blog/upcoming-provider-changes-in-terraform-0-10/). The original body of the issue is below._
<hr>
### Terraform Version
```
> terraform -v
Terraform v0.11.3
+ provider.aws v1.9.0
```
### Terraform Configuration Files
```hcl
data "aws_eip" "sample" {
tags {
Name ="A valuable IP"
}
}
```
### Crash Output
```hcl
Error: data.aws_eip.sample: : invalid or unknown key: tags
```
### Expected Behavior
Terraform would return an IP with a described "Name" tag value
### Actual Behavior
Error: data.aws_eip.sample: : invalid or unknown key: tags
### Steps to Reproduce
1. `terraform init`
2. `terraform apply`
### References
The same issue was fixed for `aws_eip` resource just over a month ago at hashicorp/terraform#16993.
Now it is a turn for `aws_eip` data source
| 1.0 | aws_eip data source missing support for tags - _This issue was originally opened by @MaximF as hashicorp/terraform#17361. It was migrated here as a result of the [provider split](https://www.hashicorp.com/blog/upcoming-provider-changes-in-terraform-0-10/). The original body of the issue is below._
<hr>
### Terraform Version
```
> terraform -v
Terraform v0.11.3
+ provider.aws v1.9.0
```
### Terraform Configuration Files
```hcl
data "aws_eip" "sample" {
tags {
Name ="A valuable IP"
}
}
```
### Crash Output
```hcl
Error: data.aws_eip.sample: : invalid or unknown key: tags
```
### Expected Behavior
Terraform would return an IP with a described "Name" tag value
### Actual Behavior
Error: data.aws_eip.sample: : invalid or unknown key: tags
### Steps to Reproduce
1. `terraform init`
2. `terraform apply`
### References
The same issue was fixed for `aws_eip` resource just over a month ago at hashicorp/terraform#16993.
Now it is a turn for `aws_eip` data source
| non_main | aws eip data source missing support for tags this issue was originally opened by maximf as hashicorp terraform it was migrated here as a result of the the original body of the issue is below terraform version terraform v terraform provider aws terraform configuration files hcl data aws eip sample tags name a valuable ip crash output hcl error data aws eip sample invalid or unknown key tags expected behavior terraform would return an ip with a described name tag value actual behavior error data aws eip sample invalid or unknown key tags steps to reproduce terraform init terraform apply references the same issue was fixed for aws eip resource just over a month ago at hashicorp terraform now it is a turn for aws eip data source | 0 |
12 | 2,515,086,652 | IssuesEvent | 2015-01-15 16:22:33 | simplesamlphp/simplesamlphp | https://api.github.com/repos/simplesamlphp/simplesamlphp | opened | Remove the SimpleSAML_Auth_Default class | enhancement maintainability | This one is tricky, as there's lots of code currently in use. Try to reallocate those parts into `SimpleSAML_Auth_Source` and `SimpleSAML_Session`. | True | Remove the SimpleSAML_Auth_Default class - This one is tricky, as there's lots of code currently in use. Try to reallocate those parts into `SimpleSAML_Auth_Source` and `SimpleSAML_Session`. | main | remove the simplesaml auth default class this one is tricky as there s lots of code currently in use try to reallocate those parts into simplesaml auth source and simplesaml session | 1 |
4,071 | 19,184,096,143 | IssuesEvent | 2021-12-04 22:47:10 | timkendall/tql | https://api.github.com/repos/timkendall/tql | closed | Improve codegen implementation | maintainability | Use the graphql visitor API and a TS AST builder (instead of manually constructing strings).
Examples:
- https://github.com/dotansimha/graphql-code-generator/blob/01d2329604adbca230ca9513764b228b239eaf07/packages/plugins/other/visitor-plugin-common/src/base-types-visitor.ts#L275
- https://github.com/dotansimha/graphql-code-generator/blob/01d2329604adbca230ca9513764b228b239eaf07/packages/plugins/other/visitor-plugin-common/src/utils.ts#L109
- https://github.com/dotansimha/graphql-code-generator/blob/master/packages/plugins/typescript/typescript/src/visitor.ts#L49
- https://github.com/dotansimha/graphql-code-generator/blob/01d2329604adbca230ca9513764b228b239eaf07/packages/plugins/java/kotlin/src/visitor.ts#L50
Some useful TS codegen libs:
- https://babeljs.io/docs/en/babel-types
- https://github.com/stephenh/ts-poet
- https://github.com/dsherret/ts-morph/tree/latest/packages/ts-morph
If we rewrite codegen in another language:
- https://github.com/carllerche/codegen
- https://github.com/apollographql/apollo-rs
- https://github.com/benjamn/ast-types
- https://github.com/swc-project/swc/blob/main/crates/swc_ecma_codegen/src/typescript.rs
- https://github.com/Jarred-Sumner/bun/blob/main/src/js_parser/js_parser.zig
Debuggers:
- https://astexplorer.net/
- https://ts-ast-viewer.com/# | True | Improve codegen implementation - Use the graphql visitor API and a TS AST builder (instead of manually constructing strings).
Examples:
- https://github.com/dotansimha/graphql-code-generator/blob/01d2329604adbca230ca9513764b228b239eaf07/packages/plugins/other/visitor-plugin-common/src/base-types-visitor.ts#L275
- https://github.com/dotansimha/graphql-code-generator/blob/01d2329604adbca230ca9513764b228b239eaf07/packages/plugins/other/visitor-plugin-common/src/utils.ts#L109
- https://github.com/dotansimha/graphql-code-generator/blob/master/packages/plugins/typescript/typescript/src/visitor.ts#L49
- https://github.com/dotansimha/graphql-code-generator/blob/01d2329604adbca230ca9513764b228b239eaf07/packages/plugins/java/kotlin/src/visitor.ts#L50
Some useful TS codegen libs:
- https://babeljs.io/docs/en/babel-types
- https://github.com/stephenh/ts-poet
- https://github.com/dsherret/ts-morph/tree/latest/packages/ts-morph
If we rewrite codegen in another language:
- https://github.com/carllerche/codegen
- https://github.com/apollographql/apollo-rs
- https://github.com/benjamn/ast-types
- https://github.com/swc-project/swc/blob/main/crates/swc_ecma_codegen/src/typescript.rs
- https://github.com/Jarred-Sumner/bun/blob/main/src/js_parser/js_parser.zig
Debuggers:
- https://astexplorer.net/
- https://ts-ast-viewer.com/# | main | improve codegen implementation use the graphql visitor api and a ts ast builder instead of manually constructing strings examples some useful ts codegen libs if we rewrite codegen in another language debuggers | 1 |
4,574 | 23,766,506,729 | IssuesEvent | 2022-09-01 13:12:45 | carbon-design-system/carbon | https://api.github.com/repos/carbon-design-system/carbon | closed | [Bug]: Sass variables not wrapped causing warning on react build | severity: 3 type: bug 🐛 role: dev 🤖 status: waiting for maintainer response 💬 | ### Package
@carbon/react
### Browser
_No response_
### Package version
1.11.0
### React version
17.2.0
### Description
I have warnings at application build caused by sass variables.
```
static/css/main.59a21da1.css from Css Minimizer plugin
postcss-calc: C:\Users\Julie\Documents\wikit\wikit-v2\wikit-console-v2\static\css\main.59a21da1.css:7353:421986: Lexical error on line 1: Unrecognized text.
Erroneous area:
1: 100% + $popover-offset
^.........^
```
After some research it seems that some variables are not wrapped in calc functions.
Affected sass files:
- In popover.scss file :
$popover-offset should be wrapped in calc functions .
For example: `transform: translate(calc(-1 * $popover-offset + 100%), -50%);`
should be: `transform: translate(calc(-1 * #{$popover-offset} + 100%), -50%);`
- modal.scss
variable spacing-07 should be wrapped in calc functions too
### Reproduction/example
-
### Steps to reproduce
- Create a react project with create react app and last react scripts version (5.0.1)
- import carbon react css (@use '@carbon/react';)
- build project (npm run build)
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
- [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems | True | [Bug]: Sass variables not wrapped causing warning on react build - ### Package
@carbon/react
### Browser
_No response_
### Package version
1.11.0
### React version
17.2.0
### Description
I have warnings at application build caused by sass variables.
```
static/css/main.59a21da1.css from Css Minimizer plugin
postcss-calc: C:\Users\Julie\Documents\wikit\wikit-v2\wikit-console-v2\static\css\main.59a21da1.css:7353:421986: Lexical error on line 1: Unrecognized text.
Erroneous area:
1: 100% + $popover-offset
^.........^
```
After some research it seems that some variables are not wrapped in calc functions.
Affected sass files:
- In popover.scss file :
$popover-offset should be wrapped in calc functions .
For example: `transform: translate(calc(-1 * $popover-offset + 100%), -50%);`
should be: `transform: translate(calc(-1 * #{$popover-offset} + 100%), -50%);`
- modal.scss
variable spacing-07 should be wrapped in calc functions too
### Reproduction/example
-
### Steps to reproduce
- Create a react project with create react app and last react scripts version (5.0.1)
- import carbon react css (@use '@carbon/react';)
- build project (npm run build)
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
- [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems | main | sass variables not wrapped causing warning on react build package carbon react browser no response package version react version description i have warnings at application build caused by sass variables static css main css from css minimizer plugin postcss calc c users julie documents wikit wikit wikit console static css main css lexical error on line unrecognized text erroneous area popover offset after some research it seems that some variables are not wrapped in calc functions affected sass files in popover scss file popover offset should be wrapped in calc functions for example transform translate calc popover offset should be transform translate calc popover offset modal scss variable spacing should be wrapped in calc functions too reproduction example steps to reproduce create a react project with create react app and last react scripts version import carbon react css use carbon react build project npm run build code of conduct i agree to follow this project s i checked the for duplicate problems | 1 |
421,430 | 28,315,941,183 | IssuesEvent | 2023-04-10 19:35:55 | WayScience/pyroptosis_signature | https://api.github.com/repos/WayScience/pyroptosis_signature | opened | Power surge causing CellProfiler to stop mid-run | documentation | Due to a power surge/outage during the SH-SY5Y run and the computer not plugged in properly to the UPS (uninterrupted power supply), the SQLite file is incomplete.
Since CellProfiler does not have the ability to pick up a run where it left off, that means to avoid spending more computational power and time rerunning the same images, we can split the LoadData CSV for this cell type into two parts:
a. Part A, where these are the images that were run
b. Part B, where these are the images that still need to be processed
We know the image set that CellProfiler stopped at using the log file from the first run. Since the LoadData CSV has the same number of rows as the images sets, we can split the data frame by row index as seen below:
```python
def split_loaddata_csv_by_row(
path_to_loadata: pathlib.Path,
output_dir: pathlib.Path,
row_index_val: int,
first_csv_name: str,
second_csv_name: str,
):
"""
This function will split a LoadData CSV in half (two groups) based on columns into two different CSVs.
This is can used for when you have different cell types on the same plate.
Parameters
----------
path_to_loadata : pathlib.Path
path to the LoadData CSV with IC functions to be edited
output_dir : pathlib.Path
path to directory where new LoadData CSVs will be saved to
row_index_val : int
index value to separate
first_csv_name : str
name of the LoadData CSV for the first group of the plate (name should include loaddata and state
that there are IC functions)
Example: loaddata_PBMC_with_ic
second_csv_name : str
name of the LoadData CSV for the second group of the plate (see example above)
"""
# load in LoadData CSV as pandas dataframe
loaddata_df = pd.read_csv(path_to_loadata)
# splitting dataframe by row index
df_1 = loaddata_df.iloc[:row_index_val,:]
df_2 = loaddata_df.iloc[row_index_val:,:]
# save new LoadData CSVs based on given name
df_1.to_csv(pathlib.Path(f"{output_dir}/{first_csv_name}.csv"), index=False)
df_2.to_csv(pathlib.Path(f"{output_dir}/{second_csv_name}.csv"), index=False)
print(f"{path_to_loadata.name} has been split into {first_csv_name}.csv and {second_csv_name}.csv!")
```
This allows for CellProfiler to start back where it left off.
| 1.0 | Power surge causing CellProfiler to stop mid-run - Due to a power surge/outage during the SH-SY5Y run and the computer not plugged in properly to the UPS (uninterrupted power supply), the SQLite file is incomplete.
Since CellProfiler does not have the ability to pick up a run where it left off, that means to avoid spending more computational power and time rerunning the same images, we can split the LoadData CSV for this cell type into two parts:
a. Part A, where these are the images that were run
b. Part B, where these are the images that still need to be processed
We know the image set that CellProfiler stopped at using the log file from the first run. Since the LoadData CSV has the same number of rows as the images sets, we can split the data frame by row index as seen below:
```python
def split_loaddata_csv_by_row(
path_to_loadata: pathlib.Path,
output_dir: pathlib.Path,
row_index_val: int,
first_csv_name: str,
second_csv_name: str,
):
"""
This function will split a LoadData CSV in half (two groups) based on columns into two different CSVs.
This is can used for when you have different cell types on the same plate.
Parameters
----------
path_to_loadata : pathlib.Path
path to the LoadData CSV with IC functions to be edited
output_dir : pathlib.Path
path to directory where new LoadData CSVs will be saved to
row_index_val : int
index value to separate
first_csv_name : str
name of the LoadData CSV for the first group of the plate (name should include loaddata and state
that there are IC functions)
Example: loaddata_PBMC_with_ic
second_csv_name : str
name of the LoadData CSV for the second group of the plate (see example above)
"""
# load in LoadData CSV as pandas dataframe
loaddata_df = pd.read_csv(path_to_loadata)
# splitting dataframe by row index
df_1 = loaddata_df.iloc[:row_index_val,:]
df_2 = loaddata_df.iloc[row_index_val:,:]
# save new LoadData CSVs based on given name
df_1.to_csv(pathlib.Path(f"{output_dir}/{first_csv_name}.csv"), index=False)
df_2.to_csv(pathlib.Path(f"{output_dir}/{second_csv_name}.csv"), index=False)
print(f"{path_to_loadata.name} has been split into {first_csv_name}.csv and {second_csv_name}.csv!")
```
This allows for CellProfiler to start back where it left off.
| non_main | power surge causing cellprofiler to stop mid run due to a power surge outage during the sh run and the computer not plugged in properly to the ups uninterrupted power supply the sqlite file is incomplete since cellprofiler does not have the ability to pick up a run where it left off that means to avoid spending more computational power and time rerunning the same images we can split the loaddata csv for this cell type into two parts a part a where these are the images that were run b part b where these are the images that still need to be processed we know the image set that cellprofiler stopped at using the log file from the first run since the loaddata csv has the same number of rows as the images sets we can split the data frame by row index as seen below python def split loaddata csv by row path to loadata pathlib path output dir pathlib path row index val int first csv name str second csv name str this function will split a loaddata csv in half two groups based on columns into two different csvs this is can used for when you have different cell types on the same plate parameters path to loadata pathlib path path to the loaddata csv with ic functions to be edited output dir pathlib path path to directory where new loaddata csvs will be saved to row index val int index value to separate first csv name str name of the loaddata csv for the first group of the plate name should include loaddata and state that there are ic functions example loaddata pbmc with ic second csv name str name of the loaddata csv for the second group of the plate see example above load in loaddata csv as pandas dataframe loaddata df pd read csv path to loadata splitting dataframe by row index df loaddata df iloc df loaddata df iloc save new loaddata csvs based on given name df to csv pathlib path f output dir first csv name csv index false df to csv pathlib path f output dir second csv name csv index false print f path to loadata name has been split into first csv name csv and second csv name csv this allows for cellprofiler to start back where it left off | 0 |
2,599 | 8,831,694,112 | IssuesEvent | 2019-01-04 00:13:29 | invertase/react-native-firebase | https://api.github.com/repos/invertase/react-native-firebase | closed | Storage PutFile returns downloadUrl as NULL | await-maintainer-feedback storage 👁investigate | ### Issue
Firebase uploading the file but
Running the below code, will upload the file successfully to firebase storage. HOWEVER it return return downloadUrl as NULL
```
firebase
.storage()
.ref(path)
.putFile(FILE_URI)
.on(
'state_changed',
snapshot => {},
err => {
console.log(err);
unsubscribe();
},
uploadedFile => {
url = uploadedFile.downloadUrl;
console.log(url);
unsubscribe();
}
);
```
### Environment
1. Application Target Platform:Both
2. Development Operating System: macOS
3. Build Tools: Xcode
4. `React Native` version: 0.55
5. `React Native Firebase` Version: 4.2
6. `Firebase` Module: Storage
7. Are you using `typescript`? No
| True | Storage PutFile returns downloadUrl as NULL - ### Issue
Firebase uploading the file but
Running the below code, will upload the file successfully to firebase storage. HOWEVER it return return downloadUrl as NULL
```
firebase
.storage()
.ref(path)
.putFile(FILE_URI)
.on(
'state_changed',
snapshot => {},
err => {
console.log(err);
unsubscribe();
},
uploadedFile => {
url = uploadedFile.downloadUrl;
console.log(url);
unsubscribe();
}
);
```
### Environment
1. Application Target Platform:Both
2. Development Operating System: macOS
3. Build Tools: Xcode
4. `React Native` version: 0.55
5. `React Native Firebase` Version: 4.2
6. `Firebase` Module: Storage
7. Are you using `typescript`? No
| main | storage putfile returns downloadurl as null issue firebase uploading the file but running the below code will upload the file successfully to firebase storage however it return return downloadurl as null firebase storage ref path putfile file uri on state changed snapshot err console log err unsubscribe uploadedfile url uploadedfile downloadurl console log url unsubscribe environment application target platform both development operating system macos build tools xcode react native version react native firebase version firebase module storage are you using typescript no | 1 |
348,298 | 10,440,664,231 | IssuesEvent | 2019-09-18 09:10:42 | moe-lk/sis-php | https://api.github.com/repos/moe-lk/sis-php | opened | School- School details | High Priority enhancement | Need to capture the following details of the school
Difficulty level
medium
Ethnicity
Grade span
Division
Electorate division
Divisional Secretariat division
GN Division
Reason for closing the school
| 1.0 | School- School details - Need to capture the following details of the school
Difficulty level
medium
Ethnicity
Grade span
Division
Electorate division
Divisional Secretariat division
GN Division
Reason for closing the school
| non_main | school school details need to capture the following details of the school difficulty level medium ethnicity grade span division electorate division divisional secretariat division gn division reason for closing the school | 0 |
2,164 | 7,529,633,760 | IssuesEvent | 2018-04-14 07:33:06 | fatlard1993/phaserload | https://api.github.com/repos/fatlard1993/phaserload | opened | add batch cleaning sprites | Feature maintainability useability | make a queue and flush every x ms by checking the concated coords against an object of matching keys | True | add batch cleaning sprites - make a queue and flush every x ms by checking the concated coords against an object of matching keys | main | add batch cleaning sprites make a queue and flush every x ms by checking the concated coords against an object of matching keys | 1 |
311,800 | 23,405,330,179 | IssuesEvent | 2022-08-12 12:16:16 | UnBArqDsw2022-1/2022_1_G5_SerFit | https://api.github.com/repos/UnBArqDsw2022-1/2022_1_G5_SerFit | closed | GOFs Comportamentais - Chain of responsibility | documentation | ### Contact Details (optional)
_No response_
### Summary
Documentar o GOF Comportamental Chain of Responsability
### Motivation
Gerar documento relativo ao GOF Comportamental Chain of Responsability
### Alternatives
_No response_
### Additional Context
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct | 1.0 | GOFs Comportamentais - Chain of responsibility - ### Contact Details (optional)
_No response_
### Summary
Documentar o GOF Comportamental Chain of Responsability
### Motivation
Gerar documento relativo ao GOF Comportamental Chain of Responsability
### Alternatives
_No response_
### Additional Context
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct | non_main | gofs comportamentais chain of responsibility contact details optional no response summary documentar o gof comportamental chain of responsability motivation gerar documento relativo ao gof comportamental chain of responsability alternatives no response additional context no response code of conduct i agree to follow this project s code of conduct | 0 |
237,573 | 18,164,453,421 | IssuesEvent | 2021-09-27 13:19:08 | trayio/mock-inspect | https://api.github.com/repos/trayio/mock-inspect | opened | Ignore tests folder when running mutation tests | documentation | I believe the `tests` folder doesn't need to be included in the report found here - https://trayio.github.io/mock-inspect/mutation/#tests | 1.0 | Ignore tests folder when running mutation tests - I believe the `tests` folder doesn't need to be included in the report found here - https://trayio.github.io/mock-inspect/mutation/#tests | non_main | ignore tests folder when running mutation tests i believe the tests folder doesn t need to be included in the report found here | 0 |
7,993 | 3,125,815,367 | IssuesEvent | 2015-09-08 04:09:04 | wblakecaldwell/profiler | https://api.github.com/repos/wblakecaldwell/profiler | closed | Update documentation to tell about 'Extra Service Info' | Documentation | Nowhere in the documentation is there any mention of 'Extra Service Info', or how to use it. Add it to the docs and an example service. | 1.0 | Update documentation to tell about 'Extra Service Info' - Nowhere in the documentation is there any mention of 'Extra Service Info', or how to use it. Add it to the docs and an example service. | non_main | update documentation to tell about extra service info nowhere in the documentation is there any mention of extra service info or how to use it add it to the docs and an example service | 0 |
2,799 | 10,022,406,626 | IssuesEvent | 2019-07-16 16:36:55 | RalfKoban/MiKo-Analyzers | https://api.github.com/repos/RalfKoban/MiKo-Analyzers | closed | Do not throw exceptions in finally blocks | Area: analyzer Area: maintainability feature | Exceptions should be thrown inside of `try` or `catch` blocks (or normal code blocks), but not inside of `finally` blocks.
Those `finally` blocks are intended to clean up stuff or to ensure that some code runs in all situations (except a few). | True | Do not throw exceptions in finally blocks - Exceptions should be thrown inside of `try` or `catch` blocks (or normal code blocks), but not inside of `finally` blocks.
Those `finally` blocks are intended to clean up stuff or to ensure that some code runs in all situations (except a few). | main | do not throw exceptions in finally blocks exceptions should be thrown inside of try or catch blocks or normal code blocks but not inside of finally blocks those finally blocks are intended to clean up stuff or to ensure that some code runs in all situations except a few | 1 |
4,280 | 21,526,569,982 | IssuesEvent | 2022-04-28 19:05:23 | BioArchLinux/Packages | https://api.github.com/repos/BioArchLinux/Packages | opened | [MAINTAIN] r-beclear: missing depends | maintain | <!--
Please report the error of one package in one issue! Use multi issues to report multi bugs.
Thanks!
-->
[dixonTest](https://cran.r-project.org/web/packages/dixonTest/index.html) for r-beclear
**Log of the bug**
<details>
```
put the output here
```
</details>
**Packages (please complete the following information):**
- Package Name: [e.g. iqtree]
**Description**
Add any other context about the problem here.
| True | [MAINTAIN] r-beclear: missing depends - <!--
Please report the error of one package in one issue! Use multi issues to report multi bugs.
Thanks!
-->
[dixonTest](https://cran.r-project.org/web/packages/dixonTest/index.html) for r-beclear
**Log of the bug**
<details>
```
put the output here
```
</details>
**Packages (please complete the following information):**
- Package Name: [e.g. iqtree]
**Description**
Add any other context about the problem here.
| main | r beclear missing depends please report the error of one package in one issue use multi issues to report multi bugs thanks for r beclear log of the bug put the output here packages please complete the following information package name description add any other context about the problem here | 1 |
66,140 | 20,016,489,873 | IssuesEvent | 2022-02-01 12:37:00 | primefaces/primefaces | https://api.github.com/repos/primefaces/primefaces | closed | DataTable: Dynamically rendered columns are not filterable/sortable | defect | **Describe the defect**
In our application we have cases where columns are added/removed dynamically by switching the "rendered"-attribute of each column. The column gets added but sorting and filtering is not working. This worked in previous versions of Primefaces. You can see the issue in the showcase "DataTable - Dynamic Columns" when adding a new column "representative". I have also added a reproducer which showcases this problem in smaller size.
**Reproducer**
[primefaces-test.zip](https://github.com/primefaces/primefaces/files/7977071/primefaces-test.zip)
https://www.primefaces.org/showcase/ui/data/datatable/columns.xhtml?jfwid=87ffd
**Environment:**
- PF Version: _11.0.0_
- Affected browsers: _ALL_
**To Reproduce**
Steps to reproduce the behavior:
1. Start Reproducer
2. Test Sorting/Filtering functionality on the initial column -> works
3. Press Button "Show Column"
4. Test Sorting/Filtering functionality on the 2nd column -> does not work
**Expected behavior**
Same behaviour as in Primefaces 10 (working).
**Further Info**
I am quite sure that the underlying problem is similar to #8159, that being, that the sortByAsMap/filterByAsMap values are not updated and therefore the newly added column having nothing to go by when sorting/filtering. I have opened a PR #8358 which I believe contains the fix to the problem.
#8159 was closed with the suggestion to reset the values of sortByAsMap/filterByAsMap to null, which **would also work** for this reproducer and our application. However, in my opinion this is might not be the best solution, since every application using this feature has to be migrated in order to work. If we decide to use this approach, we should leave a hint in the migration guide and update the showcase for PF11, where it is currently used.
The fix I was proposing in my PR adds two lines which were previously there in PF10 and got lost in transition to PF11. Adding those back fixes the problem. With that solution it wouldn't be necessary to update the showcase as well. But I am open for other ideas or arguments against my approach.
**Excerpt from my comment under #8159**
After some time debugging I noticed that the functions isColumnSortable() / isColumnFilterable() from UITable are always called from within the encodeColumnHeader function of the DataTableRenderer. In PF10 it was here, where the sortByAsMap property was newly set.
For some reason though a line was dropped from PF10 to PF11 which resets the value. Please see below:
Primefaces 10
default boolean isColumnSortable(FacesContext context, UIColumn column) {
Map<String, SortMeta> sortBy = getSortByAsMap();
if (sortBy.containsKey(column.getColumnKey())) {
return true;
}
SortMeta s = SortMeta.of(context, getVar(), column);
if (s == null) {
return false;
}
// unlikely to happen, in case columns change between two ajax requests
sortBy.put(s.getColumnKey(), s);
setSortByAsMap(sortBy);
return true;
}
Primefaces 11
default boolean isColumnSortable(FacesContext context, UIColumn column) {
Map<String, SortMeta> sortBy = getSortByAsMap();
if (sortBy.containsKey(column.getColumnKey())) {
return true;
}
// lazy init - happens in cases where the column is initially not rendered
SortMeta s = SortMeta.of(context, getVar(), column);
if (s != null) {
sortBy.put(s.getColumnKey(), s);
}
// setSortByAsMap(sortBy); is missing here
return s != null;
}
Although isColumnFilterable looks different to isColumnSortable in PF10, in PF11 they almost look identical. Adding a setFilterByAsMap(filterBy) at the same position as above fixed the filtering for me as well.
| 1.0 | DataTable: Dynamically rendered columns are not filterable/sortable - **Describe the defect**
In our application we have cases where columns are added/removed dynamically by switching the "rendered"-attribute of each column. The column gets added but sorting and filtering is not working. This worked in previous versions of Primefaces. You can see the issue in the showcase "DataTable - Dynamic Columns" when adding a new column "representative". I have also added a reproducer which showcases this problem in smaller size.
**Reproducer**
[primefaces-test.zip](https://github.com/primefaces/primefaces/files/7977071/primefaces-test.zip)
https://www.primefaces.org/showcase/ui/data/datatable/columns.xhtml?jfwid=87ffd
**Environment:**
- PF Version: _11.0.0_
- Affected browsers: _ALL_
**To Reproduce**
Steps to reproduce the behavior:
1. Start Reproducer
2. Test Sorting/Filtering functionality on the initial column -> works
3. Press Button "Show Column"
4. Test Sorting/Filtering functionality on the 2nd column -> does not work
**Expected behavior**
Same behaviour as in Primefaces 10 (working).
**Further Info**
I am quite sure that the underlying problem is similar to #8159, that being, that the sortByAsMap/filterByAsMap values are not updated and therefore the newly added column having nothing to go by when sorting/filtering. I have opened a PR #8358 which I believe contains the fix to the problem.
#8159 was closed with the suggestion to reset the values of sortByAsMap/filterByAsMap to null, which **would also work** for this reproducer and our application. However, in my opinion this is might not be the best solution, since every application using this feature has to be migrated in order to work. If we decide to use this approach, we should leave a hint in the migration guide and update the showcase for PF11, where it is currently used.
The fix I was proposing in my PR adds two lines which were previously there in PF10 and got lost in transition to PF11. Adding those back fixes the problem. With that solution it wouldn't be necessary to update the showcase as well. But I am open for other ideas or arguments against my approach.
**Excerpt from my comment under #8159**
After some time debugging I noticed that the functions isColumnSortable() / isColumnFilterable() from UITable are always called from within the encodeColumnHeader function of the DataTableRenderer. In PF10 it was here, where the sortByAsMap property was newly set.
For some reason though a line was dropped from PF10 to PF11 which resets the value. Please see below:
Primefaces 10
default boolean isColumnSortable(FacesContext context, UIColumn column) {
Map<String, SortMeta> sortBy = getSortByAsMap();
if (sortBy.containsKey(column.getColumnKey())) {
return true;
}
SortMeta s = SortMeta.of(context, getVar(), column);
if (s == null) {
return false;
}
// unlikely to happen, in case columns change between two ajax requests
sortBy.put(s.getColumnKey(), s);
setSortByAsMap(sortBy);
return true;
}
Primefaces 11
default boolean isColumnSortable(FacesContext context, UIColumn column) {
Map<String, SortMeta> sortBy = getSortByAsMap();
if (sortBy.containsKey(column.getColumnKey())) {
return true;
}
// lazy init - happens in cases where the column is initially not rendered
SortMeta s = SortMeta.of(context, getVar(), column);
if (s != null) {
sortBy.put(s.getColumnKey(), s);
}
// setSortByAsMap(sortBy); is missing here
return s != null;
}
Although isColumnFilterable looks different to isColumnSortable in PF10, in PF11 they almost look identical. Adding a setFilterByAsMap(filterBy) at the same position as above fixed the filtering for me as well.
| non_main | datatable dynamically rendered columns are not filterable sortable describe the defect in our application we have cases where columns are added removed dynamically by switching the rendered attribute of each column the column gets added but sorting and filtering is not working this worked in previous versions of primefaces you can see the issue in the showcase datatable dynamic columns when adding a new column representative i have also added a reproducer which showcases this problem in smaller size reproducer environment pf version affected browsers all to reproduce steps to reproduce the behavior start reproducer test sorting filtering functionality on the initial column works press button show column test sorting filtering functionality on the column does not work expected behavior same behaviour as in primefaces working further info i am quite sure that the underlying problem is similar to that being that the sortbyasmap filterbyasmap values are not updated and therefore the newly added column having nothing to go by when sorting filtering i have opened a pr which i believe contains the fix to the problem was closed with the suggestion to reset the values of sortbyasmap filterbyasmap to null which would also work for this reproducer and our application however in my opinion this is might not be the best solution since every application using this feature has to be migrated in order to work if we decide to use this approach we should leave a hint in the migration guide and update the showcase for where it is currently used the fix i was proposing in my pr adds two lines which were previously there in and got lost in transition to adding those back fixes the problem with that solution it wouldn t be necessary to update the showcase as well but i am open for other ideas or arguments against my approach excerpt from my comment under after some time debugging i noticed that the functions iscolumnsortable iscolumnfilterable from uitable are always called from within the encodecolumnheader function of the datatablerenderer in it was here where the sortbyasmap property was newly set for some reason though a line was dropped from to which resets the value please see below primefaces default boolean iscolumnsortable facescontext context uicolumn column map sortby getsortbyasmap if sortby containskey column getcolumnkey return true sortmeta s sortmeta of context getvar column if s null return false unlikely to happen in case columns change between two ajax requests sortby put s getcolumnkey s setsortbyasmap sortby return true primefaces default boolean iscolumnsortable facescontext context uicolumn column map sortby getsortbyasmap if sortby containskey column getcolumnkey return true lazy init happens in cases where the column is initially not rendered sortmeta s sortmeta of context getvar column if s null sortby put s getcolumnkey s setsortbyasmap sortby is missing here return s null although iscolumnfilterable looks different to iscolumnsortable in in they almost look identical adding a setfilterbyasmap filterby at the same position as above fixed the filtering for me as well | 0 |
57,226 | 14,040,469,454 | IssuesEvent | 2020-11-01 02:46:34 | YXL76/YXL-Blog | https://api.github.com/repos/YXL76/YXL-Blog | closed | Use Wsl to Build a Development Environment | YXL's Blog | Gitalk use-wsl-to-build-a-development-environment-gitalk | https://www.yxl76.net/en/post/use-wsl-to-build-a-development-environment/
Summarize my experience in configuring WSL | 1.0 | Use Wsl to Build a Development Environment | YXL's Blog - https://www.yxl76.net/en/post/use-wsl-to-build-a-development-environment/
Summarize my experience in configuring WSL | non_main | use wsl to build a development environment yxl s blog summarize my experience in configuring wsl | 0 |
3,051 | 11,412,242,480 | IssuesEvent | 2020-02-01 11:50:46 | precice/precice | https://api.github.com/repos/precice/precice | opened | Support Multiple SolverInterfaces | breaking change enhancement maintainability usability | # Problem
Sometimes it may be required to create multiple instances of the `SolverInterface`.
This is currently not fully supported and will lead to weird behaviour in some instances.
Example for this issue #378
# Solution
1. Remove the majority of static state from the preCICE library. Static state was occasionally introduced to simplify the design of some features, but it is not required in the vast majority of cases.
Main issue #385
2. Prevent simultaniuous instance of the `SolverInterface`s describing the same Participant (same Name)
3. Port bindings to allow multiple instantiations rather than a single one. | True | Support Multiple SolverInterfaces - # Problem
Sometimes it may be required to create multiple instances of the `SolverInterface`.
This is currently not fully supported and will lead to weird behaviour in some instances.
Example for this issue #378
# Solution
1. Remove the majority of static state from the preCICE library. Static state was occasionally introduced to simplify the design of some features, but it is not required in the vast majority of cases.
Main issue #385
2. Prevent simultaniuous instance of the `SolverInterface`s describing the same Participant (same Name)
3. Port bindings to allow multiple instantiations rather than a single one. | main | support multiple solverinterfaces problem sometimes it may be required to create multiple instances of the solverinterface this is currently not fully supported and will lead to weird behaviour in some instances example for this issue solution remove the majority of static state from the precice library static state was occasionally introduced to simplify the design of some features but it is not required in the vast majority of cases main issue prevent simultaniuous instance of the solverinterface s describing the same participant same name port bindings to allow multiple instantiations rather than a single one | 1 |
1,950 | 6,654,831,267 | IssuesEvent | 2017-09-29 14:15:43 | ddeboer/imap | https://api.github.com/repos/ddeboer/imap | closed | $message->getAttachments() doesn't recognize some attachments | awaiting maintainer response bug | ```
Delivered-To: xxx@yyy.cz
Received: by 10.202.195.68 with SMTP id t65csp871342oif;
Sun, 15 Feb 2015 02:21:54 -0800 (PST)
X-Received: by 10.236.209.35 with SMTP id r23mr13006152yho.26.1423995713978;
Sun, 15 Feb 2015 02:21:53 -0800 (PST)
Return-Path: <xxx+bncBCCLZVVTVAARBQHGQGTQKGQEWCQEDSQ@yyy.cz>
Received: from mail-yk0-x245.google.com (mail-yk0-x245.google.com. [2607:f8b0:4002:c07::245])
by mx.google.com with ESMTPS id h70si1767496yhq.9.2015.02.15.02.21.52
for <xxx@yyy.cz>
(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
Sun, 15 Feb 2015 02:21:53 -0800 (PST)
Received-SPF: pass (google.com: domain of xxx+bncBCCLZVVTVAARBQHGQGTQKGQEWCQEDSQ@yyy.cz designates 2607:f8b0:4002:c07::245 as permitted sender) client-ip=2607:f8b0:4002:c07::245;
Authentication-Results: mx.google.com;
spf=pass (google.com: domain of xxx+bncBCCLZVVTVAARBQHGQGTQKGQEWCQEDSQ@yyy.cz designates 2607:f8b0:4002:c07::245 as permitted sender) smtp.mail=xxx+bncBCCLZVVTVAARBQHGQGTQKGQEWCQEDSQ@yyy.cz;
dkim=pass header.i=@yyy.cz
Received: by mail-yk0-f197.google.com with SMTP id 19sf76654028ykq.0
for <xxx@yyy.cz>; Sun, 15 Feb 2015 02:21:52 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
d=yyy.cz; s=google;
h=sender:mime-version:message-id:date:subject:from:to:content-type
:content-disposition:content-transfer-encoding:x-original-sender
:x-original-authentication-results:precedence:mailing-list:list-id
:list-help:reply-to;
bh=CplHfCqscVv63ZCvirQ+kOAnaGy+fQ75nMYfE7lYq9k=;
b=DPypTwmlwUGYaPk2qqgJqjKDY8Ep3k9qefc6KQQC86Eah/OGM/Garig6jT13iLwyET
FtYsvRn2T8w64BnJyOPecm78M8L2huzBewNvoelkeWJ/iOy7Q6aBvs6QRfSlofEj5h6J
fY+smO5ygB3vV83l940XlOny+DBIqVpOEbMHY=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
d=1e100.net; s=20130820;
h=sender:x-gm-message-state:mime-version:message-id:date:subject:from
:to:content-type:content-disposition:content-transfer-encoding
:x-original-sender:x-original-authentication-results:precedence
:mailing-list:list-id:list-help:reply-to;
bh=CplHfCqscVv63ZCvirQ+kOAnaGy+fQ75nMYfE7lYq9k=;
b=PsyED0jEeSUp2AIAVSDPml90JG3aUJohwjIW25jlclqk1URF/aE8cqXQ18ZRXb+aN0
NrQMYWpzaTH7Lhm8P4zhx4RwSpg/5ARF3DcBn1sq3pvrbyq+EPb8VVCRaNzFfBDgvyhy
PzjAOcj+ePn5JHgF72Vcwe1otXzrNjdwmOlac8InvAXB7o3m9BJSJDRXDr1njey6nTrs
rxkRG/m00hudFhUfS8mIOcBDzRQlxhv2Qyieqx9MOD5GWx3yahNIdOlxGMp2aKygIjdi
Ido74o8k7Toxg9+i8ezRe0u2PPPXKvcvS6C3jfDFur3itznWrigANhKx72579XfwwUG5
hBGw==
Sender: xxx@yyy.cz
X-Gm-Message-State: ALoCoQk7KfRa8eCPbeV/RdlsyKq6gQuK7klaYSAFQPzgRNVhuflMuZp/M0VpRUca9zwJDjYrEv/u
X-Received: by 10.236.36.39 with SMTP id v27mr16650675yha.24.1423995712566;
Sun, 15 Feb 2015 02:21:52 -0800 (PST)
X-BeenThere: xxx@yyy.cz
Received: by 10.107.170.129 with SMTP id g1ls1295636ioj.69.gmail; Sun, 15 Feb
2015 02:21:52 -0800 (PST)
X-Received: by 10.107.168.207 with SMTP id e76mr22648201ioj.60.1423995712409;
Sun, 15 Feb 2015 02:21:52 -0800 (PST)
Received: from mail-ie0-f201.google.com (mail-ie0-f201.google.com. [209.85.223.201])
by mx.google.com with ESMTPS id s10si8220288igg.4.2015.02.15.02.21.52
for <xxx@yyy.cz>
(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
Sun, 15 Feb 2015 02:21:52 -0800 (PST)
Received-SPF: pass (google.com: domain of noreply-dmarc-support@google.com designates 209.85.223.201 as permitted sender) client-ip=209.85.223.201;
Received: by iecrl12 with SMTP id rl12so7777265iec.0
for <xxx@yyy.cz>; Sun, 15 Feb 2015 02:21:52 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.182.50.161 with SMTP id d1mr16824193obo.28.1423995711904;
Sun, 15 Feb 2015 02:21:51 -0800 (PST)
Message-ID: <2244696771454641389@google.com>
Date: Sun, 15 Feb 2015 10:21:51 +0000
Subject: Report domain: yyy.cz Submitter: google.com Report-ID: 2244696771454641389
From: noreply-dmarc-support via xxx <xxx@yyy.cz>
To: xxx@yyy.cz
Content-Type: application/zip;
name="google.com!yyy.cz!1423872000!1423958399.zip"
Content-Disposition: attachment;
filename="google.com!yyy.cz!1423872000!1423958399.zip"
Content-Transfer-Encoding: base64
X-Original-Sender: noreply-dmarc-support@google.com
X-Original-Authentication-Results: mx.google.com; spf=pass (google.com:
domain of noreply-dmarc-support@google.com designates 209.85.223.201 as
permitted sender) smtp.mail=noreply-dmarc-support@google.com; dkim=pass
header.i=@google.com; dmarc=pass (p=REJECT dis=NONE) header.from=google.com
Precedence: list
Mailing-list: list xxx@yyy.cz; contact xxx+owners@yyy.cz
List-ID: <xxx.yyy.cz>
X-Google-Group-Id: 98899750969
List-Help: <http://support.google.com/a/yyy.cz/bin/topic.py&topic=25838>, <mailto:xxx+help@yyy.cz>
X-Original-From: noreply-dmarc-support@google.com
Reply-To: noreply-dmarc-support@google.com
UEsDBAoAAAAIABRPT0bdJB+DSwIAALgKAAAuAAAAZ29vZ2xlLmNvbSFzdW5mb3guY3ohMTQyMzg3
MjAwMCExNDIzOTU4Mzk5LnhtbO1WwY6bMBC971dEuQcDIQSQQ3rqF7RnZIwh7oJt2WY32a+viQ2h
u9moqnarqOop8Gbmjd/4OQbuj127eCJSUc52y8DzlwvCMK8oa3bL79++rpLlYp8/wJqQqkT4MX9Y
LKAkgktddESjCmk0YAblsikY6kjecN60xMO8g2ACbQ7pEG1zxg1De1pVHZJ4pXox0H2Zl9k8V3PU
EhWYM42wLiireX7QWmQAuErvUgkQKCkDiKlnIj1x2tunXRjF8SbxDfFbMtvFaaJVHoZRFKfxdhtE
myiOgnWSQnAJ23SjmxQSscYpM1BJGsryIArXyTb0fdPMImOcsOocTTfJOjWUw7slA7+yTd3mA4aC
txSfCtGXLVUHMi2Em1GxXPVGytHDL4bMIjaMqkfa5RIC++BAJeozNvxaSOSS/CBYQyAcoi6QGjGB
dR4MyoaH80qvrcrMEnM5LlDy52kEivcSk4KKPIz9bVYnpZ9Fvr/OsB9kWbgOTa8pZSzCvGemLQT2
YYRdZ/KE2t6MrxoDw0yoElxRbTxtvMaImckMmeUNIxFIKZMwTceJr11gGtFM7aueZr9GjZBWhGla
U3OiprIDQRWRRS15N9+nOex43lRD1OtDIYnqW30hfLXY2xZw7h4YnCT3Mqma08GZ3g+gvhgMvFYy
JI82+R3HpL4XbDdesIm84SB/tE9Gr99wSm3+k646xQbu0Sl/uptW0Sfu5tXzH6b3dP7vd1f/+vl/
KU83eRnpzbX6uY5JzMeJZ25PLwji920S/r8m/tVrAoLLR+hPUEsBAgoACgAAAAgAFE9PRt0kH4NL
AgAAuAoAAC4AAAAAAAAAAAAAAAAAAAAAAGdvb2dsZS5jb20hc3VuZm94LmN6ITE0MjM4NzIwMDAh
MTQyMzk1ODM5OS54bWxQSwUGAAAAAAEAAQBcAAAAlwIAAAAA
```
| True | $message->getAttachments() doesn't recognize some attachments - ```
Delivered-To: xxx@yyy.cz
Received: by 10.202.195.68 with SMTP id t65csp871342oif;
Sun, 15 Feb 2015 02:21:54 -0800 (PST)
X-Received: by 10.236.209.35 with SMTP id r23mr13006152yho.26.1423995713978;
Sun, 15 Feb 2015 02:21:53 -0800 (PST)
Return-Path: <xxx+bncBCCLZVVTVAARBQHGQGTQKGQEWCQEDSQ@yyy.cz>
Received: from mail-yk0-x245.google.com (mail-yk0-x245.google.com. [2607:f8b0:4002:c07::245])
by mx.google.com with ESMTPS id h70si1767496yhq.9.2015.02.15.02.21.52
for <xxx@yyy.cz>
(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
Sun, 15 Feb 2015 02:21:53 -0800 (PST)
Received-SPF: pass (google.com: domain of xxx+bncBCCLZVVTVAARBQHGQGTQKGQEWCQEDSQ@yyy.cz designates 2607:f8b0:4002:c07::245 as permitted sender) client-ip=2607:f8b0:4002:c07::245;
Authentication-Results: mx.google.com;
spf=pass (google.com: domain of xxx+bncBCCLZVVTVAARBQHGQGTQKGQEWCQEDSQ@yyy.cz designates 2607:f8b0:4002:c07::245 as permitted sender) smtp.mail=xxx+bncBCCLZVVTVAARBQHGQGTQKGQEWCQEDSQ@yyy.cz;
dkim=pass header.i=@yyy.cz
Received: by mail-yk0-f197.google.com with SMTP id 19sf76654028ykq.0
for <xxx@yyy.cz>; Sun, 15 Feb 2015 02:21:52 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
d=yyy.cz; s=google;
h=sender:mime-version:message-id:date:subject:from:to:content-type
:content-disposition:content-transfer-encoding:x-original-sender
:x-original-authentication-results:precedence:mailing-list:list-id
:list-help:reply-to;
bh=CplHfCqscVv63ZCvirQ+kOAnaGy+fQ75nMYfE7lYq9k=;
b=DPypTwmlwUGYaPk2qqgJqjKDY8Ep3k9qefc6KQQC86Eah/OGM/Garig6jT13iLwyET
FtYsvRn2T8w64BnJyOPecm78M8L2huzBewNvoelkeWJ/iOy7Q6aBvs6QRfSlofEj5h6J
fY+smO5ygB3vV83l940XlOny+DBIqVpOEbMHY=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
d=1e100.net; s=20130820;
h=sender:x-gm-message-state:mime-version:message-id:date:subject:from
:to:content-type:content-disposition:content-transfer-encoding
:x-original-sender:x-original-authentication-results:precedence
:mailing-list:list-id:list-help:reply-to;
bh=CplHfCqscVv63ZCvirQ+kOAnaGy+fQ75nMYfE7lYq9k=;
b=PsyED0jEeSUp2AIAVSDPml90JG3aUJohwjIW25jlclqk1URF/aE8cqXQ18ZRXb+aN0
NrQMYWpzaTH7Lhm8P4zhx4RwSpg/5ARF3DcBn1sq3pvrbyq+EPb8VVCRaNzFfBDgvyhy
PzjAOcj+ePn5JHgF72Vcwe1otXzrNjdwmOlac8InvAXB7o3m9BJSJDRXDr1njey6nTrs
rxkRG/m00hudFhUfS8mIOcBDzRQlxhv2Qyieqx9MOD5GWx3yahNIdOlxGMp2aKygIjdi
Ido74o8k7Toxg9+i8ezRe0u2PPPXKvcvS6C3jfDFur3itznWrigANhKx72579XfwwUG5
hBGw==
Sender: xxx@yyy.cz
X-Gm-Message-State: ALoCoQk7KfRa8eCPbeV/RdlsyKq6gQuK7klaYSAFQPzgRNVhuflMuZp/M0VpRUca9zwJDjYrEv/u
X-Received: by 10.236.36.39 with SMTP id v27mr16650675yha.24.1423995712566;
Sun, 15 Feb 2015 02:21:52 -0800 (PST)
X-BeenThere: xxx@yyy.cz
Received: by 10.107.170.129 with SMTP id g1ls1295636ioj.69.gmail; Sun, 15 Feb
2015 02:21:52 -0800 (PST)
X-Received: by 10.107.168.207 with SMTP id e76mr22648201ioj.60.1423995712409;
Sun, 15 Feb 2015 02:21:52 -0800 (PST)
Received: from mail-ie0-f201.google.com (mail-ie0-f201.google.com. [209.85.223.201])
by mx.google.com with ESMTPS id s10si8220288igg.4.2015.02.15.02.21.52
for <xxx@yyy.cz>
(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
Sun, 15 Feb 2015 02:21:52 -0800 (PST)
Received-SPF: pass (google.com: domain of noreply-dmarc-support@google.com designates 209.85.223.201 as permitted sender) client-ip=209.85.223.201;
Received: by iecrl12 with SMTP id rl12so7777265iec.0
for <xxx@yyy.cz>; Sun, 15 Feb 2015 02:21:52 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.182.50.161 with SMTP id d1mr16824193obo.28.1423995711904;
Sun, 15 Feb 2015 02:21:51 -0800 (PST)
Message-ID: <2244696771454641389@google.com>
Date: Sun, 15 Feb 2015 10:21:51 +0000
Subject: Report domain: yyy.cz Submitter: google.com Report-ID: 2244696771454641389
From: noreply-dmarc-support via xxx <xxx@yyy.cz>
To: xxx@yyy.cz
Content-Type: application/zip;
name="google.com!yyy.cz!1423872000!1423958399.zip"
Content-Disposition: attachment;
filename="google.com!yyy.cz!1423872000!1423958399.zip"
Content-Transfer-Encoding: base64
X-Original-Sender: noreply-dmarc-support@google.com
X-Original-Authentication-Results: mx.google.com; spf=pass (google.com:
domain of noreply-dmarc-support@google.com designates 209.85.223.201 as
permitted sender) smtp.mail=noreply-dmarc-support@google.com; dkim=pass
header.i=@google.com; dmarc=pass (p=REJECT dis=NONE) header.from=google.com
Precedence: list
Mailing-list: list xxx@yyy.cz; contact xxx+owners@yyy.cz
List-ID: <xxx.yyy.cz>
X-Google-Group-Id: 98899750969
List-Help: <http://support.google.com/a/yyy.cz/bin/topic.py&topic=25838>, <mailto:xxx+help@yyy.cz>
X-Original-From: noreply-dmarc-support@google.com
Reply-To: noreply-dmarc-support@google.com
UEsDBAoAAAAIABRPT0bdJB+DSwIAALgKAAAuAAAAZ29vZ2xlLmNvbSFzdW5mb3guY3ohMTQyMzg3
MjAwMCExNDIzOTU4Mzk5LnhtbO1WwY6bMBC971dEuQcDIQSQQ3rqF7RnZIwh7oJt2WY32a+viQ2h
u9moqnarqOop8Gbmjd/4OQbuj127eCJSUc52y8DzlwvCMK8oa3bL79++rpLlYp8/wJqQqkT4MX9Y
LKAkgktddESjCmk0YAblsikY6kjecN60xMO8g2ACbQ7pEG1zxg1De1pVHZJ4pXox0H2Zl9k8V3PU
EhWYM42wLiireX7QWmQAuErvUgkQKCkDiKlnIj1x2tunXRjF8SbxDfFbMtvFaaJVHoZRFKfxdhtE
myiOgnWSQnAJ23SjmxQSscYpM1BJGsryIArXyTb0fdPMImOcsOocTTfJOjWUw7slA7+yTd3mA4aC
txSfCtGXLVUHMi2Em1GxXPVGytHDL4bMIjaMqkfa5RIC++BAJeozNvxaSOSS/CBYQyAcoi6QGjGB
dR4MyoaH80qvrcrMEnM5LlDy52kEivcSk4KKPIz9bVYnpZ9Fvr/OsB9kWbgOTa8pZSzCvGemLQT2
YYRdZ/KE2t6MrxoDw0yoElxRbTxtvMaImckMmeUNIxFIKZMwTceJr11gGtFM7aueZr9GjZBWhGla
U3OiprIDQRWRRS15N9+nOex43lRD1OtDIYnqW30hfLXY2xZw7h4YnCT3Mqma08GZ3g+gvhgMvFYy
JI82+R3HpL4XbDdesIm84SB/tE9Gr99wSm3+k646xQbu0Sl/uptW0Sfu5tXzH6b3dP7vd1f/+vl/
KU83eRnpzbX6uY5JzMeJZ25PLwji920S/r8m/tVrAoLLR+hPUEsBAgoACgAAAAgAFE9PRt0kH4NL
AgAAuAoAAC4AAAAAAAAAAAAAAAAAAAAAAGdvb2dsZS5jb20hc3VuZm94LmN6ITE0MjM4NzIwMDAh
MTQyMzk1ODM5OS54bWxQSwUGAAAAAAEAAQBcAAAAlwIAAAAA
```
| main | message getattachments doesn t recognize some attachments delivered to xxx yyy cz received by with smtp id sun feb pst x received by with smtp id sun feb pst return path received from mail google com mail google com by mx google com with esmtps id for version cipher ecdhe rsa gcm bits sun feb pst received spf pass google com domain of xxx bncbcclzvvtvaarbqhgqgtqkgqewcqedsq yyy cz designates as permitted sender client ip authentication results mx google com spf pass google com domain of xxx bncbcclzvvtvaarbqhgqgtqkgqewcqedsq yyy cz designates as permitted sender smtp mail xxx bncbcclzvvtvaarbqhgqgtqkgqewcqedsq yyy cz dkim pass header i yyy cz received by mail google com with smtp id for sun feb pst dkim signature v a rsa c relaxed relaxed d yyy cz s google h sender mime version message id date subject from to content type content disposition content transfer encoding x original sender x original authentication results precedence mailing list list id list help reply to bh koanagy b ogm fy dbiqvpoebmhy x google dkim signature v a rsa c relaxed relaxed d net s h sender x gm message state mime version message id date subject from to content type content disposition content transfer encoding x original sender x original authentication results precedence mailing list list id list help reply to bh koanagy b pzjaocj rxkrg hbgw sender xxx yyy cz x gm message state u x received by with smtp id sun feb pst x beenthere xxx yyy cz received by with smtp id gmail sun feb pst x received by with smtp id sun feb pst received from mail google com mail google com by mx google com with esmtps id for version cipher ecdhe rsa gcm bits sun feb pst received spf pass google com domain of noreply dmarc support google com designates as permitted sender client ip received by with smtp id for sun feb pst mime version x received by with smtp id sun feb pst message id date sun feb subject report domain yyy cz submitter google com report id from noreply dmarc support via xxx to xxx yyy cz content type application zip name google com yyy cz zip content disposition attachment filename google com yyy cz zip content transfer encoding x original sender noreply dmarc support google com x original authentication results mx google com spf pass google com domain of noreply dmarc support google com designates as permitted sender smtp mail noreply dmarc support google com dkim pass header i google com dmarc pass p reject dis none header from google com precedence list mailing list list xxx yyy cz contact xxx owners yyy cz list id x google group id list help x original from noreply dmarc support google com reply to noreply dmarc support google com bajeoznvxasoss yyrdz gvhgmvfyy vl tvraollr | 1 |
127,443 | 18,010,474,737 | IssuesEvent | 2021-09-16 08:01:13 | maddyCode23/linux-4.1.15 | https://api.github.com/repos/maddyCode23/linux-4.1.15 | opened | CVE-2016-3689 (Medium) detected in linux-stable-rtv4.1.33 | security vulnerability | ## CVE-2016-3689 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/maddyCode23/linux-4.1.15/commit/f1f3d2b150be669390b32dfea28e773471bdd6e7">f1f3d2b150be669390b32dfea28e773471bdd6e7</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/input/misc/ims-pcu.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/input/misc/ims-pcu.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The ims_pcu_parse_cdc_data function in drivers/input/misc/ims-pcu.c in the Linux kernel before 4.5.1 allows physically proximate attackers to cause a denial of service (system crash) via a USB device without both a master and a slave interface.
<p>Publish Date: 2016-05-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-3689>CVE-2016-3689</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Physical
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-3689">https://nvd.nist.gov/vuln/detail/CVE-2016-3689</a></p>
<p>Release Date: 2016-05-02</p>
<p>Fix Resolution: 4.5.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2016-3689 (Medium) detected in linux-stable-rtv4.1.33 - ## CVE-2016-3689 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/maddyCode23/linux-4.1.15/commit/f1f3d2b150be669390b32dfea28e773471bdd6e7">f1f3d2b150be669390b32dfea28e773471bdd6e7</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/input/misc/ims-pcu.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/input/misc/ims-pcu.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The ims_pcu_parse_cdc_data function in drivers/input/misc/ims-pcu.c in the Linux kernel before 4.5.1 allows physically proximate attackers to cause a denial of service (system crash) via a USB device without both a master and a slave interface.
<p>Publish Date: 2016-05-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-3689>CVE-2016-3689</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Physical
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-3689">https://nvd.nist.gov/vuln/detail/CVE-2016-3689</a></p>
<p>Release Date: 2016-05-02</p>
<p>Fix Resolution: 4.5.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve medium detected in linux stable cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href vulnerable source files drivers input misc ims pcu c drivers input misc ims pcu c vulnerability details the ims pcu parse cdc data function in drivers input misc ims pcu c in the linux kernel before allows physically proximate attackers to cause a denial of service system crash via a usb device without both a master and a slave interface publish date url a href cvss score details base score metrics exploitability metrics attack vector physical attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
2,516 | 8,655,460,179 | IssuesEvent | 2018-11-27 16:00:33 | codestation/qcma | https://api.github.com/repos/codestation/qcma | closed | unable to export multiple screenshots for the same game | unmaintained | I have a North American PS Vita purchased about 1 month ago and fully patched. I'm using the macOS no ffmpeg (0.4.1) version of QCMA. (I tried installing the Sierra version but it complained about a missing library.)
I am able to wirelessly connect my Vita to QCMA running on my Mac. Fantastic! I'm also able to start a screenshot export (copy to Mac). But, when I select multiple screenshots and try to export, it warns that the file already exists and offers to skip exporting or continue. Regardless of which option I choose, I end up with only one screenshot on my Mac.
If it matters any, the screenshots came from Shiren The Wanderer. On the PS Vita, the screenshots all appear to have the same filename "Shiren The Wanderer" -- or at least that's the only identifier given in the UI. I notice that screenshots of other apps have names like a date/time stamp, and for these apps, I believe I can export in bulk without an issue.
I can work around the problem by pausing at the PS Vita "continue or skip" screen, then on my Mac renaming the file, then continuing the PS Vita for one more file, renaming on Mac, etc.. Needless to say this doesn't scale for the hundreds of screenshots that I'd like to be able to export.
Comparing notes with other QCMA users online, it seems that other users (on Linux) do not have this problem. They still see all the screenshots as "Shiren The Wanderer" on the PS Vita, but when they export, they all have unique date/timestamp filenames on Linux.
I looked for a QCMA and PS Vita setting to change how screenshot filenames are handled but couldn't find anything.
Is this by chance fixed in the yes-ffmpeg version for macOS, and if so, is there an article somewhere about how to fix the missing library problem with that version of QCMA?
Thank you in advance.
-Rorke.
| True | unable to export multiple screenshots for the same game - I have a North American PS Vita purchased about 1 month ago and fully patched. I'm using the macOS no ffmpeg (0.4.1) version of QCMA. (I tried installing the Sierra version but it complained about a missing library.)
I am able to wirelessly connect my Vita to QCMA running on my Mac. Fantastic! I'm also able to start a screenshot export (copy to Mac). But, when I select multiple screenshots and try to export, it warns that the file already exists and offers to skip exporting or continue. Regardless of which option I choose, I end up with only one screenshot on my Mac.
If it matters any, the screenshots came from Shiren The Wanderer. On the PS Vita, the screenshots all appear to have the same filename "Shiren The Wanderer" -- or at least that's the only identifier given in the UI. I notice that screenshots of other apps have names like a date/time stamp, and for these apps, I believe I can export in bulk without an issue.
I can work around the problem by pausing at the PS Vita "continue or skip" screen, then on my Mac renaming the file, then continuing the PS Vita for one more file, renaming on Mac, etc.. Needless to say this doesn't scale for the hundreds of screenshots that I'd like to be able to export.
Comparing notes with other QCMA users online, it seems that other users (on Linux) do not have this problem. They still see all the screenshots as "Shiren The Wanderer" on the PS Vita, but when they export, they all have unique date/timestamp filenames on Linux.
I looked for a QCMA and PS Vita setting to change how screenshot filenames are handled but couldn't find anything.
Is this by chance fixed in the yes-ffmpeg version for macOS, and if so, is there an article somewhere about how to fix the missing library problem with that version of QCMA?
Thank you in advance.
-Rorke.
| main | unable to export multiple screenshots for the same game i have a north american ps vita purchased about month ago and fully patched i m using the macos no ffmpeg version of qcma i tried installing the sierra version but it complained about a missing library i am able to wirelessly connect my vita to qcma running on my mac fantastic i m also able to start a screenshot export copy to mac but when i select multiple screenshots and try to export it warns that the file already exists and offers to skip exporting or continue regardless of which option i choose i end up with only one screenshot on my mac if it matters any the screenshots came from shiren the wanderer on the ps vita the screenshots all appear to have the same filename shiren the wanderer or at least that s the only identifier given in the ui i notice that screenshots of other apps have names like a date time stamp and for these apps i believe i can export in bulk without an issue i can work around the problem by pausing at the ps vita continue or skip screen then on my mac renaming the file then continuing the ps vita for one more file renaming on mac etc needless to say this doesn t scale for the hundreds of screenshots that i d like to be able to export comparing notes with other qcma users online it seems that other users on linux do not have this problem they still see all the screenshots as shiren the wanderer on the ps vita but when they export they all have unique date timestamp filenames on linux i looked for a qcma and ps vita setting to change how screenshot filenames are handled but couldn t find anything is this by chance fixed in the yes ffmpeg version for macos and if so is there an article somewhere about how to fix the missing library problem with that version of qcma thank you in advance rorke | 1 |
4,663 | 24,099,586,171 | IssuesEvent | 2022-09-19 22:26:24 | DynamoRIO/dynamorio | https://api.github.com/repos/DynamoRIO/dynamorio | closed | gdb has poor support for DR's new takeover signal SIGSTKFLT | OpSys-Linux Maintainability | For #5458 we switched from SIGUSR2 to SIGSTKFLT. However, gdb does not handle this signal well at all:
```
Thread 2 "tool.drcacheoff" received signal ?, Unknown signal.
(gdb) p $_siginfo
$1 = {
si_signo = 16,
(gdb) c
Continuing.
warning: Signal ? does not exist on this system.
(gdb) handle 16 nostop noprint pass
Only signals 1-15 are valid as numeric signals.
Use "info signals" for a list of symbolic signals.
(gdb) handle SIGSTKFLT nostop noprint pass
Unrecognized or ambiguous flag word: "SIGSTKFLT".
```
I tried `handle all nostop noprint pass` and it didn't apply to signal 16!
So we have to hit `c` a hundred times to make progress which is not good.
I saw this in gdb version 9 and 10.
I propose we switch to SIGPWR which gdb does support. I tested it with gdb and with QEMU and QEMU does not freak out and abort like it does with SIGFPE.
Other candidates are SIGXCPU and SIGXFSZ but the kernel will produce those: SIGPWR seems more obscure.
| True | gdb has poor support for DR's new takeover signal SIGSTKFLT - For #5458 we switched from SIGUSR2 to SIGSTKFLT. However, gdb does not handle this signal well at all:
```
Thread 2 "tool.drcacheoff" received signal ?, Unknown signal.
(gdb) p $_siginfo
$1 = {
si_signo = 16,
(gdb) c
Continuing.
warning: Signal ? does not exist on this system.
(gdb) handle 16 nostop noprint pass
Only signals 1-15 are valid as numeric signals.
Use "info signals" for a list of symbolic signals.
(gdb) handle SIGSTKFLT nostop noprint pass
Unrecognized or ambiguous flag word: "SIGSTKFLT".
```
I tried `handle all nostop noprint pass` and it didn't apply to signal 16!
So we have to hit `c` a hundred times to make progress which is not good.
I saw this in gdb version 9 and 10.
I propose we switch to SIGPWR which gdb does support. I tested it with gdb and with QEMU and QEMU does not freak out and abort like it does with SIGFPE.
Other candidates are SIGXCPU and SIGXFSZ but the kernel will produce those: SIGPWR seems more obscure.
| main | gdb has poor support for dr s new takeover signal sigstkflt for we switched from to sigstkflt however gdb does not handle this signal well at all thread tool drcacheoff received signal unknown signal gdb p siginfo si signo gdb c continuing warning signal does not exist on this system gdb handle nostop noprint pass only signals are valid as numeric signals use info signals for a list of symbolic signals gdb handle sigstkflt nostop noprint pass unrecognized or ambiguous flag word sigstkflt i tried handle all nostop noprint pass and it didn t apply to signal so we have to hit c a hundred times to make progress which is not good i saw this in gdb version and i propose we switch to sigpwr which gdb does support i tested it with gdb and with qemu and qemu does not freak out and abort like it does with sigfpe other candidates are sigxcpu and sigxfsz but the kernel will produce those sigpwr seems more obscure | 1 |
174,766 | 13,519,998,802 | IssuesEvent | 2020-09-15 03:30:56 | microsoft/msquic | https://api.github.com/repos/microsoft/msquic | opened | Test TLS HelloRetryRequest | Area: Testing | We need an automated test case for the TLS HelloRetryRequest message. | 1.0 | Test TLS HelloRetryRequest - We need an automated test case for the TLS HelloRetryRequest message. | non_main | test tls helloretryrequest we need an automated test case for the tls helloretryrequest message | 0 |
339,105 | 24,610,693,163 | IssuesEvent | 2022-10-14 21:04:06 | atorus-research/Tplyr | https://api.github.com/repos/atorus-research/Tplyr | closed | Documentation doesn't state clearly enough acceptable variables for f_str | documentation | While we have a good bit of documentation available for using `f_str()` and the vignettes for each layer type discuss appropriate summaries, it would help to have a single, consolidated location that outlines layer types and acceptable variables provided to `f_str()` within Tplyr's context.
| 1.0 | Documentation doesn't state clearly enough acceptable variables for f_str - While we have a good bit of documentation available for using `f_str()` and the vignettes for each layer type discuss appropriate summaries, it would help to have a single, consolidated location that outlines layer types and acceptable variables provided to `f_str()` within Tplyr's context.
| non_main | documentation doesn t state clearly enough acceptable variables for f str while we have a good bit of documentation available for using f str and the vignettes for each layer type discuss appropriate summaries it would help to have a single consolidated location that outlines layer types and acceptable variables provided to f str within tplyr s context | 0 |
4,997 | 25,712,093,066 | IssuesEvent | 2022-12-07 07:35:05 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | SAM unable to find python.exe after SAM version upgrade | type/question stage/needs-investigation maintainer/need-followup | <!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed).
If you do find an existing Issue, re-open or add a comment to that Issue instead of creating a new one. -->
### Description:
After an upgrade from SAM version (about) v1.40.0 to v1.62.0 I receive the error message below when I try to use AWS SAM. I've been using SAM successfully for a couple of years.
Fatal error in launcher: Unable to create process using '"c:\users\containeradministrator\appdata\local\temp\tmpx0xr0ddm\runtime\python.exe" "C:\Program Files\Amazon\AWSSAMCLI\runtime\Scripts\sam.exe" ': The system cannot find the file specified.
I have tried:
- uninstalling AWS CLI v2 and AWS SAM, rebooting, reinstalling both, rebooting again.
- upgrading docker desktop, uninstalling docker desktop, reinstalling docker desktop
- Installing AWS SAM version 1.40.0, 1.46.0, 1.55,0, 1.61.0
I've never had a c:\users\containeradministrator folder, and there's no reference to it anywhere I can see.
### Steps to reproduce:
<!-- Provide detailed steps to replicate the bug, including steps from third party tools (CDK, etc.) -->
I'm not sure this is reproducable. On my own PC Windows 10-64 22H2 it works fine when I do the upgrade.
- Upgrade to the latest version of SAM
- Restart PC
- Run AWS SAM: "sam"
I expect this will work on most computers and that it's something odd going on.
### Observed result:
<!-- Please provide command output with `--debug` flag set. -->
sam --debug
Fatal error in launcher: Unable to create process using '"c:\users\containeradministrator\appdata\local\temp\tmpo6grtwiq\runtime\python.exe" "C:\Program Files\Amazon\AWSSAMCLI\runtime\Scripts\sam.exe" --debug': The system cannot find the file specified.
### Expected result:
<!-- Describe what you expected. -->
SAM to work
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: Windows 10 Enterprise 24 bit 21H2, OS Build 19044.2130
2. `sam --version`: (gives the error above but it's 1.62.0)
3. AWS region: ap-southeast-2
4. 'aws --version': aws-cli/2.8.12 Python/3.9.11 Windows/10 exe/AMD64 prompt/off
5. 'python --version': Python 3.9.13
`Add --debug flag to command you are running`
| True | SAM unable to find python.exe after SAM version upgrade - <!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed).
If you do find an existing Issue, re-open or add a comment to that Issue instead of creating a new one. -->
### Description:
After an upgrade from SAM version (about) v1.40.0 to v1.62.0 I receive the error message below when I try to use AWS SAM. I've been using SAM successfully for a couple of years.
Fatal error in launcher: Unable to create process using '"c:\users\containeradministrator\appdata\local\temp\tmpx0xr0ddm\runtime\python.exe" "C:\Program Files\Amazon\AWSSAMCLI\runtime\Scripts\sam.exe" ': The system cannot find the file specified.
I have tried:
- uninstalling AWS CLI v2 and AWS SAM, rebooting, reinstalling both, rebooting again.
- upgrading docker desktop, uninstalling docker desktop, reinstalling docker desktop
- Installing AWS SAM version 1.40.0, 1.46.0, 1.55,0, 1.61.0
I've never had a c:\users\containeradministrator folder, and there's no reference to it anywhere I can see.
### Steps to reproduce:
<!-- Provide detailed steps to replicate the bug, including steps from third party tools (CDK, etc.) -->
I'm not sure this is reproducable. On my own PC Windows 10-64 22H2 it works fine when I do the upgrade.
- Upgrade to the latest version of SAM
- Restart PC
- Run AWS SAM: "sam"
I expect this will work on most computers and that it's something odd going on.
### Observed result:
<!-- Please provide command output with `--debug` flag set. -->
sam --debug
Fatal error in launcher: Unable to create process using '"c:\users\containeradministrator\appdata\local\temp\tmpo6grtwiq\runtime\python.exe" "C:\Program Files\Amazon\AWSSAMCLI\runtime\Scripts\sam.exe" --debug': The system cannot find the file specified.
### Expected result:
<!-- Describe what you expected. -->
SAM to work
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: Windows 10 Enterprise 24 bit 21H2, OS Build 19044.2130
2. `sam --version`: (gives the error above but it's 1.62.0)
3. AWS region: ap-southeast-2
4. 'aws --version': aws-cli/2.8.12 Python/3.9.11 Windows/10 exe/AMD64 prompt/off
5. 'python --version': Python 3.9.13
`Add --debug flag to command you are running`
| main | sam unable to find python exe after sam version upgrade make sure we don t have an existing issue that reports the bug you are seeing both open and closed if you do find an existing issue re open or add a comment to that issue instead of creating a new one description after an upgrade from sam version about to i receive the error message below when i try to use aws sam i ve been using sam successfully for a couple of years fatal error in launcher unable to create process using c users containeradministrator appdata local temp runtime python exe c program files amazon awssamcli runtime scripts sam exe the system cannot find the file specified i have tried uninstalling aws cli and aws sam rebooting reinstalling both rebooting again upgrading docker desktop uninstalling docker desktop reinstalling docker desktop installing aws sam version i ve never had a c users containeradministrator folder and there s no reference to it anywhere i can see steps to reproduce i m not sure this is reproducable on my own pc windows it works fine when i do the upgrade upgrade to the latest version of sam restart pc run aws sam sam i expect this will work on most computers and that it s something odd going on observed result sam debug fatal error in launcher unable to create process using c users containeradministrator appdata local temp runtime python exe c program files amazon awssamcli runtime scripts sam exe debug the system cannot find the file specified expected result sam to work additional environment details ex windows mac amazon linux etc os windows enterprise bit os build sam version gives the error above but it s aws region ap southeast aws version aws cli python windows exe prompt off python version python add debug flag to command you are running | 1 |
2,723 | 9,607,837,706 | IssuesEvent | 2019-05-11 22:53:22 | tgstation/tgstation | https://api.github.com/repos/tgstation/tgstation | closed | "You shouldn't have this spell" when clicking on gauntlet echo as ghost | Bug Maintainability/Hinders improvements | Issue reported from Round ID: 107632 (actually a number of rounds prior to this, but on the same day) (/tg/Station Terry [EU] [100% LAG FREE] [New-Dedi])
Reporting client version: 512.1467
> You shouldn't have this spell! Something's wrong.
>
> Now orbiting the gauntlet echo.
>
> You shouldn't have this spell! Something's wrong.
Happened when I clicked on gauntlet echo of juggernaut. I was the ghost of a wraith and had examine on click enabled. | True | "You shouldn't have this spell" when clicking on gauntlet echo as ghost - Issue reported from Round ID: 107632 (actually a number of rounds prior to this, but on the same day) (/tg/Station Terry [EU] [100% LAG FREE] [New-Dedi])
Reporting client version: 512.1467
> You shouldn't have this spell! Something's wrong.
>
> Now orbiting the gauntlet echo.
>
> You shouldn't have this spell! Something's wrong.
Happened when I clicked on gauntlet echo of juggernaut. I was the ghost of a wraith and had examine on click enabled. | main | you shouldn t have this spell when clicking on gauntlet echo as ghost issue reported from round id actually a number of rounds prior to this but on the same day tg station terry reporting client version you shouldn t have this spell something s wrong now orbiting the gauntlet echo you shouldn t have this spell something s wrong happened when i clicked on gauntlet echo of juggernaut i was the ghost of a wraith and had examine on click enabled | 1 |
5,383 | 27,057,092,811 | IssuesEvent | 2023-02-13 16:53:31 | infoderm/patients | https://api.github.com/repos/infoderm/patients | closed | Do not use `useTracker` directly | refactor hooks maintainability | After implementing #552, replace remaining calls to `useTracker` by `useReactive`. We should find a way to handle both synchronous and asynchronous reactives. | True | Do not use `useTracker` directly - After implementing #552, replace remaining calls to `useTracker` by `useReactive`. We should find a way to handle both synchronous and asynchronous reactives. | main | do not use usetracker directly after implementing replace remaining calls to usetracker by usereactive we should find a way to handle both synchronous and asynchronous reactives | 1 |
240,179 | 26,254,330,969 | IssuesEvent | 2023-01-05 22:33:20 | jtimberlake/rei-cedar | https://api.github.com/repos/jtimberlake/rei-cedar | reopened | CVE-2021-3803 (High) detected in nth-check-1.0.2.tgz, nth-check-2.0.0.tgz | security vulnerability | ## CVE-2021-3803 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>nth-check-1.0.2.tgz</b>, <b>nth-check-2.0.0.tgz</b></p></summary>
<p>
<details><summary><b>nth-check-1.0.2.tgz</b></p></summary>
<p>performant nth-check parser & compiler</p>
<p>Library home page: <a href="https://registry.npmjs.org/nth-check/-/nth-check-1.0.2.tgz">https://registry.npmjs.org/nth-check/-/nth-check-1.0.2.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/nth-check/package.json</p>
<p>
Dependency Hierarchy:
- rollup-plugin-styles-3.14.1.tgz (Root Library)
- cssnano-4.1.11.tgz
- cssnano-preset-default-4.0.8.tgz
- postcss-svgo-4.0.3.tgz
- svgo-1.3.2.tgz
- css-select-2.1.0.tgz
- :x: **nth-check-1.0.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>nth-check-2.0.0.tgz</b></p></summary>
<p>Parses and compiles CSS nth-checks to highly optimized functions.</p>
<p>Library home page: <a href="https://registry.npmjs.org/nth-check/-/nth-check-2.0.0.tgz">https://registry.npmjs.org/nth-check/-/nth-check-2.0.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/postcss-inline-svg/node_modules/nth-check/package.json</p>
<p>
Dependency Hierarchy:
- postcss-inline-svg-5.0.0.tgz (Root Library)
- css-select-3.1.2.tgz
- :x: **nth-check-2.0.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/jtimberlake/rei-cedar/commit/9c0c2cadda2965ff0d2cb956635474ae9161ddfe">9c0c2cadda2965ff0d2cb956635474ae9161ddfe</a></p>
<p>Found in base branch: <b>next</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
nth-check is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-09-17
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-3803>CVE-2021-3803</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2021-09-17</p>
<p>Fix Resolution (nth-check): 2.0.1</p>
<p>Direct dependency fix Resolution (rollup-plugin-styles): 4.0.0</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue | True | CVE-2021-3803 (High) detected in nth-check-1.0.2.tgz, nth-check-2.0.0.tgz - ## CVE-2021-3803 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>nth-check-1.0.2.tgz</b>, <b>nth-check-2.0.0.tgz</b></p></summary>
<p>
<details><summary><b>nth-check-1.0.2.tgz</b></p></summary>
<p>performant nth-check parser & compiler</p>
<p>Library home page: <a href="https://registry.npmjs.org/nth-check/-/nth-check-1.0.2.tgz">https://registry.npmjs.org/nth-check/-/nth-check-1.0.2.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/nth-check/package.json</p>
<p>
Dependency Hierarchy:
- rollup-plugin-styles-3.14.1.tgz (Root Library)
- cssnano-4.1.11.tgz
- cssnano-preset-default-4.0.8.tgz
- postcss-svgo-4.0.3.tgz
- svgo-1.3.2.tgz
- css-select-2.1.0.tgz
- :x: **nth-check-1.0.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>nth-check-2.0.0.tgz</b></p></summary>
<p>Parses and compiles CSS nth-checks to highly optimized functions.</p>
<p>Library home page: <a href="https://registry.npmjs.org/nth-check/-/nth-check-2.0.0.tgz">https://registry.npmjs.org/nth-check/-/nth-check-2.0.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/postcss-inline-svg/node_modules/nth-check/package.json</p>
<p>
Dependency Hierarchy:
- postcss-inline-svg-5.0.0.tgz (Root Library)
- css-select-3.1.2.tgz
- :x: **nth-check-2.0.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/jtimberlake/rei-cedar/commit/9c0c2cadda2965ff0d2cb956635474ae9161ddfe">9c0c2cadda2965ff0d2cb956635474ae9161ddfe</a></p>
<p>Found in base branch: <b>next</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
nth-check is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-09-17
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-3803>CVE-2021-3803</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2021-09-17</p>
<p>Fix Resolution (nth-check): 2.0.1</p>
<p>Direct dependency fix Resolution (rollup-plugin-styles): 4.0.0</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue | non_main | cve high detected in nth check tgz nth check tgz cve high severity vulnerability vulnerable libraries nth check tgz nth check tgz nth check tgz performant nth check parser compiler library home page a href path to dependency file package json path to vulnerable library node modules nth check package json dependency hierarchy rollup plugin styles tgz root library cssnano tgz cssnano preset default tgz postcss svgo tgz svgo tgz css select tgz x nth check tgz vulnerable library nth check tgz parses and compiles css nth checks to highly optimized functions library home page a href path to dependency file package json path to vulnerable library node modules postcss inline svg node modules nth check package json dependency hierarchy postcss inline svg tgz root library css select tgz x nth check tgz vulnerable library found in head commit a href found in base branch next vulnerability details nth check is vulnerable to inefficient regular expression complexity publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution nth check direct dependency fix resolution rollup plugin styles rescue worker helmet automatic remediation is available for this issue | 0 |
31,333 | 6,499,861,175 | IssuesEvent | 2017-08-22 23:57:43 | christine5/google-opt-out-plugin | https://api.github.com/repos/christine5/google-opt-out-plugin | closed | Not installable on Firefox 33 for Android | auto-migrated Priority-Medium Type-Defect | ```
The subject days it all... Please make it compatible!
```
Original issue reported on code.google.com by `r...@hayle.org` on 20 Sep 2014 at 4:48
| 1.0 | Not installable on Firefox 33 for Android - ```
The subject days it all... Please make it compatible!
```
Original issue reported on code.google.com by `r...@hayle.org` on 20 Sep 2014 at 4:48
| non_main | not installable on firefox for android the subject days it all please make it compatible original issue reported on code google com by r hayle org on sep at | 0 |
277,320 | 30,610,829,939 | IssuesEvent | 2023-07-23 15:33:21 | tyhal/tyhal.com | https://api.github.com/repos/tyhal/tyhal.com | closed | WS-2018-0236 (Medium) detected in mem-1.1.0.tgz | security vulnerability | ## WS-2018-0236 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mem-1.1.0.tgz</b></p></summary>
<p>Memoize functions - An optimization used to speed up consecutive function calls by caching the result of calls with identical input</p>
<p>Library home page: <a href="https://registry.npmjs.org/mem/-/mem-1.1.0.tgz">https://registry.npmjs.org/mem/-/mem-1.1.0.tgz</a></p>
<p>Path to dependency file: /tyhal.com/package.json</p>
<p>Path to vulnerable library: /tmp/git/tyhal.com/node_modules/npm/node_modules/mem/package.json</p>
<p>
Dependency Hierarchy:
- npm-6.10.0.tgz (Root Library)
- libnpx-10.2.0.tgz
- yargs-11.0.0.tgz
- os-locale-2.1.0.tgz
- :x: **mem-1.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/tyhal/tyhal.com/commit/2184255eb9c4f417b3cf268fd7294259c52fcbb9">2184255eb9c4f417b3cf268fd7294259c52fcbb9</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In nodejs-mem before version 4.0.0 there is a memory leak due to old results not being removed from the cache despite reaching maxAge. Exploitation of this can lead to exhaustion of memory and subsequent denial of service.
<p>Publish Date: 2019-05-30
<p>URL: <a href=https://bugzilla.redhat.com/show_bug.cgi?id=1623744>WS-2018-0236</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1623744">https://bugzilla.redhat.com/show_bug.cgi?id=1623744</a></p>
<p>Release Date: 2019-05-30</p>
<p>Fix Resolution: 4.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2018-0236 (Medium) detected in mem-1.1.0.tgz - ## WS-2018-0236 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mem-1.1.0.tgz</b></p></summary>
<p>Memoize functions - An optimization used to speed up consecutive function calls by caching the result of calls with identical input</p>
<p>Library home page: <a href="https://registry.npmjs.org/mem/-/mem-1.1.0.tgz">https://registry.npmjs.org/mem/-/mem-1.1.0.tgz</a></p>
<p>Path to dependency file: /tyhal.com/package.json</p>
<p>Path to vulnerable library: /tmp/git/tyhal.com/node_modules/npm/node_modules/mem/package.json</p>
<p>
Dependency Hierarchy:
- npm-6.10.0.tgz (Root Library)
- libnpx-10.2.0.tgz
- yargs-11.0.0.tgz
- os-locale-2.1.0.tgz
- :x: **mem-1.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/tyhal/tyhal.com/commit/2184255eb9c4f417b3cf268fd7294259c52fcbb9">2184255eb9c4f417b3cf268fd7294259c52fcbb9</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In nodejs-mem before version 4.0.0 there is a memory leak due to old results not being removed from the cache despite reaching maxAge. Exploitation of this can lead to exhaustion of memory and subsequent denial of service.
<p>Publish Date: 2019-05-30
<p>URL: <a href=https://bugzilla.redhat.com/show_bug.cgi?id=1623744>WS-2018-0236</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1623744">https://bugzilla.redhat.com/show_bug.cgi?id=1623744</a></p>
<p>Release Date: 2019-05-30</p>
<p>Fix Resolution: 4.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | ws medium detected in mem tgz ws medium severity vulnerability vulnerable library mem tgz memoize functions an optimization used to speed up consecutive function calls by caching the result of calls with identical input library home page a href path to dependency file tyhal com package json path to vulnerable library tmp git tyhal com node modules npm node modules mem package json dependency hierarchy npm tgz root library libnpx tgz yargs tgz os locale tgz x mem tgz vulnerable library found in head commit a href vulnerability details in nodejs mem before version there is a memory leak due to old results not being removed from the cache despite reaching maxage exploitation of this can lead to exhaustion of memory and subsequent denial of service publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
3,599 | 14,538,160,997 | IssuesEvent | 2020-12-15 10:06:25 | adda-team/adda | https://api.github.com/repos/adda-team/adda | closed | Enforce flushing (adjust buffering) of stdout | OS-Linux OS-Windows comp-UI maintainability pri-Medium | ### Is your feature request related to a problem? Please describe.
During the development of [ADDA GUI](https://github.com/adda-team/adda-gui) there is a problem that java cannot fully emulate the terminal for `stdout` stream of ADDA. Hence, the output of ADDA to `stdout` (which was originally designed to indicate the progress of the simulation) is not shown until a sufficiently large block is gathered (e.g., a hundred of iterations). This is inconvenient, since there is an impression that GUI hangs without any progress. This happens on Windows for ADDA compiled with MinGW (but not CygWin).
Exactly the same problem occurs when `adda_mpi` is used on the same system with `mpiexec` (again not on Unix).
The problem is related to buffering of `stdout`. As explained [here](https://stackoverflow.com/a/13933741) with citations from C99 standard, the latter mandates `stdout` to be fully buffered if the program is sure that it comes not from an interactive terminal. But it is either not buffered or line-buffered otherwise. And the same is true for `stderr` in all cases (irrespective of redirection). So the examples above are such that Windows (more specifically, `gnulib` library in MinGW) considers them as "without terminal".
### Describe the solution you'd like
Turn on flushing of stdout (reduce buffering) either always or through a command line option.
The most natural solution is using `setvbuf(stdout,NULL,mode,size)`, where `mode` is one of `_IOFBF`, `_IOLBF`, `_IONBF` for full, line, and no buffer, respectively, `size` is the buffer size. It is not clear, however, if this command can be run in the middle of the execution or only in the beginning (when nothing else is written to `stdout`).
### Related limitations of MinGW
On Unix systems `setvbuf(stdout,NULL,_IOLBF,0)` works fine leading to line buffering with the default buffer size (although behavior for zero size is implementation-dependent according to C99 standard). However, it does not work for MinGW:
- `_IOLBF` is not supported (9.1002 in [this list](https://www.gnu.org/software/gnulib/manual/gnulib.txt)) - it behaves the same as `_IOFBF`. This is the limitation of Windows API, so Intel compiler on Windows will probably behave the same. Thus, the only option to reduce bufferization is to use `_IONBF`.
- zero `size` leads to zero buffer. But that is not a big deal in view of the previous point.
Still, it is not clear whether we can use size=0 on all systems except MinGW. Better to specify the size explicitly (as recommended by the standard). But obvious choice from the standard (`BUFSIZ`) can be smaller than the default one. Other options (specified [here](https://www.gnu.org/software/libc/manual/html_node/Controlling-Buffering.html)) are not completely portable either.
### Performance consideration
To test the potential impact of proposed changes we devised the test
```
adda -shape box -eps 13 -size 1 -grid 8 -m 0 6
```
which leads to about 1500 iterations (and similar number of output lines) in less than 1 second. Then we compared "Internal fields:" times, since they are not affected by fluctuations in initialization times.
- **On a Unix node of a cluster** (run in non-interactive regime) default ADDA and full buffering took 0.40 s, while line and no buffering - 0.41 s (marginal difference). If `stdout` was additionally redirected to a file, the times were 0.40 and 0.42 s, respectively. Note that non-interactive regime implies that `stdout` is redirected to file anyway.
- Next we did the same test on the **login node of the same cluster** (connected with ssh terminal). With stdout going to the terminal the times were approximately (intra-sample deviations are larger) 0.33 s for full buffering and 0.37 s for other three regimes. With redirection of stdout it was 0.32 s for default and full buffering, 0.34 s for line and 0.35 s for no buffering. So we see switching of the default regime due to redirection, but overall the terminal is very fast (compared to the other ones below).
- The test on **Windows laptop (MinGW) using the console of FAR Manager**. Here, as expected, line buffering is always equivalent to the full one (we used size=4096 for the latter). The full and no buffering take 0.32 and 3.7 s, respectively. If `stdout` is redirected to a file, it takes 0.29 and 0.45 s, respectively. The default buffering behaves as full or no buffering with and without redirection, respectively. The conclusion is that 1) terminal is extremely slow 2) writing to file is also significantly affected by internal ADDA buffering. Completely the same results were obtained using standard `cmd.exe` console.
- **Same as previous, but in MPI mode** - using `mpiexec -n 1 ./adda_mpi ...` (Windows MPI was used). The results are very similar except that the default buffering is now equivalent to the full one irrespective of the redirection (note that this is not the case for Unix version). And the time for no buffering without redirection varied between 1.4 (more often) and 2.2, indicating potential buffering by `mpiexec`.
### Intermediate conclusions
- Some terminals can be very slow. Although we tested only fast (ssh) terminal for Unix, the latter has slow (more graphical) terminals too - see [here](https://stackoverflow.com/a/3860319). It is not clear, however, how slow they will be for line buffering (properly implemented). Redirection to file solves this problem for current version of ADDA. This is worth mentioning in the manual/wikis, e.g. for cases when a lot of ADDA runs are performed through a script with all output going to the terminal.
- Currently, problems with interactivity (no output for a long time in wrapper scripts/gui) seem to be limited to Windows platform, but there are no fundamental reasons for that (i.e., we may expect some problems in other cases as well).
- Current default behavior on Unix is good enough for speed. We may make the line buffering default one for Windows (will solve both interactivity and speed problems), but this need to be emulated (placing a lot of `fflush(stdout)` surrounded by tests). This can be accompanied by terminal detection (relatively portable options [seems possible](https://stackoverflow.com/questions/1312922/detect-if-stdin-is-a-terminal-or-pipe)) to have complete emulation of (proper) Unix behavior.
- Introducing new command line option, like `-buffer [no|line|full|def]` is the most robust solution. But it is not clear whether `setvbuf` works after some output is already produced (may be required for processing command line). Need further testing. And this can be combined with the previous one.
- It seems that existing calls of `fflush(stderr)` throughout the code are redundant. But there are only several of them, so let them be for extra certainty. However, we may consider flushing everything (`fflush(NULL)`) before producing the error message. It happens anyway at `exit()` but error message can be potentially misplaced relative to the other output.
| True | Enforce flushing (adjust buffering) of stdout - ### Is your feature request related to a problem? Please describe.
During the development of [ADDA GUI](https://github.com/adda-team/adda-gui) there is a problem that java cannot fully emulate the terminal for `stdout` stream of ADDA. Hence, the output of ADDA to `stdout` (which was originally designed to indicate the progress of the simulation) is not shown until a sufficiently large block is gathered (e.g., a hundred of iterations). This is inconvenient, since there is an impression that GUI hangs without any progress. This happens on Windows for ADDA compiled with MinGW (but not CygWin).
Exactly the same problem occurs when `adda_mpi` is used on the same system with `mpiexec` (again not on Unix).
The problem is related to buffering of `stdout`. As explained [here](https://stackoverflow.com/a/13933741) with citations from C99 standard, the latter mandates `stdout` to be fully buffered if the program is sure that it comes not from an interactive terminal. But it is either not buffered or line-buffered otherwise. And the same is true for `stderr` in all cases (irrespective of redirection). So the examples above are such that Windows (more specifically, `gnulib` library in MinGW) considers them as "without terminal".
### Describe the solution you'd like
Turn on flushing of stdout (reduce buffering) either always or through a command line option.
The most natural solution is using `setvbuf(stdout,NULL,mode,size)`, where `mode` is one of `_IOFBF`, `_IOLBF`, `_IONBF` for full, line, and no buffer, respectively, `size` is the buffer size. It is not clear, however, if this command can be run in the middle of the execution or only in the beginning (when nothing else is written to `stdout`).
### Related limitations of MinGW
On Unix systems `setvbuf(stdout,NULL,_IOLBF,0)` works fine leading to line buffering with the default buffer size (although behavior for zero size is implementation-dependent according to C99 standard). However, it does not work for MinGW:
- `_IOLBF` is not supported (9.1002 in [this list](https://www.gnu.org/software/gnulib/manual/gnulib.txt)) - it behaves the same as `_IOFBF`. This is the limitation of Windows API, so Intel compiler on Windows will probably behave the same. Thus, the only option to reduce bufferization is to use `_IONBF`.
- zero `size` leads to zero buffer. But that is not a big deal in view of the previous point.
Still, it is not clear whether we can use size=0 on all systems except MinGW. Better to specify the size explicitly (as recommended by the standard). But obvious choice from the standard (`BUFSIZ`) can be smaller than the default one. Other options (specified [here](https://www.gnu.org/software/libc/manual/html_node/Controlling-Buffering.html)) are not completely portable either.
### Performance consideration
To test the potential impact of proposed changes we devised the test
```
adda -shape box -eps 13 -size 1 -grid 8 -m 0 6
```
which leads to about 1500 iterations (and similar number of output lines) in less than 1 second. Then we compared "Internal fields:" times, since they are not affected by fluctuations in initialization times.
- **On a Unix node of a cluster** (run in non-interactive regime) default ADDA and full buffering took 0.40 s, while line and no buffering - 0.41 s (marginal difference). If `stdout` was additionally redirected to a file, the times were 0.40 and 0.42 s, respectively. Note that non-interactive regime implies that `stdout` is redirected to file anyway.
- Next we did the same test on the **login node of the same cluster** (connected with ssh terminal). With stdout going to the terminal the times were approximately (intra-sample deviations are larger) 0.33 s for full buffering and 0.37 s for other three regimes. With redirection of stdout it was 0.32 s for default and full buffering, 0.34 s for line and 0.35 s for no buffering. So we see switching of the default regime due to redirection, but overall the terminal is very fast (compared to the other ones below).
- The test on **Windows laptop (MinGW) using the console of FAR Manager**. Here, as expected, line buffering is always equivalent to the full one (we used size=4096 for the latter). The full and no buffering take 0.32 and 3.7 s, respectively. If `stdout` is redirected to a file, it takes 0.29 and 0.45 s, respectively. The default buffering behaves as full or no buffering with and without redirection, respectively. The conclusion is that 1) terminal is extremely slow 2) writing to file is also significantly affected by internal ADDA buffering. Completely the same results were obtained using standard `cmd.exe` console.
- **Same as previous, but in MPI mode** - using `mpiexec -n 1 ./adda_mpi ...` (Windows MPI was used). The results are very similar except that the default buffering is now equivalent to the full one irrespective of the redirection (note that this is not the case for Unix version). And the time for no buffering without redirection varied between 1.4 (more often) and 2.2, indicating potential buffering by `mpiexec`.
### Intermediate conclusions
- Some terminals can be very slow. Although we tested only fast (ssh) terminal for Unix, the latter has slow (more graphical) terminals too - see [here](https://stackoverflow.com/a/3860319). It is not clear, however, how slow they will be for line buffering (properly implemented). Redirection to file solves this problem for current version of ADDA. This is worth mentioning in the manual/wikis, e.g. for cases when a lot of ADDA runs are performed through a script with all output going to the terminal.
- Currently, problems with interactivity (no output for a long time in wrapper scripts/gui) seem to be limited to Windows platform, but there are no fundamental reasons for that (i.e., we may expect some problems in other cases as well).
- Current default behavior on Unix is good enough for speed. We may make the line buffering default one for Windows (will solve both interactivity and speed problems), but this need to be emulated (placing a lot of `fflush(stdout)` surrounded by tests). This can be accompanied by terminal detection (relatively portable options [seems possible](https://stackoverflow.com/questions/1312922/detect-if-stdin-is-a-terminal-or-pipe)) to have complete emulation of (proper) Unix behavior.
- Introducing new command line option, like `-buffer [no|line|full|def]` is the most robust solution. But it is not clear whether `setvbuf` works after some output is already produced (may be required for processing command line). Need further testing. And this can be combined with the previous one.
- It seems that existing calls of `fflush(stderr)` throughout the code are redundant. But there are only several of them, so let them be for extra certainty. However, we may consider flushing everything (`fflush(NULL)`) before producing the error message. It happens anyway at `exit()` but error message can be potentially misplaced relative to the other output.
| main | enforce flushing adjust buffering of stdout is your feature request related to a problem please describe during the development of there is a problem that java cannot fully emulate the terminal for stdout stream of adda hence the output of adda to stdout which was originally designed to indicate the progress of the simulation is not shown until a sufficiently large block is gathered e g a hundred of iterations this is inconvenient since there is an impression that gui hangs without any progress this happens on windows for adda compiled with mingw but not cygwin exactly the same problem occurs when adda mpi is used on the same system with mpiexec again not on unix the problem is related to buffering of stdout as explained with citations from standard the latter mandates stdout to be fully buffered if the program is sure that it comes not from an interactive terminal but it is either not buffered or line buffered otherwise and the same is true for stderr in all cases irrespective of redirection so the examples above are such that windows more specifically gnulib library in mingw considers them as without terminal describe the solution you d like turn on flushing of stdout reduce buffering either always or through a command line option the most natural solution is using setvbuf stdout null mode size where mode is one of iofbf iolbf ionbf for full line and no buffer respectively size is the buffer size it is not clear however if this command can be run in the middle of the execution or only in the beginning when nothing else is written to stdout related limitations of mingw on unix systems setvbuf stdout null iolbf works fine leading to line buffering with the default buffer size although behavior for zero size is implementation dependent according to standard however it does not work for mingw iolbf is not supported in it behaves the same as iofbf this is the limitation of windows api so intel compiler on windows will probably behave the same thus the only option to reduce bufferization is to use ionbf zero size leads to zero buffer but that is not a big deal in view of the previous point still it is not clear whether we can use size on all systems except mingw better to specify the size explicitly as recommended by the standard but obvious choice from the standard bufsiz can be smaller than the default one other options specified are not completely portable either performance consideration to test the potential impact of proposed changes we devised the test adda shape box eps size grid m which leads to about iterations and similar number of output lines in less than second then we compared internal fields times since they are not affected by fluctuations in initialization times on a unix node of a cluster run in non interactive regime default adda and full buffering took s while line and no buffering s marginal difference if stdout was additionally redirected to a file the times were and s respectively note that non interactive regime implies that stdout is redirected to file anyway next we did the same test on the login node of the same cluster connected with ssh terminal with stdout going to the terminal the times were approximately intra sample deviations are larger s for full buffering and s for other three regimes with redirection of stdout it was s for default and full buffering s for line and s for no buffering so we see switching of the default regime due to redirection but overall the terminal is very fast compared to the other ones below the test on windows laptop mingw using the console of far manager here as expected line buffering is always equivalent to the full one we used size for the latter the full and no buffering take and s respectively if stdout is redirected to a file it takes and s respectively the default buffering behaves as full or no buffering with and without redirection respectively the conclusion is that terminal is extremely slow writing to file is also significantly affected by internal adda buffering completely the same results were obtained using standard cmd exe console same as previous but in mpi mode using mpiexec n adda mpi windows mpi was used the results are very similar except that the default buffering is now equivalent to the full one irrespective of the redirection note that this is not the case for unix version and the time for no buffering without redirection varied between more often and indicating potential buffering by mpiexec intermediate conclusions some terminals can be very slow although we tested only fast ssh terminal for unix the latter has slow more graphical terminals too see it is not clear however how slow they will be for line buffering properly implemented redirection to file solves this problem for current version of adda this is worth mentioning in the manual wikis e g for cases when a lot of adda runs are performed through a script with all output going to the terminal currently problems with interactivity no output for a long time in wrapper scripts gui seem to be limited to windows platform but there are no fundamental reasons for that i e we may expect some problems in other cases as well current default behavior on unix is good enough for speed we may make the line buffering default one for windows will solve both interactivity and speed problems but this need to be emulated placing a lot of fflush stdout surrounded by tests this can be accompanied by terminal detection relatively portable options to have complete emulation of proper unix behavior introducing new command line option like buffer is the most robust solution but it is not clear whether setvbuf works after some output is already produced may be required for processing command line need further testing and this can be combined with the previous one it seems that existing calls of fflush stderr throughout the code are redundant but there are only several of them so let them be for extra certainty however we may consider flushing everything fflush null before producing the error message it happens anyway at exit but error message can be potentially misplaced relative to the other output | 1 |
1,597 | 6,572,379,796 | IssuesEvent | 2017-09-11 01:51:52 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | Pear module not verbose enough while failing | affects_2.1 feature_idea waiting_on_maintainer | ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
pecl
##### ANSIBLE VERSION
```
ansible 2.1.1.0
config file = /home/[...]/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
```
$ cat ansible.cfg | grep -v '^$\|^#'
[defaults]
forks = 100
vault_password_file = ~/.vault.password
[ssh_connection]
pipelining = False
```
##### OS / ENVIRONMENT
```
$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 8.6 (jessie)
Release: 8.6
Codename: jessie
```
##### SUMMARY
I asked for the pecl module to install something but my alternatives pointed /usr/bin/php to php7 which make the module fail without reporting why.
##### STEPS TO REPRODUCE
```
- name: apt-get install from stretch (testing)
apt: name={{item}} state=latest default_release=stretch
with_items:
- php7.0
- php-pear
- name: Use php7 as /usr/bin/php
alternatives: name=php path=/usr/bin/php7.0
- name: Install stomp PHP module
pear: name="pecl/stomp" state=present
```
I'm getting (I tried `-vvv`):
```
failed: [dev] (item=pecl/stomp) => {"failed": true, "invocation": {"module_args": {"name": "pecl/stomp", "state": "present"}, "module_name": "pear"}, "item": "pecl/stomp", "msg": "failed to install pecl/stomp"}
```
Which is not enough to tell why it failed, which looks wrong to hide this.
To diagnose I started with `pecl --help` and boom:
```
$ pecl --help
Parse error: syntax error, unexpected 'new' (T_NEW) in /usr/share/php/PEAR/Frontend.php on line 91
```
This is clearly my issue here, so I updated my alternatives to use php5 and I'm good, that's not really an issue.
BUT I'd liked a lot if ansible told me straight that the PECL module gave a `Parse error: syntax error, unexpected 'new' (T_NEW) in /usr/share/php/PEAR/Frontend.php on line 91`.
| True | Pear module not verbose enough while failing - ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
pecl
##### ANSIBLE VERSION
```
ansible 2.1.1.0
config file = /home/[...]/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
```
$ cat ansible.cfg | grep -v '^$\|^#'
[defaults]
forks = 100
vault_password_file = ~/.vault.password
[ssh_connection]
pipelining = False
```
##### OS / ENVIRONMENT
```
$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 8.6 (jessie)
Release: 8.6
Codename: jessie
```
##### SUMMARY
I asked for the pecl module to install something but my alternatives pointed /usr/bin/php to php7 which make the module fail without reporting why.
##### STEPS TO REPRODUCE
```
- name: apt-get install from stretch (testing)
apt: name={{item}} state=latest default_release=stretch
with_items:
- php7.0
- php-pear
- name: Use php7 as /usr/bin/php
alternatives: name=php path=/usr/bin/php7.0
- name: Install stomp PHP module
pear: name="pecl/stomp" state=present
```
I'm getting (I tried `-vvv`):
```
failed: [dev] (item=pecl/stomp) => {"failed": true, "invocation": {"module_args": {"name": "pecl/stomp", "state": "present"}, "module_name": "pear"}, "item": "pecl/stomp", "msg": "failed to install pecl/stomp"}
```
Which is not enough to tell why it failed, which looks wrong to hide this.
To diagnose I started with `pecl --help` and boom:
```
$ pecl --help
Parse error: syntax error, unexpected 'new' (T_NEW) in /usr/share/php/PEAR/Frontend.php on line 91
```
This is clearly my issue here, so I updated my alternatives to use php5 and I'm good, that's not really an issue.
BUT I'd liked a lot if ansible told me straight that the PECL module gave a `Parse error: syntax error, unexpected 'new' (T_NEW) in /usr/share/php/PEAR/Frontend.php on line 91`.
| main | pear module not verbose enough while failing issue type feature idea component name pecl ansible version ansible config file home ansible ansible cfg configured module search path default w o overrides configuration cat ansible cfg grep v forks vault password file vault password pipelining false os environment lsb release a no lsb modules are available distributor id debian description debian gnu linux jessie release codename jessie summary i asked for the pecl module to install something but my alternatives pointed usr bin php to which make the module fail without reporting why steps to reproduce name apt get install from stretch testing apt name item state latest default release stretch with items php pear name use as usr bin php alternatives name php path usr bin name install stomp php module pear name pecl stomp state present i m getting i tried vvv failed item pecl stomp failed true invocation module args name pecl stomp state present module name pear item pecl stomp msg failed to install pecl stomp which is not enough to tell why it failed which looks wrong to hide this to diagnose i started with pecl help and boom pecl help parse error syntax error unexpected new t new in usr share php pear frontend php on line this is clearly my issue here so i updated my alternatives to use and i m good that s not really an issue but i d liked a lot if ansible told me straight that the pecl module gave a parse error syntax error unexpected new t new in usr share php pear frontend php on line | 1 |
675,815 | 23,106,958,119 | IssuesEvent | 2022-07-27 09:37:57 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | Potential missing LogIndex after RBS | kind/bug area/docdb priority/high | Jira Link: [DB-1990](https://yugabyte.atlassian.net/browse/DB-1990)
During RBS we don't fetch the Raft log index and then during local bootstrap optimizer can skip log index recreation for some of the first raft log segments, so the Raft log index might be incomplete.
Raft log index is used at least by the following functions:
```
LogReader::LookupOpId
LogCache::LookupOpId / LogReader::ReadReplicatesInRange
LogCache::ReadOps
PeerMessageQueue::ReadFromLogCache
PeerMessageQueue::ReadReplicatedMessagesForCDC
PeerMessageQueue::RequestForPeer
PeerMessageQueue::IsOpInLog
PeerMessageQueue::ResponseFromPeer
```
Need to check if it is a legitimate issue or not. | 1.0 | Potential missing LogIndex after RBS - Jira Link: [DB-1990](https://yugabyte.atlassian.net/browse/DB-1990)
During RBS we don't fetch the Raft log index and then during local bootstrap optimizer can skip log index recreation for some of the first raft log segments, so the Raft log index might be incomplete.
Raft log index is used at least by the following functions:
```
LogReader::LookupOpId
LogCache::LookupOpId / LogReader::ReadReplicatesInRange
LogCache::ReadOps
PeerMessageQueue::ReadFromLogCache
PeerMessageQueue::ReadReplicatedMessagesForCDC
PeerMessageQueue::RequestForPeer
PeerMessageQueue::IsOpInLog
PeerMessageQueue::ResponseFromPeer
```
Need to check if it is a legitimate issue or not. | non_main | potential missing logindex after rbs jira link during rbs we don t fetch the raft log index and then during local bootstrap optimizer can skip log index recreation for some of the first raft log segments so the raft log index might be incomplete raft log index is used at least by the following functions logreader lookupopid logcache lookupopid logreader readreplicatesinrange logcache readops peermessagequeue readfromlogcache peermessagequeue readreplicatedmessagesforcdc peermessagequeue requestforpeer peermessagequeue isopinlog peermessagequeue responsefrompeer need to check if it is a legitimate issue or not | 0 |
2,434 | 8,621,412,897 | IssuesEvent | 2018-11-20 17:16:21 | OXIDprojects/oxid-module-internals | https://api.github.com/repos/OXIDprojects/oxid-module-internals | closed | duplicate code | maintainability pull request | As Maintainer I like to remove the effort in maintaining and to increase the quality by removeing dublicate code.
- I found duplicate code in CheckConsistency.php | True | duplicate code - As Maintainer I like to remove the effort in maintaining and to increase the quality by removeing dublicate code.
- I found duplicate code in CheckConsistency.php | main | duplicate code as maintainer i like to remove the effort in maintaining and to increase the quality by removeing dublicate code i found duplicate code in checkconsistency php | 1 |
736,456 | 25,475,050,331 | IssuesEvent | 2022-11-25 13:42:02 | canonical/canonical.com | https://api.github.com/repos/canonical/canonical.com | closed | Job role id is null for some applications | Priority: High | ## Summary
There are cases when the job role id is null. This is if the candidate doesn't apply and they are referred.
We should not rely on this field for rendering the application page
## Process
- Open [this link](https://pastebin.canonical.com/p/qvsk2QY7Ss/) and visit the candidate
- Visit there application page in custom application fields
- See 500
| 1.0 | Job role id is null for some applications - ## Summary
There are cases when the job role id is null. This is if the candidate doesn't apply and they are referred.
We should not rely on this field for rendering the application page
## Process
- Open [this link](https://pastebin.canonical.com/p/qvsk2QY7Ss/) and visit the candidate
- Visit there application page in custom application fields
- See 500
| non_main | job role id is null for some applications summary there are cases when the job role id is null this is if the candidate doesn t apply and they are referred we should not rely on this field for rendering the application page process open and visit the candidate visit there application page in custom application fields see | 0 |
5,777 | 30,622,840,730 | IssuesEvent | 2023-07-24 09:26:02 | riadvice/bbbeasy | https://api.github.com/repos/riadvice/bbbeasy | closed | Refactor presets data processing by using PresetData class | type: enhancement scope: maintainablity scope: quality | **What is wrong with the current implementation? Please describe.**
We are using too much tables with non referenced keys. It is very easy to mis-type a key while processing presets.
**Describe the solution you'd like**
- Use the `PresetData` class to store and fetch presets data.
- Use the Enum classes properties. | True | Refactor presets data processing by using PresetData class - **What is wrong with the current implementation? Please describe.**
We are using too much tables with non referenced keys. It is very easy to mis-type a key while processing presets.
**Describe the solution you'd like**
- Use the `PresetData` class to store and fetch presets data.
- Use the Enum classes properties. | main | refactor presets data processing by using presetdata class what is wrong with the current implementation please describe we are using too much tables with non referenced keys it is very easy to mis type a key while processing presets describe the solution you d like use the presetdata class to store and fetch presets data use the enum classes properties | 1 |
199,897 | 15,081,496,363 | IssuesEvent | 2021-02-05 13:17:38 | nspcc-dev/neo-go | https://api.github.com/repos/nspcc-dev/neo-go | opened | NNS test race | bug test | Happens occasionally:
```
helper_test.go:584:
Error Trace: helper_test.go:584
native_name_service_test.go:398
native_name_service_test.go:385
native_name_service_test.go:165
Error: Not equal:
expected: &stackitem.Map{value:[]stackitem.MapElement{stackitem.MapElement{Key:(*stackitem.ByteArray)(0xc00020f400), Value:(*stackitem.ByteArray)(0xc00020f420)}, stackitem.MapElement{Key:(*stackitem.ByteArray)(0xc00020f480), Value:(*stackitem.ByteArray)(0xc00020f4a0)}, stackitem.MapElement{Key:(*stackitem.ByteArray)(0xc00020f4e0), Value:(*stackitem.BigInteger)(0xc000010458)}}}
actual : &stackitem.Map{value:[]stackitem.MapElement{stackitem.MapElement{Key:(*stackitem.ByteArray)(0xc000113a20), Value:(*stackitem.ByteArray)(0xc000113a40)}, stackitem.MapElement{Key:(*stackitem.ByteArray)(0xc000113aa0), Value:(*stackitem.ByteArray)(0xc000113ac0)}, stackitem.MapElement{Key:(*stackitem.ByteArray)(0xc000113b00), Value:(*stackitem.BigInteger)(0xc000010540)}}}
Diff:
--- Expected
+++ Actual
@@ -35,3 +35,3 @@
abs: (big.nat) (len=1) {
- (big.Word) 1644066964
+ (big.Word) 1644066963
}
Test: TestRegisterAndRenew
logger.go:130: 2021-02-05T16:16:04.011+0300 INFO blockchain persist completed {"persistedBlocks": 18, "persistedKeys": 473, "headerHeight": 18, "blockHeight": 18, "took": "274.453µs"}
FAIL
``` | 1.0 | NNS test race - Happens occasionally:
```
helper_test.go:584:
Error Trace: helper_test.go:584
native_name_service_test.go:398
native_name_service_test.go:385
native_name_service_test.go:165
Error: Not equal:
expected: &stackitem.Map{value:[]stackitem.MapElement{stackitem.MapElement{Key:(*stackitem.ByteArray)(0xc00020f400), Value:(*stackitem.ByteArray)(0xc00020f420)}, stackitem.MapElement{Key:(*stackitem.ByteArray)(0xc00020f480), Value:(*stackitem.ByteArray)(0xc00020f4a0)}, stackitem.MapElement{Key:(*stackitem.ByteArray)(0xc00020f4e0), Value:(*stackitem.BigInteger)(0xc000010458)}}}
actual : &stackitem.Map{value:[]stackitem.MapElement{stackitem.MapElement{Key:(*stackitem.ByteArray)(0xc000113a20), Value:(*stackitem.ByteArray)(0xc000113a40)}, stackitem.MapElement{Key:(*stackitem.ByteArray)(0xc000113aa0), Value:(*stackitem.ByteArray)(0xc000113ac0)}, stackitem.MapElement{Key:(*stackitem.ByteArray)(0xc000113b00), Value:(*stackitem.BigInteger)(0xc000010540)}}}
Diff:
--- Expected
+++ Actual
@@ -35,3 +35,3 @@
abs: (big.nat) (len=1) {
- (big.Word) 1644066964
+ (big.Word) 1644066963
}
Test: TestRegisterAndRenew
logger.go:130: 2021-02-05T16:16:04.011+0300 INFO blockchain persist completed {"persistedBlocks": 18, "persistedKeys": 473, "headerHeight": 18, "blockHeight": 18, "took": "274.453µs"}
FAIL
``` | non_main | nns test race happens occasionally helper test go error trace helper test go native name service test go native name service test go native name service test go error not equal expected stackitem map value stackitem mapelement stackitem mapelement key stackitem bytearray value stackitem bytearray stackitem mapelement key stackitem bytearray value stackitem bytearray stackitem mapelement key stackitem bytearray value stackitem biginteger actual stackitem map value stackitem mapelement stackitem mapelement key stackitem bytearray value stackitem bytearray stackitem mapelement key stackitem bytearray value stackitem bytearray stackitem mapelement key stackitem bytearray value stackitem biginteger diff expected actual abs big nat len big word big word test testregisterandrenew logger go info blockchain persist completed persistedblocks persistedkeys headerheight blockheight took fail | 0 |
5,553 | 27,794,837,705 | IssuesEvent | 2023-03-17 11:43:06 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | Bug when adding and deleting rows. | type: bug work: frontend status: ready restricted: maintainers | ## Description
<!-- A clear and concise description of what the bug is. -->
Weird bugs while adding new rows and deleting them in the table.
When we add more that 2 new rows in the table at once, and then try to fill it it works. But deleting any row after these operation causes the page to buffer. Though we find the row deleted when we refresh the page.
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
There should be no infinite buffering when deleting rows after adding many of them.
## To Reproduce
I am attaching a video:
[Screencast from 13-02-23 01:10:04 AM IST.webm](https://user-images.githubusercontent.com/120925871/218361008-d44a2c05-d26f-412a-a8dc-242699a2128e.webm)
| True | Bug when adding and deleting rows. - ## Description
<!-- A clear and concise description of what the bug is. -->
Weird bugs while adding new rows and deleting them in the table.
When we add more that 2 new rows in the table at once, and then try to fill it it works. But deleting any row after these operation causes the page to buffer. Though we find the row deleted when we refresh the page.
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
There should be no infinite buffering when deleting rows after adding many of them.
## To Reproduce
I am attaching a video:
[Screencast from 13-02-23 01:10:04 AM IST.webm](https://user-images.githubusercontent.com/120925871/218361008-d44a2c05-d26f-412a-a8dc-242699a2128e.webm)
| main | bug when adding and deleting rows description weird bugs while adding new rows and deleting them in the table when we add more that new rows in the table at once and then try to fill it it works but deleting any row after these operation causes the page to buffer though we find the row deleted when we refresh the page expected behavior there should be no infinite buffering when deleting rows after adding many of them to reproduce i am attaching a video | 1 |
707,830 | 24,320,873,020 | IssuesEvent | 2022-09-30 10:36:13 | grpc/grpc | https://api.github.com/repos/grpc/grpc | opened | grpc_extra_deps disallows to define a version of go toolchain | kind/bug lang/core priority/P2 untriaged | ### What version of gRPC and what language are you using?
gRPC version : `1.49.1`
Go version : `1.18`
### What operating system (Linux, Windows,...) and version?
Ubuntu 22.04
### What runtime / compiler are you using (e.g. python version or version of gcc)
Go version : `1.18`
Bazel : `5.3.0`
### What did you do?
Consider following WORKSPACE file :
```
workspace(name = "test")
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "io_bazel_rules_go",
sha256 = "099a9fb96a376ccbbb7d291ed4ecbdfd42f6bc822ab77ae6f1b5cb9e914e94fa",
urls = [
"https://mirror.bazel.build/github.com/bazelbuild/rules_go/releases/download/v0.35.0/rules_go-v0.35.0.zip",
"https://github.com/bazelbuild/rules_go/releases/download/v0.35.0/rules_go-v0.35.0.zip",
],
)
load("@io_bazel_rules_go//go:deps.bzl", "go_register_toolchains", "go_rules_dependencies", "go_download_sdk")
go_download_sdk(
name = "go_sdk",
version = "1.18",
)
go_rules_dependencies()
go_register_toolchains()
http_archive(
name = "bazel_gazelle",
sha256 = "5982e5463f171da99e3bdaeff8c0f48283a7a5f396ec5282910b9e8a49c0dd7e",
urls = [
"https://mirror.bazel.build/github.com/bazelbuild/bazel-gazelle/releases/download/v0.25.0/bazel-gazelle-v0.25.0.tar.gz",
"https://github.com/bazelbuild/bazel-gazelle/releases/download/v0.25.0/bazel-gazelle-v0.25.0.tar.gz",
],
)
load("@bazel_gazelle//:deps.bzl", "gazelle_dependencies")
load("//:deps.bzl", "go_dependencies")
# gazelle:repository_macro deps.bzl%go_dependencies
go_dependencies()
gazelle_dependencies()
http_archive(
name = "com_github_grpc_grpc",
sha256 = "",
strip_prefix = "grpc-1.49.1",
urls = [
"https://github.com/grpc/grpc/archive/v1.49.1.zip",
],
)
load("@com_github_grpc_grpc//bazel:grpc_deps.bzl", "grpc_deps")
grpc_deps()
load("@com_github_grpc_grpc//bazel:grpc_extra_deps.bzl", "grpc_extra_deps")
grpc_extra_deps()
```
I want to define my own version of the go toolchains and use gazelle to generate bazel build files for my go code.
### What did you expect to see?
```
bazel build //:gazelle
DEBUG:
INFO: Analyzed target //:gazelle (90 packages loaded, 10076 targets configured).
INFO: Found 1 target...
Target //:gazelle up-to-date:
bazel-bin/gazelle-runner.bash
bazel-bin/gazelle
INFO: Elapsed time: 18.606s, Critical Path: 9.76s
INFO: 43 processes: 2 internal, 41 linux-sandbox.
INFO: Build completed successfully, 43 total actions
```
### What did you see instead?
```
bazel build //:gazelle
Starting local Bazel server and connecting to it...
ERROR: Traceback (most recent call last):
File "/home/lucas/projects/grpc-deps-minimal-demo/WORKSPACE", line 58, column 16, in <toplevel>
grpc_extra_deps()
File "/home/lucas/.cache/bazel/_bazel_lucas/03e5ee3de23111152f8b4bc57e72dd15/external/com_github_grpc_grpc/bazel/grpc_extra_deps.bzl", line 56, column 27, in grpc_extra_deps
go_register_toolchains(version = "1.18")
File "/home/lucas/.cache/bazel/_bazel_lucas/03e5ee3de23111152f8b4bc57e72dd15/external/io_bazel_rules_go/go/private/sdk.bzl", line 409, column 13, in go_register_toolchains
fail("go_register_toolchains: version set after go sdk rule declared ({})".format(", ".join([r["name"] for r in sdk_rules])))
Error in fail: go_register_toolchains: version set after go sdk rule declared (go_sdk)
ERROR: error loading package 'external': Package 'external' contains errors
INFO: Elapsed time: 1.489s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded)
```
This issue is a slightly simplified version of #30965 | 1.0 | grpc_extra_deps disallows to define a version of go toolchain - ### What version of gRPC and what language are you using?
gRPC version : `1.49.1`
Go version : `1.18`
### What operating system (Linux, Windows,...) and version?
Ubuntu 22.04
### What runtime / compiler are you using (e.g. python version or version of gcc)
Go version : `1.18`
Bazel : `5.3.0`
### What did you do?
Consider following WORKSPACE file :
```
workspace(name = "test")
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "io_bazel_rules_go",
sha256 = "099a9fb96a376ccbbb7d291ed4ecbdfd42f6bc822ab77ae6f1b5cb9e914e94fa",
urls = [
"https://mirror.bazel.build/github.com/bazelbuild/rules_go/releases/download/v0.35.0/rules_go-v0.35.0.zip",
"https://github.com/bazelbuild/rules_go/releases/download/v0.35.0/rules_go-v0.35.0.zip",
],
)
load("@io_bazel_rules_go//go:deps.bzl", "go_register_toolchains", "go_rules_dependencies", "go_download_sdk")
go_download_sdk(
name = "go_sdk",
version = "1.18",
)
go_rules_dependencies()
go_register_toolchains()
http_archive(
name = "bazel_gazelle",
sha256 = "5982e5463f171da99e3bdaeff8c0f48283a7a5f396ec5282910b9e8a49c0dd7e",
urls = [
"https://mirror.bazel.build/github.com/bazelbuild/bazel-gazelle/releases/download/v0.25.0/bazel-gazelle-v0.25.0.tar.gz",
"https://github.com/bazelbuild/bazel-gazelle/releases/download/v0.25.0/bazel-gazelle-v0.25.0.tar.gz",
],
)
load("@bazel_gazelle//:deps.bzl", "gazelle_dependencies")
load("//:deps.bzl", "go_dependencies")
# gazelle:repository_macro deps.bzl%go_dependencies
go_dependencies()
gazelle_dependencies()
http_archive(
name = "com_github_grpc_grpc",
sha256 = "",
strip_prefix = "grpc-1.49.1",
urls = [
"https://github.com/grpc/grpc/archive/v1.49.1.zip",
],
)
load("@com_github_grpc_grpc//bazel:grpc_deps.bzl", "grpc_deps")
grpc_deps()
load("@com_github_grpc_grpc//bazel:grpc_extra_deps.bzl", "grpc_extra_deps")
grpc_extra_deps()
```
I want to define my own version of the go toolchains and use gazelle to generate bazel build files for my go code.
### What did you expect to see?
```
bazel build //:gazelle
DEBUG:
INFO: Analyzed target //:gazelle (90 packages loaded, 10076 targets configured).
INFO: Found 1 target...
Target //:gazelle up-to-date:
bazel-bin/gazelle-runner.bash
bazel-bin/gazelle
INFO: Elapsed time: 18.606s, Critical Path: 9.76s
INFO: 43 processes: 2 internal, 41 linux-sandbox.
INFO: Build completed successfully, 43 total actions
```
### What did you see instead?
```
bazel build //:gazelle
Starting local Bazel server and connecting to it...
ERROR: Traceback (most recent call last):
File "/home/lucas/projects/grpc-deps-minimal-demo/WORKSPACE", line 58, column 16, in <toplevel>
grpc_extra_deps()
File "/home/lucas/.cache/bazel/_bazel_lucas/03e5ee3de23111152f8b4bc57e72dd15/external/com_github_grpc_grpc/bazel/grpc_extra_deps.bzl", line 56, column 27, in grpc_extra_deps
go_register_toolchains(version = "1.18")
File "/home/lucas/.cache/bazel/_bazel_lucas/03e5ee3de23111152f8b4bc57e72dd15/external/io_bazel_rules_go/go/private/sdk.bzl", line 409, column 13, in go_register_toolchains
fail("go_register_toolchains: version set after go sdk rule declared ({})".format(", ".join([r["name"] for r in sdk_rules])))
Error in fail: go_register_toolchains: version set after go sdk rule declared (go_sdk)
ERROR: error loading package 'external': Package 'external' contains errors
INFO: Elapsed time: 1.489s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded)
```
This issue is a slightly simplified version of #30965 | non_main | grpc extra deps disallows to define a version of go toolchain what version of grpc and what language are you using grpc version go version what operating system linux windows and version ubuntu what runtime compiler are you using e g python version or version of gcc go version bazel what did you do consider following workspace file workspace name test load bazel tools tools build defs repo http bzl http archive http archive name io bazel rules go urls load io bazel rules go go deps bzl go register toolchains go rules dependencies go download sdk go download sdk name go sdk version go rules dependencies go register toolchains http archive name bazel gazelle urls load bazel gazelle deps bzl gazelle dependencies load deps bzl go dependencies gazelle repository macro deps bzl go dependencies go dependencies gazelle dependencies http archive name com github grpc grpc strip prefix grpc urls load com github grpc grpc bazel grpc deps bzl grpc deps grpc deps load com github grpc grpc bazel grpc extra deps bzl grpc extra deps grpc extra deps i want to define my own version of the go toolchains and use gazelle to generate bazel build files for my go code what did you expect to see bazel build gazelle debug info analyzed target gazelle packages loaded targets configured info found target target gazelle up to date bazel bin gazelle runner bash bazel bin gazelle info elapsed time critical path info processes internal linux sandbox info build completed successfully total actions what did you see instead bazel build gazelle starting local bazel server and connecting to it error traceback most recent call last file home lucas projects grpc deps minimal demo workspace line column in grpc extra deps file home lucas cache bazel bazel lucas external com github grpc grpc bazel grpc extra deps bzl line column in grpc extra deps go register toolchains version file home lucas cache bazel bazel lucas external io bazel rules go go private sdk bzl line column in go register toolchains fail go register toolchains version set after go sdk rule declared format join for r in sdk rules error in fail go register toolchains version set after go sdk rule declared go sdk error error loading package external package external contains errors info elapsed time info processes failed build did not complete successfully packages loaded this issue is a slightly simplified version of | 0 |
1,703 | 6,574,397,594 | IssuesEvent | 2017-09-11 12:44:42 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | EC2 Instance termination fails with SSH error | affects_1.9 aws bug_report cloud waiting_on_maintainer | ##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
EC2 module
##### ANSIBLE VERSION
```
$ ansible --version
ansible 1.9.3
configured module search path = None
```
##### CONFIGURATION
This job is running from Ansible Tower
##### OS / ENVIRONMENT
Ubuntu 14.04
##### SUMMARY
I have a playbook that launches an instance runs some tasks and later tries to terminate the instance. Getting inconsistent behaviour on termination, sometimes it suceeds other times it fail with an ssh connection error.
##### STEPS TO REPRODUCE
Run a playbook like the following
```
---
- name: Create Datomic Instance
hosts: localhost
vars_files:
- group_vars/datomic
tasks:
- name: "ansible: Create Instance"
ec2:
count: 1
assign_public_ip: no
wait: yes
state: "present"
key_name: "ansible-2016-06-03"
region: "us-east-1"
image: "ami-f25fcfe5"
instance_type: "m4.large"
tenancy: "default"
group: "{{ security_group }}"
vpc_subnet_id: "{{ subnet_id }}"
instance_profile_name: "datomic"
instance_tags:
Name: "datomic"
register: ec2
- name: "ansible: Add to Host Group"
add_host:
hostname: "{{ item.private_ip }}"
groupname: launched
with_items: "{{ ec2.instances }}"
- name: "ansible: Wait for SSH port to come up"
wait_for:
host: "{{ item.private_ip }}"
port: 22
delay: 60
timeout: 320
state: started
with_items: "{{ ec2.instances }}"
- pause: seconds=30
- name: Terminate Datomic Instance
hosts: localhost
connection: local
tasks:
- name: "post: Terminate Datomic Instance"
local_action:
module: ec2
state: "absent"
instance_ids: "{{ ec2.instance_ids }}"
region: "us-east-1"
retries: 3
delay: 1
```
##### EXPECTED RESULTS
Successful termination!
##### ACTUAL RESULTS
Throws a python error about SSH connection termination
```
TASK: [post: Terminate Datomic Instance] *********************************
changed: [localhost -> 127.0.0.1]
PLAY RECAP ********************************************************************
Traceback (most recent call last):
File "/usr/bin/ansible-playbook", line 324, in <module>
sys.exit(main(sys.argv[1:]))
File "/usr/bin/ansible-playbook", line 268, in main
playbook_cb.on_stats(pb.stats)
File "/usr/lib/pymodules/python2.7/ansible/callbacks.py", line 724, in on_stats
call_callback_module('playbook_on_stats', stats)
File "/usr/lib/pymodules/python2.7/ansible/callbacks.py", line 179, in call_callback_module
method(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/awx/plugins/callback/job_event_callback.py", line 447, in playbook_on_stats
self.terminate_ssh_control_masters()
File "/usr/lib/python2.7/dist-packages/awx/plugins/callback/job_event_callback.py", line 279, in terminate_ssh_control_masters
proc.terminate()
File "/usr/lib/python2.7/dist-packages/psutil/__init__.py", line 904, in terminate
self.send_signal(signal.SIGTERM)
File "/usr/lib/python2.7/dist-packages/psutil/__init__.py", line 173, in wrapper
raise NoSuchProcess(self.pid, self._platform_impl._process_name)
psutil._error.NoSuchProcess: process no longer exists (pid=1024, name='ssh')
```
| True | EC2 Instance termination fails with SSH error - ##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
EC2 module
##### ANSIBLE VERSION
```
$ ansible --version
ansible 1.9.3
configured module search path = None
```
##### CONFIGURATION
This job is running from Ansible Tower
##### OS / ENVIRONMENT
Ubuntu 14.04
##### SUMMARY
I have a playbook that launches an instance runs some tasks and later tries to terminate the instance. Getting inconsistent behaviour on termination, sometimes it suceeds other times it fail with an ssh connection error.
##### STEPS TO REPRODUCE
Run a playbook like the following
```
---
- name: Create Datomic Instance
hosts: localhost
vars_files:
- group_vars/datomic
tasks:
- name: "ansible: Create Instance"
ec2:
count: 1
assign_public_ip: no
wait: yes
state: "present"
key_name: "ansible-2016-06-03"
region: "us-east-1"
image: "ami-f25fcfe5"
instance_type: "m4.large"
tenancy: "default"
group: "{{ security_group }}"
vpc_subnet_id: "{{ subnet_id }}"
instance_profile_name: "datomic"
instance_tags:
Name: "datomic"
register: ec2
- name: "ansible: Add to Host Group"
add_host:
hostname: "{{ item.private_ip }}"
groupname: launched
with_items: "{{ ec2.instances }}"
- name: "ansible: Wait for SSH port to come up"
wait_for:
host: "{{ item.private_ip }}"
port: 22
delay: 60
timeout: 320
state: started
with_items: "{{ ec2.instances }}"
- pause: seconds=30
- name: Terminate Datomic Instance
hosts: localhost
connection: local
tasks:
- name: "post: Terminate Datomic Instance"
local_action:
module: ec2
state: "absent"
instance_ids: "{{ ec2.instance_ids }}"
region: "us-east-1"
retries: 3
delay: 1
```
##### EXPECTED RESULTS
Successful termination!
##### ACTUAL RESULTS
Throws a python error about SSH connection termination
```
TASK: [post: Terminate Datomic Instance] *********************************
changed: [localhost -> 127.0.0.1]
PLAY RECAP ********************************************************************
Traceback (most recent call last):
File "/usr/bin/ansible-playbook", line 324, in <module>
sys.exit(main(sys.argv[1:]))
File "/usr/bin/ansible-playbook", line 268, in main
playbook_cb.on_stats(pb.stats)
File "/usr/lib/pymodules/python2.7/ansible/callbacks.py", line 724, in on_stats
call_callback_module('playbook_on_stats', stats)
File "/usr/lib/pymodules/python2.7/ansible/callbacks.py", line 179, in call_callback_module
method(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/awx/plugins/callback/job_event_callback.py", line 447, in playbook_on_stats
self.terminate_ssh_control_masters()
File "/usr/lib/python2.7/dist-packages/awx/plugins/callback/job_event_callback.py", line 279, in terminate_ssh_control_masters
proc.terminate()
File "/usr/lib/python2.7/dist-packages/psutil/__init__.py", line 904, in terminate
self.send_signal(signal.SIGTERM)
File "/usr/lib/python2.7/dist-packages/psutil/__init__.py", line 173, in wrapper
raise NoSuchProcess(self.pid, self._platform_impl._process_name)
psutil._error.NoSuchProcess: process no longer exists (pid=1024, name='ssh')
```
| main | instance termination fails with ssh error issue type bug report component name module ansible version ansible version ansible configured module search path none configuration this job is running from ansible tower os environment ubuntu summary i have a playbook that launches an instance runs some tasks and later tries to terminate the instance getting inconsistent behaviour on termination sometimes it suceeds other times it fail with an ssh connection error steps to reproduce run a playbook like the following name create datomic instance hosts localhost vars files group vars datomic tasks name ansible create instance count assign public ip no wait yes state present key name ansible region us east image ami instance type large tenancy default group security group vpc subnet id subnet id instance profile name datomic instance tags name datomic register name ansible add to host group add host hostname item private ip groupname launched with items instances name ansible wait for ssh port to come up wait for host item private ip port delay timeout state started with items instances pause seconds name terminate datomic instance hosts localhost connection local tasks name post terminate datomic instance local action module state absent instance ids instance ids region us east retries delay expected results successful termination actual results throws a python error about ssh connection termination task changed play recap traceback most recent call last file usr bin ansible playbook line in sys exit main sys argv file usr bin ansible playbook line in main playbook cb on stats pb stats file usr lib pymodules ansible callbacks py line in on stats call callback module playbook on stats stats file usr lib pymodules ansible callbacks py line in call callback module method args kwargs file usr lib dist packages awx plugins callback job event callback py line in playbook on stats self terminate ssh control masters file usr lib dist packages awx plugins callback job event callback py line in terminate ssh control masters proc terminate file usr lib dist packages psutil init py line in terminate self send signal signal sigterm file usr lib dist packages psutil init py line in wrapper raise nosuchprocess self pid self platform impl process name psutil error nosuchprocess process no longer exists pid name ssh | 1 |
34,222 | 4,894,189,719 | IssuesEvent | 2016-11-19 05:07:29 | ensime/scala-debugger | https://api.github.com/repos/ensime/scala-debugger | opened | LoopingTaskRunner unstable test | bug for test | Rarely happens, but the test should be revised to not have this race condition.
```
[00:08:09] [info] - should stop future execution of tasks *** FAILED *** (625 milliseconds)
[00:08:09] [info] 1 was not equal to 0 (LoopingTaskRunnerSpec.scala:174)
[00:08:09] [info] org.scalatest.exceptions.TestFailedException:
[00:08:09] [info] at org.scalatest.MatchersHelper$.newTestFailedException(MatchersHelper.scala:148)
[00:08:09] [info] at org.scalatest.MatchersHelper$.indicateFailure(MatchersHelper.scala:381)
[00:08:09] [info] at org.scalatest.Matchers$ShouldMethodHelper$.shouldMatcher(Matchers.scala:6735)
[00:08:09] [info] at org.scalatest.Matchers$AnyShouldWrapper.should(Matchers.scala:6772)
[00:08:09] [info] at org.scaladebugger.api.utils.LoopingTaskRunnerSpec$$anonfun$1$$anonfun$apply$mcV$sp$4$$anonfun$apply$mcV$sp$16.apply(LoopingTaskRunnerSpec.scala:174)
[00:08:09] [info] at org.scaladebugger.api.utils.LoopingTaskRunnerSpec$$anonfun$1$$anonfun$apply$mcV$sp$4$$anonfun$apply$mcV$sp$16.apply(LoopingTaskRunnerSpec.scala:149)
[00:08:09] [info] at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
[00:08:09] [info] at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
[00:08:09] [info] at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
[00:08:09] [info] at org.scalatest.Transformer.apply(Transformer.scala:22)
[00:08:09] [info] at org.scalatest.Transformer.apply(Transformer.scala:20)
[00:08:09] [info] at org.scalatest.FunSpecLike$$anon$1.apply(FunSpecLike.scala:456)
[00:08:09] [info] at org.scalatest.Suite$class.withFixture(Suite.scala:1031)
[00:08:09] [info] at test.ParallelMockFunSpec.org$scalamock$scalatest$AbstractMockFactory$$super$withFixture(ParallelMockFunSpec.scala:7)
[00:08:09] [info] at org.scalamock.scalatest.AbstractMockFactory$$anonfun$withFixture$1.apply(AbstractMockFactory.scala:35)
[00:08:09] [info] at org.scalamock.scalatest.AbstractMockFactory$$anonfun$withFixture$1.apply(AbstractMockFactory.scala:34)
``` | 1.0 | LoopingTaskRunner unstable test - Rarely happens, but the test should be revised to not have this race condition.
```
[00:08:09] [info] - should stop future execution of tasks *** FAILED *** (625 milliseconds)
[00:08:09] [info] 1 was not equal to 0 (LoopingTaskRunnerSpec.scala:174)
[00:08:09] [info] org.scalatest.exceptions.TestFailedException:
[00:08:09] [info] at org.scalatest.MatchersHelper$.newTestFailedException(MatchersHelper.scala:148)
[00:08:09] [info] at org.scalatest.MatchersHelper$.indicateFailure(MatchersHelper.scala:381)
[00:08:09] [info] at org.scalatest.Matchers$ShouldMethodHelper$.shouldMatcher(Matchers.scala:6735)
[00:08:09] [info] at org.scalatest.Matchers$AnyShouldWrapper.should(Matchers.scala:6772)
[00:08:09] [info] at org.scaladebugger.api.utils.LoopingTaskRunnerSpec$$anonfun$1$$anonfun$apply$mcV$sp$4$$anonfun$apply$mcV$sp$16.apply(LoopingTaskRunnerSpec.scala:174)
[00:08:09] [info] at org.scaladebugger.api.utils.LoopingTaskRunnerSpec$$anonfun$1$$anonfun$apply$mcV$sp$4$$anonfun$apply$mcV$sp$16.apply(LoopingTaskRunnerSpec.scala:149)
[00:08:09] [info] at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
[00:08:09] [info] at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
[00:08:09] [info] at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
[00:08:09] [info] at org.scalatest.Transformer.apply(Transformer.scala:22)
[00:08:09] [info] at org.scalatest.Transformer.apply(Transformer.scala:20)
[00:08:09] [info] at org.scalatest.FunSpecLike$$anon$1.apply(FunSpecLike.scala:456)
[00:08:09] [info] at org.scalatest.Suite$class.withFixture(Suite.scala:1031)
[00:08:09] [info] at test.ParallelMockFunSpec.org$scalamock$scalatest$AbstractMockFactory$$super$withFixture(ParallelMockFunSpec.scala:7)
[00:08:09] [info] at org.scalamock.scalatest.AbstractMockFactory$$anonfun$withFixture$1.apply(AbstractMockFactory.scala:35)
[00:08:09] [info] at org.scalamock.scalatest.AbstractMockFactory$$anonfun$withFixture$1.apply(AbstractMockFactory.scala:34)
``` | non_main | loopingtaskrunner unstable test rarely happens but the test should be revised to not have this race condition should stop future execution of tasks failed milliseconds was not equal to loopingtaskrunnerspec scala org scalatest exceptions testfailedexception at org scalatest matchershelper newtestfailedexception matchershelper scala at org scalatest matchershelper indicatefailure matchershelper scala at org scalatest matchers shouldmethodhelper shouldmatcher matchers scala at org scalatest matchers anyshouldwrapper should matchers scala at org scaladebugger api utils loopingtaskrunnerspec anonfun anonfun apply mcv sp anonfun apply mcv sp apply loopingtaskrunnerspec scala at org scaladebugger api utils loopingtaskrunnerspec anonfun anonfun apply mcv sp anonfun apply mcv sp apply loopingtaskrunnerspec scala at org scalatest transformer anonfun apply apply mcv sp transformer scala at org scalatest outcomeof class outcomeof outcomeof scala at org scalatest outcomeof outcomeof outcomeof scala at org scalatest transformer apply transformer scala at org scalatest transformer apply transformer scala at org scalatest funspeclike anon apply funspeclike scala at org scalatest suite class withfixture suite scala at test parallelmockfunspec org scalamock scalatest abstractmockfactory super withfixture parallelmockfunspec scala at org scalamock scalatest abstractmockfactory anonfun withfixture apply abstractmockfactory scala at org scalamock scalatest abstractmockfactory anonfun withfixture apply abstractmockfactory scala | 0 |
2,475 | 8,639,907,786 | IssuesEvent | 2018-11-23 22:35:47 | F5OEO/rpitx | https://api.github.com/repos/F5OEO/rpitx | closed | Question on Useage | V1 related (not maintained) | I am trying to capture various RF signals from various household remotes using RTL_SDR for play back using the RPITX software. I am looking to use these signals in a home automation software I am using. I believe I have successfully captured 2 signals as .IQ files and attempted to play them back using RPITX. The signal broadcast as I was able to see it in SDR# but my fan did not turn on as I was hoping. I am not sure if the files need to be converted in any way
I have included links to the files:
Turn fan low: https://drive.google.com/open?id=0B0MQEEqJHGUIcDQtaWpWWEE3OUE
Turn fan off: https://drive.google.com/open?id=0B0MQEEqJHGUIWHlZeTNNNUtpUk0
This is the process I am using:
Run the following command: rtl_sdr -f 303849000 capture.iq and hit the button on the remote. The ceiling fan uses 303.849.000Mhz
Then try to transmith the signal by running this command on the pi: sudo rpitx rpitx -i /home/pi/rpitx/sig/fan_low.iq -f 303849000
Output from rpitx (looks like the wrong frequency is being broadcast)
Warning : Using harmonic 41 memory: 1024 MB processor: Broadcom BCM2836 i2cDevice: /dev/i2c-1 model: Model B Pi 2 manufacturer: Embest pcb revision: 1 warranty void: no revision: a21041 peripheral base: 0x3f000000 Jessie Using mbox device /dev/vcio. 3616000 Size NUM PAGES 883 PAGE_SIZE 4096 MASH 1 Freq PLL# 6 Calibrate : ppm=0 DMA 167ns:1503ns WaitNano=20833 F1=361486188.332892 TuneFrequency 361509780.765417 F2=361518093.5 56929 Initial Resolution(Hz)=31905.224036 ResolutionPWMF 257.300194 NbStep=124 D ELAYStep=9 ****** STARTING TRANSMIT ******** END OF PiTx | True | Question on Useage - I am trying to capture various RF signals from various household remotes using RTL_SDR for play back using the RPITX software. I am looking to use these signals in a home automation software I am using. I believe I have successfully captured 2 signals as .IQ files and attempted to play them back using RPITX. The signal broadcast as I was able to see it in SDR# but my fan did not turn on as I was hoping. I am not sure if the files need to be converted in any way
I have included links to the files:
Turn fan low: https://drive.google.com/open?id=0B0MQEEqJHGUIcDQtaWpWWEE3OUE
Turn fan off: https://drive.google.com/open?id=0B0MQEEqJHGUIWHlZeTNNNUtpUk0
This is the process I am using:
Run the following command: rtl_sdr -f 303849000 capture.iq and hit the button on the remote. The ceiling fan uses 303.849.000Mhz
Then try to transmith the signal by running this command on the pi: sudo rpitx rpitx -i /home/pi/rpitx/sig/fan_low.iq -f 303849000
Output from rpitx (looks like the wrong frequency is being broadcast)
Warning : Using harmonic 41 memory: 1024 MB processor: Broadcom BCM2836 i2cDevice: /dev/i2c-1 model: Model B Pi 2 manufacturer: Embest pcb revision: 1 warranty void: no revision: a21041 peripheral base: 0x3f000000 Jessie Using mbox device /dev/vcio. 3616000 Size NUM PAGES 883 PAGE_SIZE 4096 MASH 1 Freq PLL# 6 Calibrate : ppm=0 DMA 167ns:1503ns WaitNano=20833 F1=361486188.332892 TuneFrequency 361509780.765417 F2=361518093.5 56929 Initial Resolution(Hz)=31905.224036 ResolutionPWMF 257.300194 NbStep=124 D ELAYStep=9 ****** STARTING TRANSMIT ******** END OF PiTx | main | question on useage i am trying to capture various rf signals from various household remotes using rtl sdr for play back using the rpitx software i am looking to use these signals in a home automation software i am using i believe i have successfully captured signals as iq files and attempted to play them back using rpitx the signal broadcast as i was able to see it in sdr but my fan did not turn on as i was hoping i am not sure if the files need to be converted in any way i have included links to the files turn fan low turn fan off this is the process i am using run the following command rtl sdr f capture iq and hit the button on the remote the ceiling fan uses then try to transmith the signal by running this command on the pi sudo rpitx rpitx i home pi rpitx sig fan low iq f output from rpitx looks like the wrong frequency is being broadcast warning using harmonic memory mb processor broadcom dev model model b pi manufacturer embest pcb revision warranty void no revision peripheral base jessie using mbox device dev vcio size num pages page size mash freq pll calibrate ppm dma waitnano tunefrequency initial resolution hz resolutionpwmf nbstep d elaystep starting transmit end of pitx | 1 |
3,096 | 11,756,224,709 | IssuesEvent | 2020-03-13 11:06:07 | PointCloudLibrary/pcl | https://api.github.com/repos/PointCloudLibrary/pcl | closed | [MacOS CI] Issue with cmath, boost and Apple | kind: compile error kind: todo module: ci needs: maintainer feedback | <!--- WARNING: This is an issue tracker. Before opening a new issue make sure you read https://github.com/PointCloudLibrary/pcl/blob/master/CONTRIBUTING.md#using-the-issue-tracker. -->
<!--- Provide a general summary of the issue in the Title above -->
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Operating System and version: MacOS 10.14 (CI)
* Compiler: AppleClang 11
* PCL Version: HEAD
## Context
MacOS CI is red
## Current Behavior
```
/usr/local/include/boost/math/special_functions/fpclassify.hpp:552:17: error: no member named 'isnan' in namespace 'std'; did you mean simply 'isnan'?
return (std::isnan)(x);
^~~~~
/usr/local/include/boost/math/special_functions/math_fwd.hpp:885:9: note: 'isnan' declared here
bool isnan BOOST_NO_MACRO_EXPAND(T t);
^
``` | True | [MacOS CI] Issue with cmath, boost and Apple - <!--- WARNING: This is an issue tracker. Before opening a new issue make sure you read https://github.com/PointCloudLibrary/pcl/blob/master/CONTRIBUTING.md#using-the-issue-tracker. -->
<!--- Provide a general summary of the issue in the Title above -->
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Operating System and version: MacOS 10.14 (CI)
* Compiler: AppleClang 11
* PCL Version: HEAD
## Context
MacOS CI is red
## Current Behavior
```
/usr/local/include/boost/math/special_functions/fpclassify.hpp:552:17: error: no member named 'isnan' in namespace 'std'; did you mean simply 'isnan'?
return (std::isnan)(x);
^~~~~
/usr/local/include/boost/math/special_functions/math_fwd.hpp:885:9: note: 'isnan' declared here
bool isnan BOOST_NO_MACRO_EXPAND(T t);
^
``` | main | issue with cmath boost and apple your environment operating system and version macos ci compiler appleclang pcl version head context macos ci is red current behavior usr local include boost math special functions fpclassify hpp error no member named isnan in namespace std did you mean simply isnan return std isnan x usr local include boost math special functions math fwd hpp note isnan declared here bool isnan boost no macro expand t t | 1 |
199,383 | 6,988,965,618 | IssuesEvent | 2017-12-14 14:46:49 | Rsl1122/Plan-PlayerAnalytics | https://api.github.com/repos/Rsl1122/Plan-PlayerAnalytics | closed | NPE main.java.com.djrapitops.plan.command.commands.RegisterCommandFilter.isSensibleCommand(RegisterCommandFilter.java:72) | Enhancement Priority: MEDIUM status: Done | The following error is displayed when trying to use some RedProtect commands such as /rp addleader \<player\>, and the player who used the command gets "An internal error occurred while attempting to perform this command":
```
11.12 00:56:34 [Server] INFO ... 5 more
11.12 00:56:34 [Server] INFO at net.minecraft.server.v1_10_R1.SystemUtils.a(SourceFile:45) ~[TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_101]
11.12 00:56:34 [Server] INFO at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_101]
11.12 00:56:34 [Server] INFO at net.minecraft.server.v1_10_R1.PlayerConnectionUtils$1.run(SourceFile:13) ~[TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at net.minecraft.server.v1_10_R1.PacketPlayInChat.a(PacketPlayInChat.java:5) ~[TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at net.minecraft.server.v1_10_R1.PacketPlayInChat.a(PacketPlayInChat.java:45) ~[TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at net.minecraft.server.v1_10_R1.PlayerConnection.a(PlayerConnection.java:1203) ~[TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at net.minecraft.server.v1_10_R1.PlayerConnection.handleCommand(PlayerConnection.java:1404) ~[TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at java.util.logging.Logger.log(Logger.java:875) ~[?:1.8.0_101]
11.12 00:56:34 [Server] INFO at java.util.logging.Logger.doLog(Logger.java:765) ~[?:1.8.0_101]
11.12 00:56:34 [Server] INFO at java.util.logging.Logger.log(Logger.java:738) ~[?:1.8.0_101]
11.12 00:56:34 [Server] INFO at org.bukkit.craftbukkit.v1_10_R1.util.ForwardLogHandler.publish(ForwardLogHandler.java:33) ~[TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at org.apache.logging.log4j.spi.AbstractLogger.error(AbstractLogger.java:609) [TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at org.apache.logging.log4j.core.Logger.log(Logger.java:110) [TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:367) [TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:400) ~[TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at org.apache.logging.log4j.core.filter.AbstractFilterable.isFiltered(AbstractFilterable.java:124) ~[TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at org.apache.logging.log4j.core.filter.CompositeFilter.filter(CompositeFilter.java:231) ~[TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at main.java.com.djrapitops.plan.command.commands.RegisterCommandFilter.filter(RegisterCommandFilter.java:35) ~[Plan-4.1.3.1.jar:2.1.1]
11.12 00:56:34 [Server] INFO at main.java.com.djrapitops.plan.command.commands.RegisterCommandFilter.validateMessage(RegisterCommandFilter.java:62) ~[Plan-4.1.3.1.jar:2.1.1]
11.12 00:56:34 [Server] INFO at main.java.com.djrapitops.plan.command.commands.RegisterCommandFilter.validateMessage(RegisterCommandFilter.java:66) ~[Plan-4.1.3.1.jar:2.1.1]
11.12 00:56:34 [Server] INFO at main.java.com.djrapitops.plan.command.commands.RegisterCommandFilter.isSensibleCommand(RegisterCommandFilter.java:72) ~[Plan-4.1.3.1.jar:2.1.1]
11.12 00:56:34 [Server] INFO Caused by: java.lang.NullPointerException
11.12 00:56:34 [Server] INFO at java.lang.Thread.run(Thread.java:745) [?:1.8.0_101]
11.12 00:56:34 [Server] INFO at net.minecraft.server.v1_10_R1.MinecraftServer.run(MinecraftServer.java:639) [TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at net.minecraft.server.v1_10_R1.MinecraftServer.C(MinecraftServer.java:740) [TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at net.minecraft.server.v1_10_R1.DedicatedServer.D(DedicatedServer.java:404) [TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at net.minecraft.server.v1_10_R1.MinecraftServer.D(MinecraftServer.java:808) [TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at net.minecraft.server.v1_10_R1.SystemUtils.a(SourceFile:46) [TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at java.util.concurrent.FutureTask.get(FutureTask.java:192) ~[?:1.8.0_101]
11.12 00:56:34 [Server] INFO at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:1.8.0_101]
11.12 00:56:34 [Server] INFO java.util.concurrent.ExecutionException: java.lang.NullPointerException
11.12 00:56:34 [Server] FATAL Error executing task
```
### Server Information
**Plan Version:** 4.1.3.1
**Server:** Paper 1.10.2-R0.1-SNAPSHOT git-TacoSpigot-7af2abbe (MC: 1.10.2)
**Database:** MySQL schema v13
### Logged Errors
```
java.io.FileNotFoundException: File not found inside jar: web/error.html
main.java.com.djrapitops.plan.utilities.file.FileUtil.lines(FileUtil.java:83)
main.java.com.djrapitops.plan.utilities.file.FileUtil.lines(FileUtil.java:43)
main.java.com.djrapitops.plan.utilities.file.FileUtil.getStringFromResource(FileUtil.java:28)
main.java.com.djrapitops.plan.systems.webserver.response.ErrorResponse.(ErrorResponse.java:29)
main.java.com.djrapitops.plan.systems.webserver.response.NotFoundResponse.(NotFoundResponse.java:18)
main.java.com.djrapitops.plan.systems.info.BukkitInformationManager.lambda$cachePlayer$0(BukkitInformationManager.java:122)
main.java.com.djrapitops.plan.systems.webserver.PageCache.cachePage(PageCache.java:96)
main.java.com.djrapitops.plan.systems.info.BukkitInformationManager.cachePlayer(BukkitInformationManager.java:117)
main.java.com.djrapitops.plan.systems.cache.SessionCache.cacheSession(SessionCache.java:34)
main.java.com.djrapitops.plan.systems.listeners.PlanPlayerListener.onPlayerJoin(PlanPlayerListener.java:108)
com.destroystokyo.paper.event.executor.asm.generated.GeneratedEventExecutor938.execute(Unknown Source)
org.bukkit.plugin.EventExecutor$1.execute(EventExecutor.java:44)
co.aikar.timings.TimedEventExecutor.execute(TimedEventExecutor.java:78)
org.bukkit.plugin.RegisteredListener.callEvent(RegisteredListener.java:62)
org.bukkit.plugin.SimplePluginManager.fireEvent(SimplePluginManager.java:517)
org.bukkit.plugin.SimplePluginManager.callEvent(SimplePluginManager.java:502)
net.minecraft.server.v1_10_R1.PlayerList.onPlayerJoin(PlayerList.java:357)
net.minecraft.server.v1_10_R1.PlayerList.a(PlayerList.java:177)
net.minecraft.server.v1_10_R1.LoginListener.b(LoginListener.java:144)
net.minecraft.server.v1_10_R1.LoginListener.E_(LoginListener.java:54)
net.minecraft.server.v1_10_R1.NetworkManager.a(NetworkManager.java:233)
net.minecraft.server.v1_10_R1.ServerConnection.c(ServerConnection.java:152)
net.minecraft.server.v1_10_R1.MinecraftServer.D(MinecraftServer.java:903)
net.minecraft.server.v1_10_R1.DedicatedServer.D(DedicatedServer.java:404)
net.minecraft.server.v1_10_R1.MinecraftServer.C(MinecraftServer.java:740)
net.minecraft.server.v1_10_R1.MinecraftServer.run(MinecraftServer.java:639)
java.lang.Thread.run(Thread.java:745)
```
- [ ] Fixed
| 1.0 | NPE main.java.com.djrapitops.plan.command.commands.RegisterCommandFilter.isSensibleCommand(RegisterCommandFilter.java:72) - The following error is displayed when trying to use some RedProtect commands such as /rp addleader \<player\>, and the player who used the command gets "An internal error occurred while attempting to perform this command":
```
11.12 00:56:34 [Server] INFO ... 5 more
11.12 00:56:34 [Server] INFO at net.minecraft.server.v1_10_R1.SystemUtils.a(SourceFile:45) ~[TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_101]
11.12 00:56:34 [Server] INFO at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_101]
11.12 00:56:34 [Server] INFO at net.minecraft.server.v1_10_R1.PlayerConnectionUtils$1.run(SourceFile:13) ~[TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at net.minecraft.server.v1_10_R1.PacketPlayInChat.a(PacketPlayInChat.java:5) ~[TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at net.minecraft.server.v1_10_R1.PacketPlayInChat.a(PacketPlayInChat.java:45) ~[TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at net.minecraft.server.v1_10_R1.PlayerConnection.a(PlayerConnection.java:1203) ~[TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at net.minecraft.server.v1_10_R1.PlayerConnection.handleCommand(PlayerConnection.java:1404) ~[TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at java.util.logging.Logger.log(Logger.java:875) ~[?:1.8.0_101]
11.12 00:56:34 [Server] INFO at java.util.logging.Logger.doLog(Logger.java:765) ~[?:1.8.0_101]
11.12 00:56:34 [Server] INFO at java.util.logging.Logger.log(Logger.java:738) ~[?:1.8.0_101]
11.12 00:56:34 [Server] INFO at org.bukkit.craftbukkit.v1_10_R1.util.ForwardLogHandler.publish(ForwardLogHandler.java:33) ~[TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at org.apache.logging.log4j.spi.AbstractLogger.error(AbstractLogger.java:609) [TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at org.apache.logging.log4j.core.Logger.log(Logger.java:110) [TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:367) [TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:400) ~[TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at org.apache.logging.log4j.core.filter.AbstractFilterable.isFiltered(AbstractFilterable.java:124) ~[TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at org.apache.logging.log4j.core.filter.CompositeFilter.filter(CompositeFilter.java:231) ~[TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at main.java.com.djrapitops.plan.command.commands.RegisterCommandFilter.filter(RegisterCommandFilter.java:35) ~[Plan-4.1.3.1.jar:2.1.1]
11.12 00:56:34 [Server] INFO at main.java.com.djrapitops.plan.command.commands.RegisterCommandFilter.validateMessage(RegisterCommandFilter.java:62) ~[Plan-4.1.3.1.jar:2.1.1]
11.12 00:56:34 [Server] INFO at main.java.com.djrapitops.plan.command.commands.RegisterCommandFilter.validateMessage(RegisterCommandFilter.java:66) ~[Plan-4.1.3.1.jar:2.1.1]
11.12 00:56:34 [Server] INFO at main.java.com.djrapitops.plan.command.commands.RegisterCommandFilter.isSensibleCommand(RegisterCommandFilter.java:72) ~[Plan-4.1.3.1.jar:2.1.1]
11.12 00:56:34 [Server] INFO Caused by: java.lang.NullPointerException
11.12 00:56:34 [Server] INFO at java.lang.Thread.run(Thread.java:745) [?:1.8.0_101]
11.12 00:56:34 [Server] INFO at net.minecraft.server.v1_10_R1.MinecraftServer.run(MinecraftServer.java:639) [TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at net.minecraft.server.v1_10_R1.MinecraftServer.C(MinecraftServer.java:740) [TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at net.minecraft.server.v1_10_R1.DedicatedServer.D(DedicatedServer.java:404) [TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at net.minecraft.server.v1_10_R1.MinecraftServer.D(MinecraftServer.java:808) [TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at net.minecraft.server.v1_10_R1.SystemUtils.a(SourceFile:46) [TacoSpigot%201.10.jar:git-TacoSpigot-7af2abbe]
11.12 00:56:34 [Server] INFO at java.util.concurrent.FutureTask.get(FutureTask.java:192) ~[?:1.8.0_101]
11.12 00:56:34 [Server] INFO at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:1.8.0_101]
11.12 00:56:34 [Server] INFO java.util.concurrent.ExecutionException: java.lang.NullPointerException
11.12 00:56:34 [Server] FATAL Error executing task
```
### Server Information
**Plan Version:** 4.1.3.1
**Server:** Paper 1.10.2-R0.1-SNAPSHOT git-TacoSpigot-7af2abbe (MC: 1.10.2)
**Database:** MySQL schema v13
### Logged Errors
```
java.io.FileNotFoundException: File not found inside jar: web/error.html
main.java.com.djrapitops.plan.utilities.file.FileUtil.lines(FileUtil.java:83)
main.java.com.djrapitops.plan.utilities.file.FileUtil.lines(FileUtil.java:43)
main.java.com.djrapitops.plan.utilities.file.FileUtil.getStringFromResource(FileUtil.java:28)
main.java.com.djrapitops.plan.systems.webserver.response.ErrorResponse.(ErrorResponse.java:29)
main.java.com.djrapitops.plan.systems.webserver.response.NotFoundResponse.(NotFoundResponse.java:18)
main.java.com.djrapitops.plan.systems.info.BukkitInformationManager.lambda$cachePlayer$0(BukkitInformationManager.java:122)
main.java.com.djrapitops.plan.systems.webserver.PageCache.cachePage(PageCache.java:96)
main.java.com.djrapitops.plan.systems.info.BukkitInformationManager.cachePlayer(BukkitInformationManager.java:117)
main.java.com.djrapitops.plan.systems.cache.SessionCache.cacheSession(SessionCache.java:34)
main.java.com.djrapitops.plan.systems.listeners.PlanPlayerListener.onPlayerJoin(PlanPlayerListener.java:108)
com.destroystokyo.paper.event.executor.asm.generated.GeneratedEventExecutor938.execute(Unknown Source)
org.bukkit.plugin.EventExecutor$1.execute(EventExecutor.java:44)
co.aikar.timings.TimedEventExecutor.execute(TimedEventExecutor.java:78)
org.bukkit.plugin.RegisteredListener.callEvent(RegisteredListener.java:62)
org.bukkit.plugin.SimplePluginManager.fireEvent(SimplePluginManager.java:517)
org.bukkit.plugin.SimplePluginManager.callEvent(SimplePluginManager.java:502)
net.minecraft.server.v1_10_R1.PlayerList.onPlayerJoin(PlayerList.java:357)
net.minecraft.server.v1_10_R1.PlayerList.a(PlayerList.java:177)
net.minecraft.server.v1_10_R1.LoginListener.b(LoginListener.java:144)
net.minecraft.server.v1_10_R1.LoginListener.E_(LoginListener.java:54)
net.minecraft.server.v1_10_R1.NetworkManager.a(NetworkManager.java:233)
net.minecraft.server.v1_10_R1.ServerConnection.c(ServerConnection.java:152)
net.minecraft.server.v1_10_R1.MinecraftServer.D(MinecraftServer.java:903)
net.minecraft.server.v1_10_R1.DedicatedServer.D(DedicatedServer.java:404)
net.minecraft.server.v1_10_R1.MinecraftServer.C(MinecraftServer.java:740)
net.minecraft.server.v1_10_R1.MinecraftServer.run(MinecraftServer.java:639)
java.lang.Thread.run(Thread.java:745)
```
- [ ] Fixed
| non_main | npe main java com djrapitops plan command commands registercommandfilter issensiblecommand registercommandfilter java the following error is displayed when trying to use some redprotect commands such as rp addleader and the player who used the command gets an internal error occurred while attempting to perform this command info more info at net minecraft server systemutils a sourcefile info at java util concurrent futuretask run futuretask java info at java util concurrent executors runnableadapter call executors java info at net minecraft server playerconnectionutils run sourcefile info at net minecraft server packetplayinchat a packetplayinchat java info at net minecraft server packetplayinchat a packetplayinchat java info at net minecraft server playerconnection a playerconnection java info at net minecraft server playerconnection handlecommand playerconnection java info at java util logging logger log logger java info at java util logging logger dolog logger java info at java util logging logger log logger java info at org bukkit craftbukkit util forwardloghandler publish forwardloghandler java info at org apache logging spi abstractlogger error abstractlogger java info at org apache logging core logger log logger java info at org apache logging core config loggerconfig log loggerconfig java info at org apache logging core config loggerconfig log loggerconfig java info at org apache logging core filter abstractfilterable isfiltered abstractfilterable java info at org apache logging core filter compositefilter filter compositefilter java info at main java com djrapitops plan command commands registercommandfilter filter registercommandfilter java info at main java com djrapitops plan command commands registercommandfilter validatemessage registercommandfilter java info at main java com djrapitops plan command commands registercommandfilter validatemessage registercommandfilter java info at main java com djrapitops plan command commands registercommandfilter issensiblecommand registercommandfilter java info caused by java lang nullpointerexception info at java lang thread run thread java info at net minecraft server minecraftserver run minecraftserver java info at net minecraft server minecraftserver c minecraftserver java info at net minecraft server dedicatedserver d dedicatedserver java info at net minecraft server minecraftserver d minecraftserver java info at net minecraft server systemutils a sourcefile info at java util concurrent futuretask get futuretask java info at java util concurrent futuretask report futuretask java info java util concurrent executionexception java lang nullpointerexception fatal error executing task server information plan version server paper snapshot git tacospigot mc database mysql schema logged errors java io filenotfoundexception file not found inside jar web error html main java com djrapitops plan utilities file fileutil lines fileutil java main java com djrapitops plan utilities file fileutil lines fileutil java main java com djrapitops plan utilities file fileutil getstringfromresource fileutil java main java com djrapitops plan systems webserver response errorresponse errorresponse java main java com djrapitops plan systems webserver response notfoundresponse notfoundresponse java main java com djrapitops plan systems info bukkitinformationmanager lambda cacheplayer bukkitinformationmanager java main java com djrapitops plan systems webserver pagecache cachepage pagecache java main java com djrapitops plan systems info bukkitinformationmanager cacheplayer bukkitinformationmanager java main java com djrapitops plan systems cache sessioncache cachesession sessioncache java main java com djrapitops plan systems listeners planplayerlistener onplayerjoin planplayerlistener java com destroystokyo paper event executor asm generated execute unknown source org bukkit plugin eventexecutor execute eventexecutor java co aikar timings timedeventexecutor execute timedeventexecutor java org bukkit plugin registeredlistener callevent registeredlistener java org bukkit plugin simplepluginmanager fireevent simplepluginmanager java org bukkit plugin simplepluginmanager callevent simplepluginmanager java net minecraft server playerlist onplayerjoin playerlist java net minecraft server playerlist a playerlist java net minecraft server loginlistener b loginlistener java net minecraft server loginlistener e loginlistener java net minecraft server networkmanager a networkmanager java net minecraft server serverconnection c serverconnection java net minecraft server minecraftserver d minecraftserver java net minecraft server dedicatedserver d dedicatedserver java net minecraft server minecraftserver c minecraftserver java net minecraft server minecraftserver run minecraftserver java java lang thread run thread java fixed | 0 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.