Column summary (column name, dtype, and value or length range):

Unnamed: 0     int64     1 to 832k
id             float64   2.49B to 32.1B
type           string    1 distinct value
created_at     string    length 19 to 19
repo           string    length 7 to 112
repo_url       string    length 36 to 141
action         string    3 distinct values
title          string    length 3 to 438
labels         string    length 4 to 308
body           string    length 7 to 254k
index          string    7 distinct values
text_combine   string    length 96 to 254k
label          string    2 distinct values
text           string    length 96 to 246k
binary_label   int64     0 to 1
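The column summary above can be sanity-checked in code. A minimal sketch using a tiny in-memory stand-in (the two illustrative rows are copied from the records below, restricted to a few columns; this is not the full dataset):

```python
import csv
import io

# Two illustrative rows taken from the records below.
raw = """Unnamed: 0,id,type,created_at,repo,action,index,binary_label
2173,7612951592.0,IssuesEvent,2018-05-01 19:24:55,walbourn/contentexporter,closed,main,1
787794,27731294565.0,IssuesEvent,2023-03-15 08:15:08,webcompat/web-bugs,closed,non_main,0
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# created_at is a fixed-width 19-character timestamp, matching the length stats above.
assert all(len(r["created_at"]) == 19 for r in rows)

# binary_label tracks the index column: 1 for "main", 0 otherwise.
for r in rows:
    assert int(r["binary_label"]) == (1 if r["index"] == "main" else 0)
```

For the real data, the same checks would apply after loading the full table with `csv.DictReader` (or an equivalent reader) over the source file.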

Unnamed: 0: 2,173
id: 7,612,951,592
type: IssuesEvent
created_at: 2018-05-01 19:24:55
repo: walbourn/contentexporter
repo_url: https://api.github.com/repos/walbourn/contentexporter
action: closed
title: Remove VS 2013 compiler support
labels: maintainence
body:
The April 2018 releases of DirectXTex, DirectXMesh, and UVAtlas are the last to support VS 2013. See [this blog post](https://blogs.msdn.microsoft.com/chuckw/2018/04/30/github-nuget-and-vso/)
label: True
index: main
binary_label: 1

Unnamed: 0: 2,907
id: 10,327,619,515
type: IssuesEvent
created_at: 2019-09-02 07:29:10
repo: varenc/homebrew-ffmpeg
repo_url: https://api.github.com/repos/varenc/homebrew-ffmpeg
action: closed
title: Setup CI
labels: maintainer-feedback
body:
We should use Travis to make CI automated builds of the formula for: - macOS 10.12–10.14 - Linux (e.g., Ubuntu LTS) A good starting point for a `.travis.yml` could be this one: https://github.com/denji/homebrew-nginx/blob/master/.travis.yml Some other options are visible here: https://github.com/petere/homebrew-postgresql/blob/master/.travis.yml Not a big expert when it comes to Travis, so I'll have to do some offline testing before.
label: True
index: main
binary_label: 1

Unnamed: 0: 787,794
id: 27,731,294,565
type: IssuesEvent
created_at: 2023-03-15 08:15:08
repo: webcompat/web-bugs
repo_url: https://api.github.com/repos/webcompat/web-bugs
action: closed
title: www.instagram.com - see bug description
labels: priority-critical browser-fenix engine-gecko android13
body:
<!-- @browser: Firefox Mobile 111.0 --> <!-- @ua_header: Mozilla/5.0 (Android 13; Mobile; rv:109.0) Gecko/111.0 Firefox/111.0 --> <!-- @reported_with: android-components-reporter --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/119526 --> <!-- @extra_labels: browser-fenix --> **URL**: https://www.instagram.com/accounts/login/ **Browser / Version**: Firefox Mobile 111.0 **Operating System**: Android 13 **Tested Another Browser**: Yes Edge **Problem type**: Something else **Description**: Open an app does not function. only giving browsers as an option. (might be an Instagram issue but figured I would let you guys know) keep up the hard work! Firefox forever! **Steps to Reproduce**: Clicked on the view on Instagram on the page. Mobile Chrome and edge both open Instagram. Firefox opens Instagram.com. using the opening app does not see Instagram as an option only the other web browsers. Instagram has all supported links enabled. Thank y'all for your hard work! As long as Firefox mobile has extension support Firefox is the way to go! <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2023/3/7018cc1a-9771-4d60-8518-f8136c05dd51.jpeg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20230302185836</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> </details> [View console log messages](https://webcompat.com/console_logs/2023/3/d5344739-4db1-455e-a576-e0fe7bee731a) _From [webcompat.com](https://webcompat.com/) with ❤️_
label: 1.0
index: non_main
binary_label: 0

Unnamed: 0: 750,502
id: 26,204,287,728
type: IssuesEvent
created_at: 2023-01-03 20:45:29
repo: grpc/grpc
repo_url: https://api.github.com/repos/grpc/grpc
action: closed
title: Add information about error to grpc_event when processing batch fails
labels: kind/enhancement lang/core priority/P2 untriaged
body:
### Is your feature request related to a problem? Please describe. I am using `grpc_call_start_batch` and `grpc_completion_queue_next` to communicate with server. In case if batch operation fail before sending to server, for example because the size of request body is greater than limit, there is no way to get this error in caller code (except enable tracing). The grpc_event contains only flag, was operation successful or not. ### Describe the solution you'd like It will be very helpful if there was option to get actual error instead of boolean flag in grpc_event, like a pointer to error, that user should clean by itself, or via callback where user should take ownership for the error. At least just the grpc_status instead of bool flag succes will be more useful. ### Describe alternatives you've considered A clear and concise description of any alternative solutions or features you've considered. ### Additional context Add any other context about the feature request here.
label: 1.0
index: non_main
binary_label: 0

Unnamed: 0: 80,212
id: 10,164,978,753
type: IssuesEvent
created_at: 2019-08-07 12:58:44
repo: prometheus/alertmanager
repo_url: https://api.github.com/repos/prometheus/alertmanager
action: closed
title: repeat_interval set 99999h, but in fact it send alert every 120h
labels: component/notify kind/documentation
body:
**What did you do?** I don't want it send alert repeatly , so I set "repeat_interval: 99999h", but it will send alert again after 120h. **What did you expect to see?** I want to know whether there is a maximum value of repeat_interval or not. **What did you see instead? Under which circumstances?** **Environment** * System information: insert output of `uname -srm` here * Alertmanager version: v0.15.2 * Prometheus version: v2.4.3 * Alertmanager configuration file: ``` global: resolve_timeout: 5m route: group_wait: 30s group_interval: 5m repeat_interval: '8h' receiver: 'wechat_platform' group_by: [alertname, instance] routes: - match: source: platform receiver: wechat_platform repeat_interval: 99999h - match: source: business receiver: wechat_business repeat_interval: 99999h continue: true - match: telephone: true receiver: webhook repeat_interval: 99999h templates: - '/etc/alertmanager/template/*.tmpl' receivers: - name: 'wechat_platform' wechat_configs: - send_resolved: false corp_id: xx to_party: xx agent_id: xx api_secret: xx message: '{{ template "platform.message" . }}' - name: 'wechat_business' wechat_configs: - send_resolved: false corp_id: xx to_party: xx agent_id: xx api_secret: xx message: '{{ template "business.message" . }}' - name: 'webhook' webhook_configs: - send_resolved: false url: 'xxx' ``` * Prometheus configuration file: ``` insert configuration here (if relevant to the issue) ``` * Logs: ``` insert Prometheus and Alertmanager logs relevant to the issue here ```
label: 1.0
index: non_main
binary_label: 0

Unnamed: 0: 5,848
id: 31,146,436,597
type: IssuesEvent
created_at: 2023-08-16 06:51:28
repo: tgstation/tgstation
repo_url: https://api.github.com/repos/tgstation/tgstation
action: closed
title: `/datum/unit_test/modify_fantasy_variable` not included in `_unit_tests.dm`
labels: Maintainability/Hinders improvements
body:
## Reproduction: https://github.com/tgstation/tgstation/blob/master/code/modules/unit_tests/modify_fantasy_variable.dm is not included in https://github.com/tgstation/tgstation/blob/master/code/modules/unit_tests/_unit_tests.dm so it doesn't run.
label: True
index: main
binary_label: 1

Unnamed: 0: 1,131
id: 4,998,415,631
type: IssuesEvent
created_at: 2016-12-09 19:47:08
repo: ansible/ansible-modules-core
repo_url: https://api.github.com/repos/ansible/ansible-modules-core
action: closed
title: ini_file documentation falsely states the default for "create"option is yes
labels: affects_2.2 docs_report waiting_on_maintainer
body:
##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME ini_file core module ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION nothing special ##### OS / ENVIRONMENT N/A ##### SUMMARY http://docs.ansible.com/ansible/ini_file_module.html states the default for "create" option is yes. This is false, the default is no, as in the file: /usr/lib/python2.7/dist-packages/ansible/modules/core/files/ini_file.py ##### STEPS TO REPRODUCE ##### EXPECTED RESULTS The documentation should tell the real default OR the default should be yes. (It is a more reasonable default in my opinion) ##### ACTUAL RESULTS
label: True
index: main
binary_label: 1

Unnamed: 0: 5,822
id: 30,822,589,404
type: IssuesEvent
created_at: 2023-08-01 17:26:50
repo: bazelbuild/intellij
repo_url: https://api.github.com/repos/bazelbuild/intellij
action: opened
title: Detecting custom test_suite macros is broken
labels: type: bug awaiting-maintainer
body:
### Description of the bug: Custom test_suite macros like contrib_rules_jvm's [java_test_suite](https://github.com/bazel-contrib/rules_jvm#java_test_suite) should be runnable from BUILD files like standard test_suite rules. However, it appears the heuristics for detecting custom test_suites is broken: https://github.com/bazelbuild/intellij/blob/935db0a69a66a67a9845e3f11c92d72676e4399e/base/src/com/google/idea/blaze/base/model/primitives/Kind.java#L203 Instead of checking if a rule ends in `test_suite`, it checks if it exactly matches "test_suite" or ends in "test_suites". ### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible. Write a BUILD file in IntelliJ with these contents: ```starlark load("@contrib_rules_jvm//java:defs.bzl", "java_test_suite") java_test_suite( name = "tests", srcs = glob(["*Test.java"]), ) ``` A green arrow should appear in the margin, but doesn't. ### Which Intellij IDE are you using? Please provide the specific version. Build #IU-232.8660.185, built on July 25, 2023 ### What programming languages and tools are you using? Please provide specific versions. Java ### What Bazel plugin version are you using? 2023.07.21.0.1-api-version-232 ### Have you found anything relevant by searching the web? _No response_ ### Any other information, logs, or outputs that you want to share? _No response_
label: True
index: main
binary_label: 1

Unnamed: 0: 293,376
id: 25,288,065,056
type: IssuesEvent
created_at: 2022-11-16 21:08:03
repo: cockroachdb/cockroach
repo_url: https://api.github.com/repos/cockroachdb/cockroach
action: closed
title: roachtest: gorm failed
labels: C-test-failure O-robot O-roachtest T-sql-experience branch-release-22.1
body:
roachtest.gorm [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=7377864&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=7377864&tab=artifacts#/gorm) on release-22.1 @ [22fa382362ff4ec680d82eef28dcbbaa932942ae](https://github.com/cockroachdb/cockroach/commits/22fa382362ff4ec680d82eef28dcbbaa932942ae): ``` The test failed on branch=release-22.1, cloud=gce: test artifacts and logs in: /artifacts/gorm/run_1 orm_helpers.go:193,orm_helpers.go:119,java_helpers.go:220,gorm.go:129,test_runner.go:883: Tests run on Cockroach v22.1.10-75-g22fa382362 Tests run against gorm v1.24.1 1 Total Tests Run 0 tests passed 1 test failed 0 tests skipped 0 tests ignored 0 tests passed unexpectedly 1 test failed unexpectedly 0 tests expected failed but skipped 0 tests expected failed but not run --- --- FAIL: tests.[build failed] - unknown (unexpected) For a full summary look at the gorm artifacts An updated blocklist (gormBlocklist) is available in the artifacts' gorm log ``` <details><summary>Help</summary> <p> See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md) See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7) </p> </details> <details><summary>Same failure on other branches</summary> <p> - #89729 roachtest: gorm failed [C-test-failure O-roachtest O-robot T-sql-experience branch-release-21.2] - #89604 roachtest: gorm failed [C-test-failure O-roachtest O-robot T-sql-experience branch-release-22.2.0] </p> </details> /cc @cockroachdb/sql-experience <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*gorm.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub> Jira issue: CRDB-21277
label: 2.0
index: non_main
binary_label: 0

Unnamed: 0: 2,590
id: 8,815,243,679
type: IssuesEvent
created_at: 2018-12-29 16:08:59
repo: dgets/nightMiner
repo_url: https://api.github.com/repos/dgets/nightMiner
action: opened
title: Determine whether analytics.Offense.scan_for_enemy_shipyards() is working
labels: help wanted maintainability question
body:
Not quite sure, upon code review, what `analytics.Offense.scan_for_enemy_shipyards()` is doing, aside from the scanning for enemy shipyards that it's supposed to. It is also playing with ship destinations, and was definitely horking a few of them due to continuing to process all ships beyond the 4 that were required in order to keep up a blockade around each _other_ player's shipyard. Code needs a walk-through in order to determine where, exactly, this method is being run from. It _may_ be alright now, though the logic, if it _is_ setting ship destination information and the other stuff, should be **separated, double checked, and fixed**.
label: True
index: main
binary_label: 1

Unnamed: 0: 5,106
id: 26,029,451,869
type: IssuesEvent
created_at: 2022-12-21 19:34:15
repo: aws/serverless-application-model
repo_url: https://api.github.com/repos/aws/serverless-application-model
action: closed
title: Api Authorizer must specify at least one identity even authorization caching is disabled
labels: type/bug area/resource/api maintainer/need-followup
body:
<!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed). If you do find an existing Issue, re-open or add a comment to that Issue instead of creating a new one. --> ### Description I want to create a request authorizer which performs authentication if Authorization is existed in header. However, when i try to run sam deploy, it occured an error. **Authorizer must specify Identity with at least one of Headers, QueryStrings, StageVariables, or Context** but the official api docs said (identitySource) > When the authorization caching is not enabled, this property is optional. https://docs.aws.amazon.com/apigateway/api-reference/resource/authorizer/ so I set ReauthorizeEvery to zero so as to disable caching, but it has no effect. ### Steps to reproduce ```.yml Auth: Authorizers: MyRequestAuthorizer: FunctionPayloadType: REQUEST FunctionArn: !GetAtt MyRequestAuthorizerFunction.Arn Identity: ReauthorizeEvery: 0 ``` ### Observed result **Authorizer must specify Identity with at least one of Headers, QueryStrings, StageVariables, or Context** ### Expected result should be able to create a request authorizer with empty identitySource when caching is disabled noted that i can remove all the identitySources under api gateway dashboard. ### Additional environment details (Ex: Windows, Mac, Amazon Linux etc) SAM CLI, version 0.44.0 macos 10.15.3
label: True
index: main
binary_label: 1
5,038
25,840,970,207
IssuesEvent
2022-12-13 00:19:02
ElasticPerch/websocket
https://api.github.com/repos/ElasticPerch/websocket
opened
[feature] Support for proxying websocket through https proxy
enhancement waiting on new maintainer
From websocket created by [philipatl](https://github.com/philipatl): gorilla/websocket#739 **Is your feature request related to a problem? Please describe.** I need to use a proxy server that itself requires TLS (proxy URL has https scheme). **Describe the solution you'd like** I should be able to use a proxy server (eg, via ProxyFromEnvironment) that has an https scheme and uses TLS. Currently, `proxy_RegisterDialerType` only registers the 'http' scheme. This should also be able to use my client `TLSClientConfig` which could be necessary to communicate with my proxy. **Describe alternatives you've considered** I do not see any other alternatives. The workaround suggested in comments below on 12/3/21 is impractical, especially when using `ProxyFromEnvironment` and applying this to other libraries. I should not have to mangle my proxy scheme to get this to work.
True
[feature] Support for proxying websocket through https proxy - From websocket created by [philipatl](https://github.com/philipatl): gorilla/websocket#739 **Is your feature request related to a problem? Please describe.** I need to use a proxy server that itself requires TLS (proxy URL has https scheme). **Describe the solution you'd like** I should be able to use a proxy server (eg, via ProxyFromEnvironment) that has an https scheme and uses TLS. Currently, `proxy_RegisterDialerType` only registers the 'http' scheme. This should also be able to use my client `TLSClientConfig` which could be necessary to communicate with my proxy. **Describe alternatives you've considered** I do not see any other alternatives. The workaround suggested in comments below on 12/3/21 is impractical, especially when using `ProxyFromEnvironment` and applying this to other libraries. I should not have to mangle my proxy scheme to get this to work.
main
support for proxying websocket through https proxy from websocket created by gorilla websocket is your feature request related to a problem please describe i need to use a proxy server that itself requires tls proxy url has https scheme describe the solution you d like i should be able to use a proxy server eg via proxyfromenvironment that has an https scheme and uses tls currently proxy registerdialertype only registers the http scheme this should also be able to use my client tlsclientconfig which could be necessary to communicate with my proxy describe alternatives you ve considered i do not see any other alternatives the workaround suggested in comments below on is impractical especially when using proxyfromenvironment and applying this to other libraries i should not have to mangle my proxy scheme to get this to work
1
371,041
10,960,732,061
IssuesEvent
2019-11-27 14:08:36
CBHSQ/findtreatment
https://api.github.com/repos/CBHSQ/findtreatment
closed
Remove feedback form from interface
high-priority
As SAMHSA will not have access to remove the Google feedback form after 11/30, we should remove the form from the FindTreatment.gov and findtreatment.samhsa.gov interfaces until another solution to solicit feedback from visitors is set up. **FindTreatment.gov** - [x] Remove link from footer - [x] Remove beta banner entirely (which includes link) **findtreatment.samhsa.gov** - [x] Remove from samhsa.gov interfaces (@BrooklynHopkins - I think you have a list of where these are on the various pages)
1.0
Remove feedback form from interface - As SAMHSA will not have access to remove the Google feedback form after 11/30, we should remove the form from the FindTreatment.gov and findtreatment.samhsa.gov interfaces until another solution to solicit feedback from visitors is set up. **FindTreatment.gov** - [x] Remove link from footer - [x] Remove beta banner entirely (which includes link) **findtreatment.samhsa.gov** - [x] Remove from samhsa.gov interfaces (@BrooklynHopkins - I think you have a list of where these are on the various pages)
non_main
remove feedback form from interface as samhsa will not have access to remove the google feedback form after we should remove the form from the findtreatment gov and findtreatment samhsa gov interfaces until another solution to solicit feedback from visitors is set up findtreatment gov remove link from footer remove beta banner entirely which includes link findtreatment samhsa gov remove from samhsa gov interfaces brooklynhopkins i think you have a list of where these are on the various pages
0
145,820
13,162,007,523
IssuesEvent
2020-08-10 20:41:26
Visual-Regression-Tracker/Visual-Regression-Tracker
https://api.github.com/repos/Visual-Regression-Tracker/Visual-Regression-Tracker
closed
In documentation it says "diffTollerancePercent" was 1
bug documentation good first issue
Hey guys, your documentation says `diffTollerancePercent` default value was 1, when it is actually 0. This is stated wrong in several places (e.g.: [cypress agent docu](https://www.npmjs.com/package/@visual-regression-tracker/agent-cypress), [js sdk](https://www.npmjs.com/package/@visual-regression-tracker/sdk-js)) No big deal, but I was confused ;)
1.0
In documentation it says "diffTollerancePercent" was 1 - Hey guys, your documentation says `diffTollerancePercent` default value was 1, when it is actually 0. This is stated wrong in several places (e.g.: [cypress agent docu](https://www.npmjs.com/package/@visual-regression-tracker/agent-cypress), [js sdk](https://www.npmjs.com/package/@visual-regression-tracker/sdk-js)) No big deal, but I was confused ;)
non_main
in documentation it says difftollerancepercent was hey guys your documentation says difftollerancepercent default value was when it is actually this is stated wrong in several places e g no big deal but i was confused
0
1,577
6,572,341,634
IssuesEvent
2017-09-11 01:32:49
ansible/ansible-modules-extras
https://api.github.com/repos/ansible/ansible-modules-extras
closed
container_definition hostPort and containerPort types
affects_2.2 aws bug_report cloud waiting_on_maintainer
<!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ecs_taskdefinition.py ##### ANSIBLE VERSION ansible 2.2.0 ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ##### OS / ENVIRONMENT CentOS Linux release 7.2.1511 (Core) ##### SUMMARY Error calling module througu playbook, when params containerPort and/or hostPort are passed as variable: Invalid type for parameter containerDefinitions[0].portMappings[0].containerPort, value: 80, type: <type 'str'>, valid types: <type 'int'>, <type 'long'> Invalid type for parameter containerDefinitions[0].portMappings[0].hostPort, value: 80, type: <type 'str'>, valid types: <type 'int'>, <type 'long'> ##### STEPS TO REPRODUCE I am passing the described params from iterating a dict in playbook such as: aplicaciones: swift: id: 'swift' http_port: '{{80|int}}' count: '{{2|int}}' then, I get the values as follows: containers: - name: "{{item.value.id}}" image: '{{id_aws}}.dkr.ecr.{{region_geografica}}.amazonaws.com/{{proyecto}}:{{item.value.id}}' cpu: 10 essential: true memory: 250 portMappings: - containerPort: "{{item.value.http_port|int}}" hostPort: "{{item.value.http_port|int}}" with_dict: "{{aplicaciones}}" I tried to convert the values from string, as can be seen in the above code, but it seems that module doesn't convert this variables or anything. I don't get the error when I specify the ports directly in task_definition: ``` portMappings: - containerPort: 80 hostPort: 80 ``` Running playbook keeping TMP files, and modifying this in ANSIBALLZ_PARAMS: "portMappings": [{"containerPort": "80", "hostPort": "80"}], "memory": 250, "essential": true}]}} to (without double quotes): "portMappings": [{"containerPort": 80, "hostPort": 80}], "memory": 250, "essential": true}]}} makes process run OK. 
##### EXPECTED RESULTS Correct creation of task_definition in Amazon ECS. ##### ACTUAL RESULTS Invalid type for parameter containerDefinitions[0].portMappings[0].containerPort, value: 80, type: <type 'str'>, valid types: <type 'int'>, <type 'long'> Invalid type for parameter containerDefinitions[0].portMappings[0].hostPort, value: 80, type: <type 'str'>, valid types: <type 'int'>, <type 'long'>
True
container_definition hostPort and containerPort types - <!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ecs_taskdefinition.py ##### ANSIBLE VERSION ansible 2.2.0 ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ##### OS / ENVIRONMENT CentOS Linux release 7.2.1511 (Core) ##### SUMMARY Error calling module througu playbook, when params containerPort and/or hostPort are passed as variable: Invalid type for parameter containerDefinitions[0].portMappings[0].containerPort, value: 80, type: <type 'str'>, valid types: <type 'int'>, <type 'long'> Invalid type for parameter containerDefinitions[0].portMappings[0].hostPort, value: 80, type: <type 'str'>, valid types: <type 'int'>, <type 'long'> ##### STEPS TO REPRODUCE I am passing the described params from iterating a dict in playbook such as: aplicaciones: swift: id: 'swift' http_port: '{{80|int}}' count: '{{2|int}}' then, I get the values as follows: containers: - name: "{{item.value.id}}" image: '{{id_aws}}.dkr.ecr.{{region_geografica}}.amazonaws.com/{{proyecto}}:{{item.value.id}}' cpu: 10 essential: true memory: 250 portMappings: - containerPort: "{{item.value.http_port|int}}" hostPort: "{{item.value.http_port|int}}" with_dict: "{{aplicaciones}}" I tried to convert the values from string, as can be seen in the above code, but it seems that module doesn't convert this variables or anything. I don't get the error when I specify the ports directly in task_definition: ``` portMappings: - containerPort: 80 hostPort: 80 ``` Running playbook keeping TMP files, and modifying this in ANSIBALLZ_PARAMS: "portMappings": [{"containerPort": "80", "hostPort": "80"}], "memory": 250, "essential": true}]}} to (without double quotes): "portMappings": [{"containerPort": 80, "hostPort": 80}], "memory": 250, "essential": true}]}} makes process run OK. 
##### EXPECTED RESULTS Correct creation of task_definition in Amazon ECS. ##### ACTUAL RESULTS Invalid type for parameter containerDefinitions[0].portMappings[0].containerPort, value: 80, type: <type 'str'>, valid types: <type 'int'>, <type 'long'> Invalid type for parameter containerDefinitions[0].portMappings[0].hostPort, value: 80, type: <type 'str'>, valid types: <type 'int'>, <type 'long'>
main
container definition hostport and containerport types issue type bug report component name ecs taskdefinition py ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment centos linux release core summary error calling module througu playbook when params containerport and or hostport are passed as variable invalid type for parameter containerdefinitions portmappings containerport value type valid types invalid type for parameter containerdefinitions portmappings hostport value type valid types steps to reproduce i am passing the described params from iterating a dict in playbook such as aplicaciones swift id swift http port int count int then i get the values as follows containers name item value id image id aws dkr ecr region geografica amazonaws com proyecto item value id cpu essential true memory portmappings containerport item value http port int hostport item value http port int with dict aplicaciones i tried to convert the values from string as can be seen in the above code but it seems that module doesn t convert this variables or anything i don t get the error when i specify the ports directly in task definition portmappings containerport hostport running playbook keeping tmp files and modifying this in ansiballz params portmappings memory essential true to without double quotes portmappings memory essential true makes process run ok expected results correct creation of task definition in amazon ecs actual results invalid type for parameter containerdefinitions portmappings containerport value type valid types invalid type for parameter containerdefinitions portmappings hostport value type valid types
1
343,879
24,789,217,565
IssuesEvent
2022-10-24 12:30:35
personryan/ICT-2101-P4-1
https://api.github.com/repos/personryan/ICT-2101-P4-1
closed
4.1.1 Update Use Case Diagram
documentation enhancement 50%
Use Case Diagram to be updated per M1 Review @Uygnis Deadline: 24/10
1.0
4.1.1 Update Use Case Diagram - Use Case Diagram to be updated per M1 Review @Uygnis Deadline: 24/10
non_main
update use case diagram use case diagram to be updated per review uygnis deadline
0
4,094
19,322,057,595
IssuesEvent
2021-12-14 07:15:03
WarenGonzaga/daisy.js
https://api.github.com/repos/WarenGonzaga/daisy.js
opened
re-initialize the project
chore maintainers only todo tweak
Its been a long time haven't touched the development of this simple library. I'm gonna put the project in maintenance mode at the moment. No more new features at the moment. 😅 I need to update the project. Also, if you have any suggestions, feedback, or an idea. Please let me know by [posting it here](https://github.com/WarenGonzaga/daisy.js/discussions/categories/brainstorm).
True
re-initialize the project - Its been a long time haven't touched the development of this simple library. I'm gonna put the project in maintenance mode at the moment. No more new features at the moment. 😅 I need to update the project. Also, if you have any suggestions, feedback, or an idea. Please let me know by [posting it here](https://github.com/WarenGonzaga/daisy.js/discussions/categories/brainstorm).
main
re initialize the project its been a long time haven t touched the development of this simple library i m gonna put the project in maintenance mode at the moment no more new features at the moment 😅 i need to update the project also if you have any suggestions feedback or an idea please let me know by
1
3,257
12,405,860,426
IssuesEvent
2020-05-21 18:03:21
darekkay/dashboard
https://api.github.com/repos/darekkay/dashboard
closed
Replace enzyme with react-testing-library
Type: Maintainance
[React Testing Library](https://testing-library.com/docs/react-testing-library/intro) encourages better, user-centered tests instead of the [Enzyme](https://github.com/enzymejs/enzyme) way to test implementation details. I've introduced a few tests using react-testing-library already, especially for custom hooks (which Enzyme doesn't handle well). ## Subtasks - [x] Rewrite all enzyme tests with react-testing-library - [x] Remove enzyme - [x] Improve test coverage by writing more (useful) tests - [x] Include [eslint-plugin-testing-library](https://github.com/testing-library/eslint-plugin-testing-library) - [x] Fix all [act warnings](https://kentcdodds.com/blog/fix-the-not-wrapped-in-act-warning) (probably solved automatically by removing enzyme tests) ## Resources: - [Spectrum Community](https://spectrum.chat/testing-library/help-react?tab=posts) - [9 React Testing Library tips and tricks](https://medium.com/better-programming/9-react-testing-library-tips-and-tricks-5cce3e458282)
True
Replace enzyme with react-testing-library - [React Testing Library](https://testing-library.com/docs/react-testing-library/intro) encourages better, user-centered tests instead of the [Enzyme](https://github.com/enzymejs/enzyme) way to test implementation details. I've introduced a few tests using react-testing-library already, especially for custom hooks (which Enzyme doesn't handle well). ## Subtasks - [x] Rewrite all enzyme tests with react-testing-library - [x] Remove enzyme - [x] Improve test coverage by writing more (useful) tests - [x] Include [eslint-plugin-testing-library](https://github.com/testing-library/eslint-plugin-testing-library) - [x] Fix all [act warnings](https://kentcdodds.com/blog/fix-the-not-wrapped-in-act-warning) (probably solved automatically by removing enzyme tests) ## Resources: - [Spectrum Community](https://spectrum.chat/testing-library/help-react?tab=posts) - [9 React Testing Library tips and tricks](https://medium.com/better-programming/9-react-testing-library-tips-and-tricks-5cce3e458282)
main
replace enzyme with react testing library encourages better user centered tests instead of the way to test implementation details i ve introduced a few tests using react testing library already especially for custom hooks which enzyme doesn t handle well subtasks rewrite all enzyme tests with react testing library remove enzyme improve test coverage by writing more useful tests include fix all probably solved automatically by removing enzyme tests resources
1
1,652
6,572,680,204
IssuesEvent
2017-09-11 04:21:26
ansible/ansible-modules-extras
https://api.github.com/repos/ansible/ansible-modules-extras
closed
Getting value from consul_kv not working when using 'state=acquire'
affects_2.2 bug_report waiting_on_maintainer
##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME consul_kv ##### ANSIBLE VERSION ``` 2.2.1.0 ``` ##### OS / ENVIRONMENT <!--- Linux Mint 18 Installed ansible from source tag 2.2.1.0-0.1.rc1 --> ##### SUMMARY I should be able to retrieve dictionary values from consul. ##### STEPS TO REPRODUCE ``` - name: GET KV consul_kv: key: foo state: acquire register: mydict ``` ##### EXPECTED RESULTS I should be able to extract the Value from `{{ mydict }}` ##### ACTUAL RESULTS ``` fatal: [xx.xx.xx.xx]: FAILED! => { "changed": false, "failed": true, "invocation": { "module_args": { "cas": null, "flags": null, "host": "localhost", "key": "foo", "port": 8500, "recurse": null, "retrieve": true, "scheme": "http", "session": null, "state": "acquire", "token": null, "validate_certs": true, "value": null }, "module_name": "consul_kv" }, "msg": "'AnsibleModule' object has no attribute 'fail'" } ``` Not shown in the documentation is the `retreive` flag but that is only used when putting a key into consul. I have other sources that add kv's to consul and want Ansible to retrieve those during a playbook. To me https://github.com/ansible/ansible-modules-extras/blob/stable-2.2/clustering/consul_kv.py#L157 `state == 'release'` should be it's own statement and `if state == 'acquire'` should use it's own method to simply get the KV.
True
Getting value from consul_kv not working when using 'state=acquire' - ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME consul_kv ##### ANSIBLE VERSION ``` 2.2.1.0 ``` ##### OS / ENVIRONMENT <!--- Linux Mint 18 Installed ansible from source tag 2.2.1.0-0.1.rc1 --> ##### SUMMARY I should be able to retrieve dictionary values from consul. ##### STEPS TO REPRODUCE ``` - name: GET KV consul_kv: key: foo state: acquire register: mydict ``` ##### EXPECTED RESULTS I should be able to extract the Value from `{{ mydict }}` ##### ACTUAL RESULTS ``` fatal: [xx.xx.xx.xx]: FAILED! => { "changed": false, "failed": true, "invocation": { "module_args": { "cas": null, "flags": null, "host": "localhost", "key": "foo", "port": 8500, "recurse": null, "retrieve": true, "scheme": "http", "session": null, "state": "acquire", "token": null, "validate_certs": true, "value": null }, "module_name": "consul_kv" }, "msg": "'AnsibleModule' object has no attribute 'fail'" } ``` Not shown in the documentation is the `retreive` flag but that is only used when putting a key into consul. I have other sources that add kv's to consul and want Ansible to retrieve those during a playbook. To me https://github.com/ansible/ansible-modules-extras/blob/stable-2.2/clustering/consul_kv.py#L157 `state == 'release'` should be it's own statement and `if state == 'acquire'` should use it's own method to simply get the KV.
main
getting value from consul kv not working when using state acquire issue type bug report component name consul kv ansible version os environment linux mint installed ansible from source tag summary i should be able to retrieve dictionary values from consul steps to reproduce name get kv consul kv key foo state acquire register mydict expected results i should be able to extract the value from mydict actual results fatal failed changed false failed true invocation module args cas null flags null host localhost key foo port recurse null retrieve true scheme http session null state acquire token null validate certs true value null module name consul kv msg ansiblemodule object has no attribute fail not shown in the documentation is the retreive flag but that is only used when putting a key into consul i have other sources that add kv s to consul and want ansible to retrieve those during a playbook to me state release should be it s own statement and if state acquire should use it s own method to simply get the kv
1
2,705
9,528,545,885
IssuesEvent
2019-04-29 08:45:39
cucumber/aruba
https://api.github.com/repos/cucumber/aruba
closed
Reducing running platforms in Travis
difficulty: medium internal needs feedback by community needs feedback by maintainer
## Summary Considering supported platforms. ## Current Behavior Right now aruba supports below platforms. 14 VMs (Virtual Machines) are running. https://github.com/cucumber/aruba/blob/master/.travis.yml Total running time is over than 1 hour. https://travis-ci.org/cucumber/aruba/builds/258426774 > Ran for 1 hr 4 min 4 sec I think that this situation is what we have to improve. ## Possible Solution Cucumber supports below platforms. https://github.com/cucumber/cucumber-ruby/blob/master/.travis.yml I guess that we can remove below 7 cases? ``` - 1.9.3 - 2.0.0 - jruby - jruby-20mode - jruby-21mode ``` ``` - rvm: jruby-9.1.12.0-20mode env: JRUBY_OPTS='--dev' - rvm: jruby-9.1.12.0-21mode env: JRUBY_OPTS='--dev' ``` ## Context & Motivation Reducing the number of VMs, saving the resource improves the total running time. I have experienced similar situation in another project. After reducing the number of VMs, the running time has been improved.
True
Reducing running platforms in Travis - ## Summary Considering supported platforms. ## Current Behavior Right now aruba supports below platforms. 14 VMs (Virtual Machines) are running. https://github.com/cucumber/aruba/blob/master/.travis.yml Total running time is over than 1 hour. https://travis-ci.org/cucumber/aruba/builds/258426774 > Ran for 1 hr 4 min 4 sec I think that this situation is what we have to improve. ## Possible Solution Cucumber supports below platforms. https://github.com/cucumber/cucumber-ruby/blob/master/.travis.yml I guess that we can remove below 7 cases? ``` - 1.9.3 - 2.0.0 - jruby - jruby-20mode - jruby-21mode ``` ``` - rvm: jruby-9.1.12.0-20mode env: JRUBY_OPTS='--dev' - rvm: jruby-9.1.12.0-21mode env: JRUBY_OPTS='--dev' ``` ## Context & Motivation Reducing the number of VMs, saving the resource improves the total running time. I have experienced similar situation in another project. After reducing the number of VMs, the running time has been improved.
main
reducing running platforms in travis summary considering supported platforms current behavior right now aruba supports below platforms vms virtual machines are running total running time is over than hour ran for hr min sec i think that this situation is what we have to improve possible solution cucumber supports below platforms i guess that we can remove below cases jruby jruby jruby rvm jruby env jruby opts dev rvm jruby env jruby opts dev context motivation reducing the number of vms saving the resource improves the total running time i have experienced similar situation in another project after reducing the number of vms the running time has been improved
1
490
3,777,994,300
IssuesEvent
2016-03-17 22:09:40
MDAnalysis/mdanalysis
https://api.github.com/repos/MDAnalysis/mdanalysis
opened
change_release.sh doesn't remove `-dev0`
Difficulty-easy maintainability
### Expected behaviour `maintainer/change_release 0.15.0` This should remove the `-dev0` ending in all the version strings we have. ### Actual behaviour Only the numbers are replaced. I'm not completely understanding the SED regexes that we are using here. The solution would either be to replace the SED regexes or to rewrite the script in python
True
change_release.sh doesn't remove `-dev0` - ### Expected behaviour `maintainer/change_release 0.15.0` This should remove the `-dev0` ending in all the version strings we have. ### Actual behaviour Only the numbers are replaced. I'm not completely understanding the SED regexes that we are using here. The solution would either be to replace the SED regexes or to rewrite the script in python
main
change release sh doesn t remove expected behaviour maintainer change release this should remove the ending in all the version strings we have actual behaviour only the numbers are replaced i m not completely understanding the sed regexes that we are using here the solution would either be to replace the sed regexes or to rewrite the script in python
1
279,427
21,160,121,283
IssuesEvent
2022-04-07 08:36:44
iotaledger/iota.rs
https://api.github.com/repos/iotaledger/iota.rs
closed
[Task]: Examples for android
scope:java priority:2 type:documentation
## Description Add example(s) / tutorial(s) for android platform /cc @kwek20
1.0
[Task]: Examples for android - ## Description Add example(s) / tutorial(s) for android platform /cc @kwek20
non_main
examples for android description add example s tutorial s for android platform cc
0
971
4,710,755,948
IssuesEvent
2016-10-14 11:22:49
simplesamlphp/simplesamlphp
https://api.github.com/repos/simplesamlphp/simplesamlphp
closed
Make a script to convert dictionaries to locales
maintainability
- [x] Combine root domains automatically, into one domain - [x] Combine module domains automatically, one domain per module - [x] Tag is in fully namespaced form: `{foo(:bar):xux}` Root and module-domains should be handled differently.
True
Make a script to convert dictionaries to locales - - [x] Combine root domains automatically, into one domain - [x] Combine module domains automatically, one domain per module - [x] Tag is in fully namespaced form: `{foo(:bar):xux}` Root and module-domains should be handled differently.
main
make a script to convert dictionaries to locales combine root domains automatically into one domain combine module domains automatically one domain per module tag is in fully namespaced form foo bar xux root and module domains should be handled differently
1
1,177
5,096,332,342
IssuesEvent
2017-01-03 17:51:38
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
pip module: logging option ?
affects_2.1 feature_idea waiting_on_maintainer
##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME pip module ##### ANSIBLE VERSION ``` $ ansible --version ansible 2.1.2.0 config file = /opt/tmp/vagrant/homelab/ansible.cfg configured module search path = Default w/o overrides ``` ##### OS / ENVIRONMENT Orchestrator: Linux Ubuntu Trusty or Xenial Guest: various ##### SUMMARY when you do system package, there is a log file with activity history. python pip has a logging option https://pip.pypa.io/en/stable/reference/pip/#file-logging pip module should make it available. As a note, this remark is valid for gem too but I didn't find a native option. example ``` - pip: name=bottle version=0.11 log=/var/log/pip.log ``` should result in command ``` pip install bottle==0.11 --log /var/log/pip.log ``` Thanks
True
pip module: logging option ? - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME pip module ##### ANSIBLE VERSION ``` $ ansible --version ansible 2.1.2.0 config file = /opt/tmp/vagrant/homelab/ansible.cfg configured module search path = Default w/o overrides ``` ##### OS / ENVIRONMENT Orchestrator: Linux Ubuntu Trusty or Xenial Guest: various ##### SUMMARY when you do system package, there is a log file with activity history. python pip has a logging option https://pip.pypa.io/en/stable/reference/pip/#file-logging pip module should make it available. As a note, this remark is valid for gem too but I didn't find a native option. example ``` - pip: name=bottle version=0.11 log=/var/log/pip.log ``` should result in command ``` pip install bottle==0.11 --log /var/log/pip.log ``` Thanks
main
pip module logging option issue type feature idea component name pip module ansible version ansible version ansible config file opt tmp vagrant homelab ansible cfg configured module search path default w o overrides os environment orchestrator linux ubuntu trusty or xenial guest various summary when you do system package there is a log file with activity history python pip has a logging option pip module should make it available as a note this remark is valid for gem too but i didn t find a native option example pip name bottle version log var log pip log should result in command pip install bottle log var log pip log thanks
1
3,258
12,407,419,556
IssuesEvent
2020-05-21 21:03:15
ipfs/dir-index-html
https://api.github.com/repos/ipfs/dir-index-html
reopened
Preprocessing: Automate the css bundling and injection into dir-index-uncat.html
help wanted need/maintainers-input
with gulp-inject / gulp-usemin / gulp-useref
True
Preprocessing: Automate the css bundling and injection into dir-index-uncat.html - with gulp-inject / gulp-usemin / gulp-useref
main
preprocessing automate the css bundling and injection into dir index uncat html with gulp inject gulp usemin gulp useref
1
1,954
6,671,199,209
IssuesEvent
2017-10-04 05:44:16
caskroom/homebrew-cask
https://api.github.com/repos/caskroom/homebrew-cask
closed
cask uninstall google-photos-backup fails, breaks other parts of cask
awaiting maintainer feedback
#### General troubleshooting steps - [X] I have checked the instructions for [reporting bugs](https://github.com/caskroom/homebrew-cask#reporting-bugs) (or [making requests](https://github.com/caskroom/homebrew-cask#requests)) before opening the issue. - [X] None of the templates was appropriate for my issue, or I’m not sure. - [X] I ran `brew update-reset && brew update` and retried my command. - [X] I ran `brew doctor`, fixed as many issues as possible and retried my command. - [X] I understand that [if I ignore these instructions, my issue may be closed without review](https://github.com/caskroom/homebrew-cask/blob/master/doc/faq/closing_issues_without_review.md). #### Description of issue ``` brew cask uninstall google-photos-backup Error: Cask 'google-photos-backup' is unavailable: No Cask with this name exists. ``` As Google moved to `Photos and Backup Sync` and away from separate packages I am left with a "stub" cask that I can't uninstall and is breaking other things: ``` benc$ brew cask list Error: Cask 'google-photos-backup' is unavailable: No Cask with this name exists. ``` homebrew-core added a search function to `.git` to be able to remove old poackage or add to your own repo. Possibly need to `fork` that into caskroom. Or provide a `google-photos-backup` stub cask with no install and a *GIANT* caveat about the migration, to facilitate an upgrade path to `google-photos-backup-and-sync` or at least a means to uninstall `google-photos-backup` as it's causing me all kinds of headaches, which means it's likely to cause problems for users as well. #### Output of your command with `--verbose --debug` ``` benc$ brew cask --verbose --debug uninstall google-photos-backup Error: Cask 'google-photos-backup' is unavailable: No Cask with this name exists. Did you mean “google-photos-backup-and-sync”? 
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/abstract_command.rb:53:in `rescue in casks' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/abstract_command.rb:48:in `casks' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/uninstall.rb:12:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/abstract_command.rb:35:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:97:in `run_command' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:167:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:131:in `run' /usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask' /usr/local/Homebrew/Library/Homebrew/brew.rb:99:in `<main>' Error: Kernel.exit /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:172:in `exit' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:172:in `rescue in run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:155:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:131:in `run' /usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask' /usr/local/Homebrew/Library/Homebrew/brew.rb:99:in `<main>' ``` #### Output of `brew cask doctor` ``` benc$ brew doctor Please note that these warnings are just used to help the Homebrew maintainers with debugging if you file an issue. If everything you use Homebrew for is working fine: please don't worry and just ignore them. Thanks! Warning: Python is installed at /Library/Frameworks/Python.framework Homebrew only supports building against the System-provided Python or a brewed Python. In particular, Pythons installed to /Library can interfere with other software installs. Warning: You have unlinked kegs in your Cellar Leaving kegs unlinked can lead to build-trouble and cause brews that depend on those kegs to fail to run properly once built. Run `brew link` on these: docker docker-compose Warning: Some installed formula are missing dependencies. 
You should `brew install` the missing dependencies: brew install mysql Run `brew missing` for more details. ```
True
cask uninstall google-photos-backup fails, breaks other parts of cask - #### General troubleshooting steps - [X] I have checked the instructions for [reporting bugs](https://github.com/caskroom/homebrew-cask#reporting-bugs) (or [making requests](https://github.com/caskroom/homebrew-cask#requests)) before opening the issue. - [X] None of the templates was appropriate for my issue, or I’m not sure. - [X] I ran `brew update-reset && brew update` and retried my command. - [X] I ran `brew doctor`, fixed as many issues as possible and retried my command. - [X] I understand that [if I ignore these instructions, my issue may be closed without review](https://github.com/caskroom/homebrew-cask/blob/master/doc/faq/closing_issues_without_review.md). #### Description of issue ``` brew cask uninstall google-photos-backup Error: Cask 'google-photos-backup' is unavailable: No Cask with this name exists. ``` As Google moved to `Photos and Backup Sync` and away from separate packages I am left with a "stub" cask that I can't uninstall and is breaking other things: ``` benc$ brew cask list Error: Cask 'google-photos-backup' is unavailable: No Cask with this name exists. ``` homebrew-core added a search function to `.git` to be able to remove old poackage or add to your own repo. Possibly need to `fork` that into caskroom. Or provide a `google-photos-backup` stub cask with no install and a *GIANT* caveat about the migration, to facilitate an upgrade path to `google-photos-backup-and-sync` or at least a means to uninstall `google-photos-backup` as it's causing me all kinds of headaches, which means it's likely to cause problems for users as well. #### Output of your command with `--verbose --debug` ``` benc$ brew cask --verbose --debug uninstall google-photos-backup Error: Cask 'google-photos-backup' is unavailable: No Cask with this name exists. Did you mean “google-photos-backup-and-sync”? 
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/abstract_command.rb:53:in `rescue in casks' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/abstract_command.rb:48:in `casks' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/uninstall.rb:12:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/abstract_command.rb:35:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:97:in `run_command' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:167:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:131:in `run' /usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask' /usr/local/Homebrew/Library/Homebrew/brew.rb:99:in `<main>' Error: Kernel.exit /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:172:in `exit' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:172:in `rescue in run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:155:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:131:in `run' /usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask' /usr/local/Homebrew/Library/Homebrew/brew.rb:99:in `<main>' ``` #### Output of `brew cask doctor` ``` benc$ brew doctor Please note that these warnings are just used to help the Homebrew maintainers with debugging if you file an issue. If everything you use Homebrew for is working fine: please don't worry and just ignore them. Thanks! Warning: Python is installed at /Library/Frameworks/Python.framework Homebrew only supports building against the System-provided Python or a brewed Python. In particular, Pythons installed to /Library can interfere with other software installs. Warning: You have unlinked kegs in your Cellar Leaving kegs unlinked can lead to build-trouble and cause brews that depend on those kegs to fail to run properly once built. Run `brew link` on these: docker docker-compose Warning: Some installed formula are missing dependencies. 
You should `brew install` the missing dependencies: brew install mysql Run `brew missing` for more details. ```
main
cask uninstall google photos backup fails breaks other parts of cask general troubleshooting steps i have checked the instructions for or before opening the issue none of the templates was appropriate for my issue or i’m not sure i ran brew update reset brew update and retried my command i ran brew doctor fixed as many issues as possible and retried my command i understand that description of issue brew cask uninstall google photos backup error cask google photos backup is unavailable no cask with this name exists as google moved to photos and backup sync and away from separate packages i am left with a stub cask that i can t uninstall and is breaking other things benc brew cask list error cask google photos backup is unavailable no cask with this name exists homebrew core added a search function to git to be able to remove old poackage or add to your own repo possibly need to fork that into caskroom or provide a google photos backup stub cask with no install and a giant caveat about the migration to facilitate an upgrade path to google photos backup and sync or at least a means to uninstall google photos backup as it s causing me all kinds of headaches which means it s likely to cause problems for users as well output of your command with verbose debug benc brew cask verbose debug uninstall google photos backup error cask google photos backup is unavailable no cask with this name exists did you mean “google photos backup and sync” usr local homebrew library homebrew cask lib hbc cli abstract command rb in rescue in casks usr local homebrew library homebrew cask lib hbc cli abstract command rb in casks usr local homebrew library homebrew cask lib hbc cli uninstall rb in run usr local homebrew library homebrew cask lib hbc cli abstract command rb in run usr local homebrew library homebrew cask lib hbc cli rb in run command usr local homebrew library homebrew cask lib hbc cli rb in run usr local homebrew library homebrew cask lib hbc cli rb in run usr local homebrew 
library homebrew cmd cask rb in cask usr local homebrew library homebrew brew rb in error kernel exit usr local homebrew library homebrew cask lib hbc cli rb in exit usr local homebrew library homebrew cask lib hbc cli rb in rescue in run usr local homebrew library homebrew cask lib hbc cli rb in run usr local homebrew library homebrew cask lib hbc cli rb in run usr local homebrew library homebrew cmd cask rb in cask usr local homebrew library homebrew brew rb in output of brew cask doctor benc brew doctor please note that these warnings are just used to help the homebrew maintainers with debugging if you file an issue if everything you use homebrew for is working fine please don t worry and just ignore them thanks warning python is installed at library frameworks python framework homebrew only supports building against the system provided python or a brewed python in particular pythons installed to library can interfere with other software installs warning you have unlinked kegs in your cellar leaving kegs unlinked can lead to build trouble and cause brews that depend on those kegs to fail to run properly once built run brew link on these docker docker compose warning some installed formula are missing dependencies you should brew install the missing dependencies brew install mysql run brew missing for more details
1
2,465
8,639,902,126
IssuesEvent
2018-11-23 22:32:19
F5OEO/rpitx
https://api.github.com/repos/F5OEO/rpitx
closed
Raspberry Pi 3 support
V1 related (not maintained)
This is more of a question than an issue - do you think `rpitx` should work without modifications on new Raspberry Pi 3? Or if not, are you planning to add support for it? CPU in RPi3 runs on higher frequency, so maybe rpitx should work better for transmitting on higher frequencies too?
True
Raspberry Pi 3 support - This is more of a question than an issue - do you think `rpitx` should work without modifications on new Raspberry Pi 3? Or if not, are you planning to add support for it? CPU in RPi3 runs on higher frequency, so maybe rpitx should work better for transmitting on higher frequencies too?
main
raspberry pi support this is more of a question than an issue do you think rpitx should work without modifications on new raspberry pi or if not are you planning to add support for it cpu in runs on higher frequency so maybe rpitx should work better for transmitting on higher frequencies too
1
4,676
3,066,620,993
IssuesEvent
2015-08-18 03:36:39
codeforamerica/communities
https://api.github.com/repos/codeforamerica/communities
opened
Code for All Oakland Summit travel
code for all
- [ ] Australia flights - [ ] Poland flights - [ ] check with Netherlands - [ ] check with Japan - [ ] rooming list
1.0
Code for All Oakland Summit travel - - [ ] Australia flights - [ ] Poland flights - [ ] check with Netherlands - [ ] check with Japan - [ ] rooming list
non_main
code for all oakland summit travel australia flights poland flights check with netherlands check with japan rooming list
0
5,721
30,249,262,233
IssuesEvent
2023-07-06 19:04:48
carbon-design-system/carbon
https://api.github.com/repos/carbon-design-system/carbon
closed
[Question]: SideNav not closing when clicking links
type: question ❓ status: needs triage 🕵️‍♀️ status: waiting for maintainer response 💬 status: needs reproduction
### Question for Carbon **Package** @carbon/react **Browser** Chrome **Package version** @carbon/react: 1.19.0 **React version** 18.2.0 Description As pointed out in [#3666](https://github.com/carbon-design-system/carbon/issues/3666) the sideNav menu is not collapsing when clicking a link element. The issue to close the menu by clicking anywhere on the overlay was solved in [#8296](https://github.com/carbon-design-system/carbon/pull/8296) but can't find any way to achieve the same behaviour as in the https://carbondesignsystem.com/ and automatically close the side menu. Has this issue been solved? `const MainHeader = () => { return ( <HeaderContainer render={({ isSideNavExpanded, onClickSideNavExpand }) => ( <Header aria-label="Header navigation"> <SkipToContent /> <HeaderMenuButton aria-label={isSideNavExpanded ? "Close menu" : "Open menu"} onClick={onClickSideNavExpand} isActive={isSideNavExpanded} /> ... <SideNav aria-label="Side navigation" expanded={isSideNavExpanded} isPersistent={false} onOverlayClick={onClickSideNavExpand} > <SideNavItems> <HeaderSideNavItems> <HeaderMenuItem element={NavLink} to={`/link1}> Link1 </HeaderMenuItem>...` ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
True
[Question]: SideNav not closing when clicking links - ### Question for Carbon **Package** @carbon/react **Browser** Chrome **Package version** @carbon/react: 1.19.0 **React version** 18.2.0 Description As pointed out in [#3666](https://github.com/carbon-design-system/carbon/issues/3666) the sideNav menu is not collapsing when clicking a link element. The issue to close the menu by clicking anywhere on the overlay was solved in [#8296](https://github.com/carbon-design-system/carbon/pull/8296) but can't find any way to achieve the same behaviour as in the https://carbondesignsystem.com/ and automatically close the side menu. Has this issue been solved? `const MainHeader = () => { return ( <HeaderContainer render={({ isSideNavExpanded, onClickSideNavExpand }) => ( <Header aria-label="Header navigation"> <SkipToContent /> <HeaderMenuButton aria-label={isSideNavExpanded ? "Close menu" : "Open menu"} onClick={onClickSideNavExpand} isActive={isSideNavExpanded} /> ... <SideNav aria-label="Side navigation" expanded={isSideNavExpanded} isPersistent={false} onOverlayClick={onClickSideNavExpand} > <SideNavItems> <HeaderSideNavItems> <HeaderMenuItem element={NavLink} to={`/link1}> Link1 </HeaderMenuItem>...` ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
main
sidenav not closing when clicking links question for carbon package carbon react browser chrome package version carbon react react version description as pointed out in the sidenav menu is not collapsing when clicking a link element the issue to close the menu by clicking anywhere on the overlay was solved in but can t find any way to achieve the same behaviour as in the and automatically close the side menu has this issue been solved const mainheader return headercontainer render issidenavexpanded onclicksidenavexpand headermenubutton aria label issidenavexpanded close menu open menu onclick onclicksidenavexpand isactive issidenavexpanded sidenav aria label side navigation expanded issidenavexpanded ispersistent false onoverlayclick onclicksidenavexpand code of conduct i agree to follow this project s
1
5,043
25,844,179,176
IssuesEvent
2022-12-13 04:22:41
centerofci/mathesar
https://api.github.com/repos/centerofci/mathesar
closed
Partial data loss when attempting to unset scale type_options independently of precision
type: bug work: backend status: ready restricted: maintainers
## Reproduce 1. Set up a Number column with "Decimal Places" = `2` and "Max Digits" = `100`. 1. Add row with a value of `1.23`. 1. Now imagine we'd like to also store the value `6.789` without loss of precision. We'll need to modify or remove the "Decimal Places" setting. 1. Edit the DB type options. Remove the `2` in the "Decimal Places" field (leaving the field blank) and save. 1. Observe a PATCH request to the columns API. 1. Expect the response to have `type_options` of `{ precision: 100, scale: null }` or `{ precision: 100 }`. 1. Observe the response to have `type_options` of `{ precision: 100, scale: 0 }`. 1. Expect the cell value within the newly-added row to still be `1.23`. 1. Observe that the value of the cell has changed to `1` because `scale` has been changed to `0`. This is especially bad because it's **data loss**. ## Notes - The [Postgres docs on numeric types ](https://www.postgresql.org/docs/current/datatype-numeric.html) appear to indicate that Postgres supports NUMERIC with a precision value and without a scale value.
True
Partial data loss when attempting to unset scale type_options independently of precision - ## Reproduce 1. Set up a Number column with "Decimal Places" = `2` and "Max Digits" = `100`. 1. Add row with a value of `1.23`. 1. Now imagine we'd like to also store the value `6.789` without loss of precision. We'll need to modify or remove the "Decimal Places" setting. 1. Edit the DB type options. Remove the `2` in the "Decimal Places" field (leaving the field blank) and save. 1. Observe a PATCH request to the columns API. 1. Expect the response to have `type_options` of `{ precision: 100, scale: null }` or `{ precision: 100 }`. 1. Observe the response to have `type_options` of `{ precision: 100, scale: 0 }`. 1. Expect the cell value within the newly-added row to still be `1.23`. 1. Observe that the value of the cell has changed to `1` because `scale` has been changed to `0`. This is especially bad because it's **data loss**. ## Notes - The [Postgres docs on numeric types ](https://www.postgresql.org/docs/current/datatype-numeric.html) appear to indicate that Postgres supports NUMERIC with a precision value and without a scale value.
main
partial data loss when attempting to unset scale type options independently of precision reproduce set up a number column with decimal places and max digits add row with a value of now imagine we d like to also store the value without loss of precision we ll need to modify or remove the decimal places setting edit the db type options remove the in the decimal places field leaving the field blank and save observe a patch request to the columns api expect the response to have type options of precision scale null or precision observe the response to have type options of precision scale expect the cell value within the newly added row to still be observe that the value of the cell has changed to because scale has been changed to this is especially bad because it s data loss notes the appear to indicate that postgres supports numeric with a precision value and without a scale value
1
108,521
16,777,961,211
IssuesEvent
2021-06-15 01:25:32
Thanraj/linux-4.1.15
https://api.github.com/repos/Thanraj/linux-4.1.15
opened
CVE-2021-3564 (Medium) detected in linux-stable-rtv4.1.33
security vulnerability
## CVE-2021-3564 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary> <p> <p>Julia Cartwright's fork of linux-stable-rt.git</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux-4.1.15/net/bluetooth/hci_core.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux-4.1.15/net/bluetooth/hci_core.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A flaw double-free memory corruption in the Linux kernel HCI device initialization subsystem was found in the way user attach malicious HCI TTY Bluetooth device. A local user could use this flaw to crash the system. This flaw affects all the Linux kernel versions starting from 3.13. 
<p>Publish Date: 2021-06-08 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3564>CVE-2021-3564</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2021-3564">https://www.linuxkernelcves.com/cves/CVE-2021-3564</a></p> <p>Release Date: 2021-06-08</p> <p>Fix Resolution: v5.13-rc4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-3564 (Medium) detected in linux-stable-rtv4.1.33 - ## CVE-2021-3564 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary> <p> <p>Julia Cartwright's fork of linux-stable-rt.git</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux-4.1.15/net/bluetooth/hci_core.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux-4.1.15/net/bluetooth/hci_core.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A flaw double-free memory corruption in the Linux kernel HCI device initialization subsystem was found in the way user attach malicious HCI TTY Bluetooth device. A local user could use this flaw to crash the system. This flaw affects all the Linux kernel versions starting from 3.13. 
<p>Publish Date: 2021-06-08 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3564>CVE-2021-3564</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2021-3564">https://www.linuxkernelcves.com/cves/CVE-2021-3564</a></p> <p>Release Date: 2021-06-08</p> <p>Fix Resolution: v5.13-rc4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_main
cve medium detected in linux stable cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in base branch master vulnerable source files linux net bluetooth hci core c linux net bluetooth hci core c vulnerability details a flaw double free memory corruption in the linux kernel hci device initialization subsystem was found in the way user attach malicious hci tty bluetooth device a local user could use this flaw to crash the system this flaw affects all the linux kernel versions starting from publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
113,711
17,150,882,489
IssuesEvent
2021-07-13 20:25:54
snowdensb/braindump
https://api.github.com/repos/snowdensb/braindump
opened
WS-2020-0345 (High) detected in jsonpointer-4.0.0.tgz
security vulnerability
## WS-2020-0345 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jsonpointer-4.0.0.tgz</b></p></summary> <p>Simple JSON Addressing.</p> <p>Library home page: <a href="https://registry.npmjs.org/jsonpointer/-/jsonpointer-4.0.0.tgz">https://registry.npmjs.org/jsonpointer/-/jsonpointer-4.0.0.tgz</a></p> <p>Path to dependency file: braindump/package.json</p> <p>Path to vulnerable library: braindump/node_modules/jsonpointer</p> <p> Dependency Hierarchy: - gulp-sass-2.3.2.tgz (Root Library) - node-sass-3.12.1.tgz - request-2.78.0.tgz - har-validator-2.0.6.tgz - is-my-json-valid-2.15.0.tgz - :x: **jsonpointer-4.0.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/snowdensb/braindump/commit/815ae0afebcf867f02143f3ab9cf88b1d4dacdec">815ae0afebcf867f02143f3ab9cf88b1d4dacdec</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Prototype Pollution vulnerability was found in jsonpointer before 4.1.0 via the set function. <p>Publish Date: 2020-07-03 <p>URL: <a href=https://github.com/janl/node-jsonpointer/commit/234e3437019c6c07537ed2ad1e03b3e132b85e34>WS-2020-0345</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.2</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: Low - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/janl/node-jsonpointer/releases/tag/v4.1.0">https://github.com/janl/node-jsonpointer/releases/tag/v4.1.0</a></p> <p>Release Date: 2020-07-03</p> <p>Fix Resolution: jsonpointer - 4.1.0</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"jsonpointer","packageVersion":"4.0.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"gulp-sass:2.3.2;node-sass:3.12.1;request:2.78.0;har-validator:2.0.6;is-my-json-valid:2.15.0;jsonpointer:4.0.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jsonpointer - 4.1.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"WS-2020-0345","vulnerabilityDetails":"Prototype Pollution vulnerability was found in jsonpointer before 4.1.0 via the set function.","vulnerabilityUrl":"https://github.com/janl/node-jsonpointer/commit/234e3437019c6c07537ed2ad1e03b3e132b85e34","cvss3Severity":"high","cvss3Score":"8.2","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
True
WS-2020-0345 (High) detected in jsonpointer-4.0.0.tgz - ## WS-2020-0345 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jsonpointer-4.0.0.tgz</b></p></summary> <p>Simple JSON Addressing.</p> <p>Library home page: <a href="https://registry.npmjs.org/jsonpointer/-/jsonpointer-4.0.0.tgz">https://registry.npmjs.org/jsonpointer/-/jsonpointer-4.0.0.tgz</a></p> <p>Path to dependency file: braindump/package.json</p> <p>Path to vulnerable library: braindump/node_modules/jsonpointer</p> <p> Dependency Hierarchy: - gulp-sass-2.3.2.tgz (Root Library) - node-sass-3.12.1.tgz - request-2.78.0.tgz - har-validator-2.0.6.tgz - is-my-json-valid-2.15.0.tgz - :x: **jsonpointer-4.0.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/snowdensb/braindump/commit/815ae0afebcf867f02143f3ab9cf88b1d4dacdec">815ae0afebcf867f02143f3ab9cf88b1d4dacdec</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Prototype Pollution vulnerability was found in jsonpointer before 4.1.0 via the set function. 
<p>Publish Date: 2020-07-03 <p>URL: <a href=https://github.com/janl/node-jsonpointer/commit/234e3437019c6c07537ed2ad1e03b3e132b85e34>WS-2020-0345</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.2</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: Low - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/janl/node-jsonpointer/releases/tag/v4.1.0">https://github.com/janl/node-jsonpointer/releases/tag/v4.1.0</a></p> <p>Release Date: 2020-07-03</p> <p>Fix Resolution: jsonpointer - 4.1.0</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"jsonpointer","packageVersion":"4.0.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"gulp-sass:2.3.2;node-sass:3.12.1;request:2.78.0;har-validator:2.0.6;is-my-json-valid:2.15.0;jsonpointer:4.0.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jsonpointer - 4.1.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"WS-2020-0345","vulnerabilityDetails":"Prototype Pollution vulnerability was found in jsonpointer before 4.1.0 via the set 
function.","vulnerabilityUrl":"https://github.com/janl/node-jsonpointer/commit/234e3437019c6c07537ed2ad1e03b3e132b85e34","cvss3Severity":"high","cvss3Score":"8.2","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
non_main
ws high detected in jsonpointer tgz ws high severity vulnerability vulnerable library jsonpointer tgz simple json addressing library home page a href path to dependency file braindump package json path to vulnerable library braindump node modules jsonpointer dependency hierarchy gulp sass tgz root library node sass tgz request tgz har validator tgz is my json valid tgz x jsonpointer tgz vulnerable library found in head commit a href found in base branch master vulnerability details prototype pollution vulnerability was found in jsonpointer before via the set function publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jsonpointer isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree gulp sass node sass request har validator is my json valid jsonpointer isminimumfixversionavailable true minimumfixversion jsonpointer basebranches vulnerabilityidentifier ws vulnerabilitydetails prototype pollution vulnerability was found in jsonpointer before via the set function vulnerabilityurl
0
91,166
18,357,471,555
IssuesEvent
2021-10-08 20:30:59
wmgeolab/geoBoundaries
https://api.github.com/repos/wmgeolab/geoBoundaries
opened
[FEATURE REQUEST]
codeBug
Either the metadata build or the API build is adding "nans" as sources when no source exists (rather than "").
1.0
[FEATURE REQUEST] - Either the metadata build or the API build is adding "nans" as sources when no source exists (rather than "").
non_main
either the metadata build or the api build is adding nans as sources when no source exists rather than
0
37,057
8,215,027,971
IssuesEvent
2018-09-05 02:53:50
ankitpokhrel/tus-php
https://api.github.com/repos/ankitpokhrel/tus-php
closed
redis host config get error
defect
In my laravel case, `REDIS_HOST` is not `127.0.0.1`, it config in laravel `.env` file, if i use the clear command bin like reademe : `./vendor/bin/tus tus:expired redis`, it will get an error ``` In AbstractConnection.php line 155: Connection refused [tcp://127.0.0.1:6379] ``` https://github.com/ankitpokhrel/tus-php/blob/master/src/Cache/CacheFactory.php#L20 because there can't get the `REDIS_HOST` env value, `./vendor/bin/tus tus:expired redis` don't run via laravel framework, so laravel config environment does't exist, my solution is make it run in laravel framework, add to laravel schedule job, like this: ``` app/Console/Kernel.php protected function schedule(Schedule $schedule) { $schedule->exec('./vendor/bin/tus tus:expired redis')->dailyAt('01:00'); } ``` I think you can update readme to help other people or do a better solution. 😎
1.0
redis host config get error - In my laravel case, `REDIS_HOST` is not `127.0.0.1`, it config in laravel `.env` file, if i use the clear command bin like reademe : `./vendor/bin/tus tus:expired redis`, it will get an error ``` In AbstractConnection.php line 155: Connection refused [tcp://127.0.0.1:6379] ``` https://github.com/ankitpokhrel/tus-php/blob/master/src/Cache/CacheFactory.php#L20 because there can't get the `REDIS_HOST` env value, `./vendor/bin/tus tus:expired redis` don't run via laravel framework, so laravel config environment does't exist, my solution is make it run in laravel framework, add to laravel schedule job, like this: ``` app/Console/Kernel.php protected function schedule(Schedule $schedule) { $schedule->exec('./vendor/bin/tus tus:expired redis')->dailyAt('01:00'); } ``` I think you can update readme to help other people or do a better solution. 😎
non_main
redis host config get error in my laravel case redis host is not it config in laravel env file if i use the clear command bin like reademe vendor bin tus tus expired redis it will get an error in abstractconnection php line connection refused because there can t get the redis host env value vendor bin tus tus expired redis don t run via laravel framework so laravel config environment does t exist my solution is make it run in laravel framework, add to laravel schedule job like this: app console kernel php protected function schedule schedule schedule schedule exec vendor bin tus tus expired redis dailyat i think you can update readme to help other people or do a better solution 😎
0
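The tus-php record above boils down to one fact: a CLI binary invoked outside the framework never sees values from the framework's `.env` file, so `REDIS_HOST` silently falls back to `127.0.0.1`. A minimal Python sketch of that workaround idea — parse the `.env` yourself and export the value when building the command. The file contents and the `./vendor/bin/tus` path are illustrative assumptions taken from the record, not a tested integration.

```python
# Sketch of the workaround in the tus-php record above: a standalone CLI
# call does not load the framework's .env, so REDIS_HOST must be read from
# the file explicitly before the command is invoked. Names are illustrative.
def load_dotenv(text: str) -> dict:
    """Parse simple KEY=VALUE lines from the contents of a .env file."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"').strip("'")
    return env


def build_command(env: dict) -> list:
    """Build the expiry command with REDIS_HOST exported; default to localhost."""
    host = env.get("REDIS_HOST", "127.0.0.1")
    return ["env", f"REDIS_HOST={host}", "./vendor/bin/tus", "tus:expired", "redis"]


if __name__ == "__main__":
    sample = "APP_NAME=demo\nREDIS_HOST=redis.internal\n"
    cmd = build_command(load_dotenv(sample))
    print(cmd[1])  # REDIS_HOST=redis.internal
```

Running the command inside the framework's own scheduler (as the record suggests) avoids the problem entirely, since the framework loads `.env` before the job runs.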
2,810
10,057,350,481
IssuesEvent
2019-07-22 11:24:10
simplesamlphp/simplesamlphp
https://api.github.com/repos/simplesamlphp/simplesamlphp
closed
\SimpleSAML\Store\SQL use of CREATE INDEX
enhancement low maintainability
I have begun testing upgrading to simplesamlphp 1.16.2, we use the SQL store with MySQL. One challenge I ran into is the table updates use the CREATE INDEX command to add an index to the _expire column. The problem was even though our db user had CREATE & ALTER permissions the CREATE INDEX command requires the system level INDEX permission. I was able to workaround this but if the index was created as either part of the CREATE TABLE or used ALTER TABLE then the db/table level permission would be sufficient. I am most familiar with MySQL, so I don't know if this limitation is an oddity of MySQL or how changing index creation would effect other database software. For reference https://bugs.mysql.com/bug.php?id=59767
True
\SimpleSAML\Store\SQL use of CREATE INDEX - I have begun testing upgrading to simplesamlphp 1.16.2, we use the SQL store with MySQL. One challenge I ran into is the table updates use the CREATE INDEX command to add an index to the _expire column. The problem was even though our db user had CREATE & ALTER permissions the CREATE INDEX command requires the system level INDEX permission. I was able to workaround this but if the index was created as either part of the CREATE TABLE or used ALTER TABLE then the db/table level permission would be sufficient. I am most familiar with MySQL, so I don't know if this limitation is an oddity of MySQL or how changing index creation would effect other database software. For reference https://bugs.mysql.com/bug.php?id=59767
main
simplesaml store sql use of create index i have begun testing upgrading to simplesamlphp we use the sql store with mysql one challenge i ran into is the table updates use the create index command to add an index to the expire column the problem was even though our db user had create alter permissions the create index command requires the system level index permission i was able to workaround this but if the index was created as either part of the create table or used alter table then the db table level permission would be sufficient i am most familiar with mysql so i don t know if this limitation is an oddity of mysql or how changing index creation would effect other database software for reference
1
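The SimpleSAML record above hinges on a MySQL quirk: `CREATE INDEX` requires the INDEX privilege, while the equivalent `ALTER TABLE ... ADD INDEX` needs only table-level ALTER. A small Python sketch of the rewrite the reporter is asking for — it only handles the simple single-statement shape, and the table/index names are illustrative, not taken from the actual schema.

```python
# The record above notes that in MySQL, `CREATE INDEX` needs the INDEX
# privilege while `ALTER TABLE ... ADD INDEX` only needs table-level ALTER.
# This rewrites the first form into the second for simple statements.
import re


def create_index_to_alter(stmt: str) -> str:
    """Rewrite `CREATE INDEX idx ON tbl (cols)` as `ALTER TABLE tbl ADD INDEX idx (cols)`."""
    m = re.match(
        r"\s*CREATE\s+INDEX\s+(\w+)\s+ON\s+(\w+)\s*\((.+)\)\s*;?\s*$",
        stmt,
        re.IGNORECASE,
    )
    if not m:
        raise ValueError("not a simple CREATE INDEX statement")
    index, table, cols = m.groups()
    return f"ALTER TABLE {table} ADD INDEX {index} ({cols});"


if __name__ == "__main__":
    # Hypothetical names modeled on the record's `_expire` column.
    print(create_index_to_alter("CREATE INDEX kvstore_expire ON kvstore (_expire);"))
    # ALTER TABLE kvstore ADD INDEX kvstore_expire (_expire);
```

Folding the index into the original `CREATE TABLE` statement would achieve the same thing at table-creation time, which is the other option the record mentions.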
81,917
7,807,253,563
IssuesEvent
2018-06-11 16:16:03
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
roachtest: tpmc/w=1/nodes=3 failed on master
C-test-failure O-robot
SHA: https://github.com/cockroachdb/cockroach/commits/2f2a1dfdef6abb338cab6fb821b8091227263939 Parameters: Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=709795&tab=buildLog ``` cluster.go:487: /home/agent/work/.go/bin/roachprod create teamcity-709795-tpmc-w-1-nodes-3 -n 4 --gce-machine-type=n1-standard-4 --gce-zones=us-central1-b,us-west1-b,europe-west2-b: exit status 1 ```
1.0
roachtest: tpmc/w=1/nodes=3 failed on master - SHA: https://github.com/cockroachdb/cockroach/commits/2f2a1dfdef6abb338cab6fb821b8091227263939 Parameters: Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=709795&tab=buildLog ``` cluster.go:487: /home/agent/work/.go/bin/roachprod create teamcity-709795-tpmc-w-1-nodes-3 -n 4 --gce-machine-type=n1-standard-4 --gce-zones=us-central1-b,us-west1-b,europe-west2-b: exit status 1 ```
non_main
roachtest tpmc w nodes failed on master sha parameters failed test cluster go home agent work go bin roachprod create teamcity tpmc w nodes n gce machine type standard gce zones us b us b europe b exit status
0
2,523
8,655,460,456
IssuesEvent
2018-11-27 16:00:35
codestation/qcma
https://api.github.com/repos/codestation/qcma
closed
Add a catagory for rePatch?
unmaintained
with the recent updates to game patching (translations, etc), would it be possible to add a category for rePatch? So on backup it can scan for the rePatch dir on ux0 for matching titleID folders and back those up, and on restore it can restore a matching rePatch titleID dir or ask to restore said folder along with the game/save data?
True
Add a catagory for rePatch? - with the recent updates to game patching (translations, etc), would it be possible to add a category for rePatch? So on backup it can scan for the rePatch dir on ux0 for matching titleID folders and back those up, and on restore it can restore a matching rePatch titleID dir or ask to restore said folder along with the game/save data?
main
add a catagory for repatch with the recent updates to game patching translations etc would it be possible to add a category for repatch so on backup it can scan for the repatch dir on for matching titleid folders and back those up and on restore it can restore a matching repatch titleid dir or ask to restore said folder along with the game save data
1
196,001
15,571,153,423
IssuesEvent
2021-03-17 04:14:11
Treescrub/AcornLib
https://api.github.com/repos/Treescrub/AcornLib
opened
Add example scripts
documentation
Add example scripts for each module, maybe write/rewrite a somewhat complicated mutation.
1.0
Add example scripts - Add example scripts for each module, maybe write/rewrite a somewhat complicated mutation.
non_main
add example scripts add example scripts for each module maybe write rewrite a somewhat complicated mutation
0
4,945
25,455,551,693
IssuesEvent
2022-11-24 13:55:24
pace/bricks
https://api.github.com/repos/pace/bricks
closed
Add metrics roundtripper and readme
T::Maintainance
We want to reduce the repeated efforts when dealing with REST / HTTP APIs. Follow up task to #53 ### Tasks * [ ] Add README * [ ] Metrics RoundTripper
True
Add metrics roundtripper and readme - We want to reduce the repeated efforts when dealing with REST / HTTP APIs. Follow up task to #53 ### Tasks * [ ] Add README * [ ] Metrics RoundTripper
main
add metrics roundtripper and readme we want to reduce the repeated efforts when dealing with rest http apis follow up task to tasks add readme metrics roundtripper
1
3,241
12,368,706,819
IssuesEvent
2020-05-18 14:13:31
Kashdeya/Tiny-Progressions
https://api.github.com/repos/Kashdeya/Tiny-Progressions
closed
Infinite Water Bucket does not work in Vanilla Dispensers
Version not Maintainted
Not much else to say here. It spits the actual item out, but doesn't place water.
True
Infinite Water Bucket does not work in Vanilla Dispensers - Not much else to say here. It spits the actual item out, but doesn't place water.
main
infinite water bucket does not work in vanilla dispensers not much else to say here it spits the actual item out but doesn t place water
1
1,914
6,577,641,954
IssuesEvent
2017-09-12 02:18:57
caskroom/homebrew-cask
https://api.github.com/repos/caskroom/homebrew-cask
closed
Casks with `signal` fail while uninstalling
awaiting maintainer feedback
#### General troubleshooting steps - [x] I have checked the instructions for [reporting bugs](https://github.com/caskroom/homebrew-cask#reporting-bugs) (or [making requests](https://github.com/caskroom/homebrew-cask#requests)) before opening the issue. - [x] None of the templates was appropriate for my issue, or I’m not sure. - [x] I ran `brew update-reset && brew update` and retried my command. - [x] I ran `brew doctor`, fixed as many issues as possible and retried my command. - [x] I understand that [if I ignore these instructions, my issue may be closed without review](https://github.com/caskroom/homebrew-cask/blob/master/doc/faq/closing_issues_without_review.md). #### Description of issue ``` brew cask uninstall macs-fan-control ==> Uninstalling Cask macs-fan-control ==> Running uninstall process for macs-fan-control; your password may be necessa ==> Signalling 'TERM' to application ID 'com.crystalidea.MacsFanControl' Error: unknown keywords: verbose, force ``` #### Output of your command with `--verbose --debug` ``` brew cask uninstall --verbose --debug macs-fan-control ==> Uninstalling Cask macs-fan-control ==> Uninstalling Cask macs-fan-control ==> Un-installing artifacts ==> Determining which artifacts are present in Cask macs-fan-control ==> 3 artifact/s defined Error: undefined local variable or method `summarize' for #<Hbc::Artifact::Uninstall:0x007fdfe90409e0> Follow the instructions here: https://github.com/caskroom/homebrew-cask#reporting-bugs /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/abstract_artifact.rb:67:in `to_s' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/utils.rb:23:in `puts' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/utils.rb:23:in `puts' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/utils.rb:23:in `odebug' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/installer.rb:379:in `uninstall_artifacts' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/installer.rb:370:in `uninstall' 
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/uninstall.rb:22:in `block in run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/uninstall.rb:12:in `each' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/uninstall.rb:12:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/abstract_command.rb:35:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:97:in `run_command' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:167:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:131:in `run' /usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask' /usr/local/Homebrew/Library/Homebrew/brew.rb:95:in `<main>' Error: Kernel.exit /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:178:in `exit' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:178:in `rescue in run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:155:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:131:in `run' /usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask' /usr/local/Homebrew/Library/Homebrew/brew.rb:95:in `<main>' ``` #### Output of `brew cask doctor` ``` HOMEBREW_VERSION: >1.2.0 (no git repository) ORIGIN: https://github.com/Homebrew/brew HEAD: ef67b77d95c6cad9e1ba027189a44876119d1739 Last commit: 6 hours ago Core tap ORIGIN: https://github.com/Homebrew/homebrew-core Core tap HEAD: 8889087e38b6e34910935a0da0819194b482b60f Core tap last commit: 14 hours ago HOMEBREW_PREFIX: /usr/local HOMEBREW_REPOSITORY: /usr/local/Homebrew HOMEBREW_CELLAR: /usr/local/Cellar HOMEBREW_BOTTLE_DOMAIN: https://homebrew.bintray.com CPU: dual-core 64-bit sandybridge Homebrew Ruby: 2.0.0-p648 Clang: 8.1 build 802 Git: 2.11.0 => /Library/Developer/CommandLineTools/usr/bin/git Perl: /usr/bin/perl Python: /usr/bin/python Ruby: /usr/bin/ruby => /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/bin/ruby Java: N/A macOS: 10.12.6-x86_64 Xcode: N/A CLT: 8.3.2.0.1.1492020469 X11: N/A ```
True
Casks with `signal` fail while uninstalling - #### General troubleshooting steps - [x] I have checked the instructions for [reporting bugs](https://github.com/caskroom/homebrew-cask#reporting-bugs) (or [making requests](https://github.com/caskroom/homebrew-cask#requests)) before opening the issue. - [x] None of the templates was appropriate for my issue, or I’m not sure. - [x] I ran `brew update-reset && brew update` and retried my command. - [x] I ran `brew doctor`, fixed as many issues as possible and retried my command. - [x] I understand that [if I ignore these instructions, my issue may be closed without review](https://github.com/caskroom/homebrew-cask/blob/master/doc/faq/closing_issues_without_review.md). #### Description of issue ``` brew cask uninstall macs-fan-control ==> Uninstalling Cask macs-fan-control ==> Running uninstall process for macs-fan-control; your password may be necessa ==> Signalling 'TERM' to application ID 'com.crystalidea.MacsFanControl' Error: unknown keywords: verbose, force ``` #### Output of your command with `--verbose --debug` ``` brew cask uninstall --verbose --debug macs-fan-control ==> Uninstalling Cask macs-fan-control ==> Uninstalling Cask macs-fan-control ==> Un-installing artifacts ==> Determining which artifacts are present in Cask macs-fan-control ==> 3 artifact/s defined Error: undefined local variable or method `summarize' for #<Hbc::Artifact::Uninstall:0x007fdfe90409e0> Follow the instructions here: https://github.com/caskroom/homebrew-cask#reporting-bugs /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/abstract_artifact.rb:67:in `to_s' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/utils.rb:23:in `puts' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/utils.rb:23:in `puts' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/utils.rb:23:in `odebug' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/installer.rb:379:in `uninstall_artifacts' 
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/installer.rb:370:in `uninstall' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/uninstall.rb:22:in `block in run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/uninstall.rb:12:in `each' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/uninstall.rb:12:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/abstract_command.rb:35:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:97:in `run_command' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:167:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:131:in `run' /usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask' /usr/local/Homebrew/Library/Homebrew/brew.rb:95:in `<main>' Error: Kernel.exit /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:178:in `exit' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:178:in `rescue in run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:155:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:131:in `run' /usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask' /usr/local/Homebrew/Library/Homebrew/brew.rb:95:in `<main>' ``` #### Output of `brew cask doctor` ``` HOMEBREW_VERSION: >1.2.0 (no git repository) ORIGIN: https://github.com/Homebrew/brew HEAD: ef67b77d95c6cad9e1ba027189a44876119d1739 Last commit: 6 hours ago Core tap ORIGIN: https://github.com/Homebrew/homebrew-core Core tap HEAD: 8889087e38b6e34910935a0da0819194b482b60f Core tap last commit: 14 hours ago HOMEBREW_PREFIX: /usr/local HOMEBREW_REPOSITORY: /usr/local/Homebrew HOMEBREW_CELLAR: /usr/local/Cellar HOMEBREW_BOTTLE_DOMAIN: https://homebrew.bintray.com CPU: dual-core 64-bit sandybridge Homebrew Ruby: 2.0.0-p648 Clang: 8.1 build 802 Git: 2.11.0 => /Library/Developer/CommandLineTools/usr/bin/git Perl: /usr/bin/perl Python: /usr/bin/python Ruby: /usr/bin/ruby => /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/bin/ruby Java: N/A macOS: 
10.12.6-x86_64 Xcode: N/A CLT: 8.3.2.0.1.1492020469 X11: N/A ```
main
casks with signal fail while uninstalling general troubleshooting steps i have checked the instructions for or before opening the issue none of the templates was appropriate for my issue or i’m not sure i ran brew update reset brew update and retried my command i ran brew doctor fixed as many issues as possible and retried my command i understand that description of issue brew cask uninstall macs fan control uninstalling cask macs fan control running uninstall process for macs fan control your password may be necessa signalling term to application id com crystalidea macsfancontrol error unknown keywords verbose force output of your command with verbose debug brew cask uninstall verbose debug macs fan control uninstalling cask macs fan control uninstalling cask macs fan control un installing artifacts determining which artifacts are present in cask macs fan control artifact s defined error undefined local variable or method summarize for follow the instructions here usr local homebrew library homebrew cask lib hbc artifact abstract artifact rb in to s usr local homebrew library homebrew cask lib hbc utils rb in puts usr local homebrew library homebrew cask lib hbc utils rb in puts usr local homebrew library homebrew cask lib hbc utils rb in odebug usr local homebrew library homebrew cask lib hbc installer rb in uninstall artifacts usr local homebrew library homebrew cask lib hbc installer rb in uninstall usr local homebrew library homebrew cask lib hbc cli uninstall rb in block in run usr local homebrew library homebrew cask lib hbc cli uninstall rb in each usr local homebrew library homebrew cask lib hbc cli uninstall rb in run usr local homebrew library homebrew cask lib hbc cli abstract command rb in run usr local homebrew library homebrew cask lib hbc cli rb in run command usr local homebrew library homebrew cask lib hbc cli rb in run usr local homebrew library homebrew cask lib hbc cli rb in run usr local homebrew library homebrew cmd cask rb in cask usr local 
homebrew library homebrew brew rb in error kernel exit usr local homebrew library homebrew cask lib hbc cli rb in exit usr local homebrew library homebrew cask lib hbc cli rb in rescue in run usr local homebrew library homebrew cask lib hbc cli rb in run usr local homebrew library homebrew cask lib hbc cli rb in run usr local homebrew library homebrew cmd cask rb in cask usr local homebrew library homebrew brew rb in output of brew cask doctor homebrew version no git repository origin head last commit hours ago core tap origin core tap head core tap last commit hours ago homebrew prefix usr local homebrew repository usr local homebrew homebrew cellar usr local cellar homebrew bottle domain cpu dual core bit sandybridge homebrew ruby clang build git library developer commandlinetools usr bin git perl usr bin perl python usr bin python ruby usr bin ruby system library frameworks ruby framework versions usr bin ruby java n a macos xcode n a clt n a
1
704
4,281,363,214
IssuesEvent
2016-07-15 02:22:33
duckduckgo/zeroclickinfo-goodies
https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies
closed
Nodejs Cheat Sheet: Doesn't trigger for Node.js
Maintainer Input Requested PR Received
IA gets triggered for "Nodejs cheatsheet" but not for "Node.js cheatsheet" ------ IA Page: http://duck.co/ia/view/nodejs_cheat_sheet [Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @stanly-johnson
True
Nodejs Cheat Sheet: Doesn't trigger for Node.js - IA gets triggered for "Nodejs cheatsheet" but not for "Node.js cheatsheet" ------ IA Page: http://duck.co/ia/view/nodejs_cheat_sheet [Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @stanly-johnson
main
nodejs cheat sheet doesn t trigger for node js ia gets triggered for nodejs cheatsheet but not for node js cheatsheet ia page stanly johnson
1
24,870
7,574,754,717
IssuesEvent
2018-04-23 22:06:34
envoyproxy/envoy
https://api.github.com/repos/envoyproxy/envoy
closed
`envoy --version` reports old git SHA
bug build
*Title*: `envoy --version` reports old git SHA *Description*: Re-linking envoy binary no longer updates the git SHA linkstamp shown by `envoy --version`. This may be due to a change in bazel 0.10.0, from the release notes, under *Important changes*: "- Linkstamping is now a separate and full-blown CppCompileAction, it's no longer a part of linking command." *Repro steps*: ``` $ bazel build //source/exe:envoy-static $ bazel-bin/source/exe/envoy-static --version bazel-bin/source/exe/envoy-static version: 97b69ce6a507471180c0585b6742c627962b433b/1.6.0-dev/Clean/DEBUG $ touch empty-file $ git add empty-file $ git commit -a -m "testing." $ git log --pretty=oneline 35931e6a89f38fe8ad3334e1d13bf056e2464f4d testing. 97b69ce6a507471180c0585b6742c627962b433b test: automatically registering integration ports from listener names (#2536) ... $ rm bazel-bin/source/exe/envoy-static $ bazel build //source/exe:envoy-static $ bazel-bin/source/exe/envoy-static --version bazel-bin/source/exe/envoy-static version: 97b69ce6a507471180c0585b6742c627962b433b/1.6.0-dev/Clean/DEBUG ``` Note that the version was not updated even though the binary was relinked.
1.0
`envoy --version` reports old git SHA - *Title*: `envoy --version` reports old git SHA *Description*: Re-linking envoy binary no longer updates the git SHA linkstamp shown by `envoy --version`. This may be due to a change in bazel 0.10.0, from the release notes, under *Important changes*: "- Linkstamping is now a separate and full-blown CppCompileAction, it's no longer a part of linking command." *Repro steps*: ``` $ bazel build //source/exe:envoy-static $ bazel-bin/source/exe/envoy-static --version bazel-bin/source/exe/envoy-static version: 97b69ce6a507471180c0585b6742c627962b433b/1.6.0-dev/Clean/DEBUG $ touch empty-file $ git add empty-file $ git commit -a -m "testing." $ git log --pretty=oneline 35931e6a89f38fe8ad3334e1d13bf056e2464f4d testing. 97b69ce6a507471180c0585b6742c627962b433b test: automatically registering integration ports from listener names (#2536) ... $ rm bazel-bin/source/exe/envoy-static $ bazel build //source/exe:envoy-static $ bazel-bin/source/exe/envoy-static --version bazel-bin/source/exe/envoy-static version: 97b69ce6a507471180c0585b6742c627962b433b/1.6.0-dev/Clean/DEBUG ``` Note that the version was not updated even though the binary was relinked.
non_main
envoy version reports old git sha title envoy version reports old git sha description re linking envoy binary no longer updates the git sha linkstamp shown by envoy version this may be due to a change in bazel from the release notes under important changes linkstamping is now a separate and full blown cppcompileaction it s no longer a part of linking command repro steps bazel build source exe envoy static bazel bin source exe envoy static version bazel bin source exe envoy static version dev clean debug touch empty file git add empty file git commit a m testing git log pretty oneline testing test automatically registering integration ports from listener names rm bazel bin source exe envoy static bazel build source exe envoy static bazel bin source exe envoy static version bazel bin source exe envoy static version dev clean debug note that the version was not updated even though the binary was relinked
0
4,282
21,527,987,331
IssuesEvent
2022-04-28 20:33:22
centerofci/mathesar
https://api.github.com/repos/centerofci/mathesar
closed
Error when trying to load tables
type: bug work: backend status: ready restricted: maintainers
## Prerequisite data [volumes.zip](https://github.com/centerofci/mathesar/files/8586108/volumes.zip) ## Reproduce 1. Go to `http://localhost:8000/mathesar_tables/2/?t=W1tdLG51bGxd` to load the "astronomy" schema. 1. Switch to the "public" schema. 1. Observe a 500 response from `/api/db/v0/tables/` with error code 4999 and the following message: > "Got KeyError when attempting to get a value for field `input` on serializer `BooleanDisplayOptionSerializer`.\nThe serializer field might be named incorrectly and not match any attribute or key on the `dict` instance.\nOriginal exception text was: 'input'."
True
Error when trying to load tables - ## Prerequisite data [volumes.zip](https://github.com/centerofci/mathesar/files/8586108/volumes.zip) ## Reproduce 1. Go to `http://localhost:8000/mathesar_tables/2/?t=W1tdLG51bGxd` to load the "astronomy" schema. 1. Switch to the "public" schema. 1. Observe a 500 response from `/api/db/v0/tables/` with error code 4999 and the following message: > "Got KeyError when attempting to get a value for field `input` on serializer `BooleanDisplayOptionSerializer`.\nThe serializer field might be named incorrectly and not match any attribute or key on the `dict` instance.\nOriginal exception text was: 'input'."
main
error when trying to load tables prerequisite data reproduce go to to load the astronomy schema switch to the public schema observe a response from api db tables with error code and the following message got keyerror when attempting to get a value for field input on serializer booleandisplayoptionserializer nthe serializer field might be named incorrectly and not match any attribute or key on the dict instance noriginal exception text was input
1
3,428
13,182,849,668
IssuesEvent
2020-08-12 16:27:42
MDAnalysis/mdanalysis
https://api.github.com/repos/MDAnalysis/mdanalysis
closed
replace deprecated imp with importlib
maintainability upstream
## Expected behavior ## <!-- A clear and concise description of what you want to do and what you think should happen. (Code to reproduce the behavior can be added below). --> No deprecation warnings for standard library imports. ## Actual behavior ## chemlib ~and H5MD~ raise > DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses (This is due to the mock in #2723 and ~#2787~ #2894 (open – will be corrected there).) ## Code to reproduce the behavior ## Run `pytest` with visible warnings. ## Current version of MDAnalysis ## - Which version are you using? (run `python -c "import MDAnalysis as mda; print(mda.__version__)"`) current dev - Which version of Python (`python -V`)? 3.7 - Which operating system? macOS
True
replace deprecated imp with importlib - ## Expected behavior ## <!-- A clear and concise description of what you want to do and what you think should happen. (Code to reproduce the behavior can be added below). --> No deprecation warnings for standard library imports. ## Actual behavior ## chemlib ~and H5MD~ raise > DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses (This is due to the mock in #2723 and ~#2787~ #2894 (open – will be corrected there).) ## Code to reproduce the behavior ## Run `pytest` with visible warnings. ## Current version of MDAnalysis ## - Which version are you using? (run `python -c "import MDAnalysis as mda; print(mda.__version__)"`) current dev - Which version of Python (`python -V`)? 3.7 - Which operating system? macOS
main
replace deprecated imp with importlib expected behavior no deprecation warnings for standard library imports actual behavior chemlib and raise deprecationwarning the imp module is deprecated in favour of importlib see the module s documentation for alternative uses this is due to the mock in and open – will be corrected there code to reproduce the behavior run pytest with visible warnings current version of mdanalysis which version are you using run python c import mdanalysis as mda print mda version current dev which version of python python v which operating system macos
1
1,129
4,998,415,489
IssuesEvent
2016-12-09 19:47:06
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
Using junos_config to overwrite config does not work
affects_2.2 bug_report networking waiting_on_maintainer
<!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME junos_config module + module_utils/junos.py ##### ANSIBLE VERSION ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ##### CONFIGURATION Stock - no changes ##### OS / ENVIRONMENT Running on Ubuntu 4.4.0.51, but should be platform independent. ##### SUMMARY The junos_config module documentation does not allow for overwriting configuration (similar to the **load override** Junos CLI command). The module documentation states that using the **replace: yes** option will work, but is considered deprecated; and to use the **update: replace** option instead. However, neither of these keywords actually work - **replace: yes** lets the playbook run, but does not actually perform a **load override** but a **load merge**; and **update: replace** fails with an unknown parameter for the module. Digging in the module and module_utils code, it seems that the expected parameter to use is actually **overwrite: yes**, but this also fails with an unknown keyword. The module also does not seem to call the load_config function from the module_utils/junos.py with the correct arguments, causing load_config to default the overwrite variable to False on init. In addition to this, it seems that the logic in the module_utils/junos.py resource file is wrong for the overwrite clause - it sets **merge = True** and **overwrite = False**; I'm guessing this should be the other way. ##### STEPS TO REPRODUCE Run a playbook with the junos_config command to any Junos device. Include a complete config as the source and try various combinations of the parameters described above (**replace: yes**, **update: replace** and **overwrite: yes**). 
When running with **replace: yes**; I suggest attempting this against a switch, and trying to change the VLAN of an access port (the playbook will fail stating you can only have a single VLAN on an access port, since it's merging rather than replacing) or against a router changing an interface IP address (instead of replacing, the config will add a second IP to the interface in question). The other two cases above will fail with a parameter error. <!--- Paste example playbooks or commands between quotes below --> ``` - name: Push config to devices hosts: it-office-switches gather_facts: no tasks: - name: Installing config junos_config: host: "{{ junos_ip }}" port: 22 username: "{{ junos_user }}" password: "{{ junos_password }}" update: replace comment: "Installing baseline config via Ansible" src: "{{ output_dir }}/config.conf" src_format: text ``` ##### EXPECTED RESULTS My goal was to push a config to a Junos device and have it apply it as if I ran a **load override** command. ##### ACTUAL RESULTS Configuration is either merged with the existing config (similar to a **load merge** command) when running **replace: yes** or playbook fails completely when using **update: replace** or **overwrite: yes**.
True
Using junos_config to overwrite config does not work - <!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME junos_config module + module_utils/junos.py ##### ANSIBLE VERSION ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ##### CONFIGURATION Stock - no changes ##### OS / ENVIRONMENT Running on Ubuntu 4.4.0.51, but should be platform independent. ##### SUMMARY The junos_config module documentation does not allow for overwriting configuration (similar to the **load override** Junos CLI command). The module documentation states that using the **replace: yes** option will work, but is considered deprecated; and to use the **update: replace** option instead. However, neither of these keywords actually work - **replace: yes** lets the playbook run, but does not actually perform a **load override** but a **load merge**; and **update: replace** fails with an unknown parameter for the module. Digging in the module and module_utils code, it seems that the expected parameter to use is actually **overwrite: yes**, but this also fails with an unknown keyword. The module also does not seem to call the load_config function from the module_utils/junos.py with the correct arguments, causing load_config to default the overwrite variable to False on init. In addition to this, it seems that the logic in the module_utils/junos.py resource file is wrong for the overwrite clause - it sets **merge = True** and **overwrite = False**; I'm guessing this should be the other way. ##### STEPS TO REPRODUCE Run a playbook with the junos_config command to any Junos device. Include a complete config as the source and try various combinations of the parameters described above (**replace: yes**, **update: replace** and **overwrite: yes**). 
When running with **replace: yes**; I suggest attempting this against a switch, and trying to change the VLAN of an access port (the playbook will fail stating you can only have a single VLAN on an access port, since it's merging rather than replacing) or against a router changing an interface IP address (instead of replacing, the config will add a second IP to the interface in question). The other two cases above will fail with a parameter error. <!--- Paste example playbooks or commands between quotes below --> ``` - name: Push config to devices hosts: it-office-switches gather_facts: no tasks: - name: Installing config junos_config: host: "{{ junos_ip }}" port: 22 username: "{{ junos_user }}" password: "{{ junos_password }}" update: replace comment: "Installing baseline config via Ansible" src: "{{ output_dir }}/config.conf" src_format: text ``` ##### EXPECTED RESULTS My goal was to push a config to a Junos device and have it apply it as if I ran a **load override** command. ##### ACTUAL RESULTS Configuration is either merged with the existing config (similar to a **load merge** command) when running **replace: yes** or playbook fails completely when using **update: replace** or **overwrite: yes**.
main
using junos config to overwrite config does not work issue type bug report component name junos config module module utils junos py ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration stock no changes os environment running on ubuntu but should be platform independent summary the junos config module documentation does not allow for overwriting configuration similar to the load override junos cli command the module documentation states that using the replace yes option will work but is considered deprecated and to use the update replace option instead however neither of these keywords actually work replace yes lets the playbook run but does not actually perform a load override but a load merge and update replace fails with an unknown parameter for the module digging in the module and module utils code it seems that the expected parameter to use is actually overwrite yes but this also fails with an unknown keyword the module also does not seem to call the load config function from the module utils junos py with the correct arguments causing load config to default the overwrite variable to false on init in addition to this it seems that the logic in the module utils junos py resource file is wrong for the overwrite clause it sets merge true and overwrite false i m guessing this should be the other way steps to reproduce run a playbook with the junos config command to any junos device include a complete config as the source and try various combinations of the parameters described above replace yes update replace and overwrite yes when running with replace yes i suggest attempting this against a switch and trying to change the vlan of an access port the playbook will fail stating you can only have a single vlan on an access port since it s merging rather than replacing or against a router changing an interface ip address instead of replacing the config will add a second ip to the interface in 
question the other two cases above will fail with a parameter error name push config to devices hosts it office switches gather facts no tasks name installing config junos config host junos ip port username junos user password junos password update replace comment installing baseline config via ansible src output dir config conf src format text expected results my goal was to push a config to a junos device and have it apply it as if i ran a load override command actual results configuration is either merged with the existing config similar to a load merge command when running replace yes or playbook fails completely when using update replace or overwrite yes
1
58,819
14,485,674,325
IssuesEvent
2020-12-10 17:53:23
kubevirt/kubevirt
https://api.github.com/repos/kubevirt/kubevirt
closed
[Flaky CI] Networking VirtualMachineInstance with custom MAC address in non-conventional format [test_id:1772]should configure custom MAC address
kind/bug sig/network triage/build-watcher
/triage build-officer /kind bug **What happened**: **What you expected to happen**: **How to reproduce it (as minimally and precisely as possible)**: **Anything else we need to know?**: **Environment**: - KubeVirt version (use `virtctl version`): - Kubernetes version (use `kubectl version`): - VM or VMI specifications: - Cloud provider or hardware configuration: - OS (e.g. from /etc/os-release): - Kernel (e.g. `uname -a`): - Install tools: - Others: https://prow.apps.ovirt.org/view/gcs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/4042/pull-kubevirt-e2e-k8s-1.16/1300726000866299904
1.0
[Flaky CI] Networking VirtualMachineInstance with custom MAC address in non-conventional format [test_id:1772]should configure custom MAC address - /triage build-officer /kind bug **What happened**: **What you expected to happen**: **How to reproduce it (as minimally and precisely as possible)**: **Anything else we need to know?**: **Environment**: - KubeVirt version (use `virtctl version`): - Kubernetes version (use `kubectl version`): - VM or VMI specifications: - Cloud provider or hardware configuration: - OS (e.g. from /etc/os-release): - Kernel (e.g. `uname -a`): - Install tools: - Others: https://prow.apps.ovirt.org/view/gcs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/4042/pull-kubevirt-e2e-k8s-1.16/1300726000866299904
non_main
networking virtualmachineinstance with custom mac address in non conventional format should configure custom mac address triage build officer kind bug what happened what you expected to happen how to reproduce it as minimally and precisely as possible anything else we need to know environment kubevirt version use virtctl version kubernetes version use kubectl version vm or vmi specifications cloud provider or hardware configuration os e g from etc os release kernel e g uname a install tools others
0
1,423
6,193,943,128
IssuesEvent
2017-07-05 08:40:59
ocaml/opam-repository
https://api.github.com/repos/ocaml/opam-repository
closed
Alpine 3.5 depext broken when combining mysql/mariadb and openssl
depext needs maintainer action
The Alpine Linux 3.5 mariadb-dev package requires libressl-dev. conf-ssl in opam depends on openssl-dev for Alpine. Alpine 3.5 has both packages available but they conflict with one another, making it impossible to install OCaml bindings for MySQL/MariaDB and bindings for ssl at the same time. One potential fix for this would be to change the alpine depext to use the libressl-dev package rather then openssl-dev. Unfortunately libressl-dev is not available on Alpine pre-3.5.
True
Alpine 3.5 depext broken when combining mysql/mariadb and openssl - The Alpine Linux 3.5 mariadb-dev package requires libressl-dev. conf-ssl in opam depends on openssl-dev for Alpine. Alpine 3.5 has both packages available but they conflict with one another, making it impossible to install OCaml bindings for MySQL/MariaDB and bindings for ssl at the same time. One potential fix for this would be to change the alpine depext to use the libressl-dev package rather then openssl-dev. Unfortunately libressl-dev is not available on Alpine pre-3.5.
main
alpine depext broken when combining mysql mariadb and openssl the alpine linux mariadb dev package requires libressl dev conf ssl in opam depends on openssl dev for alpine alpine has both packages available but they conflict with one another making it impossible to install ocaml bindings for mysql mariadb and bindings for ssl at the same time one potential fix for this would be to change the alpine depext to use the libressl dev package rather then openssl dev unfortunately libressl dev is not available on alpine pre
1
162,014
12,603,911,888
IssuesEvent
2020-06-11 14:12:19
ekzyis/cryptography
https://api.github.com/repos/ekzyis/cryptography
closed
Tests take very long since using bitstring
test
Since using the bitstring module, my tests take a considerable amount of time longer: ```Ran 143 tests in 7.253s``` Guess the guy on stackoverflow was right that performance-wise, bitstring is not a great choice...
1.0
Tests take very long since using bitstring - Since using the bitstring module, my tests take a considerable amount of time longer: ```Ran 143 tests in 7.253s``` Guess the guy on stackoverflow was right that performance-wise, bitstring is not a great choice...
non_main
tests take very long since using bitstring since using the bitstring module my tests take a considerable amount of time longer ran tests in guess the guy on stackoverflow was right that performance wise bitstring is not a great choice
0
540,387
15,807,242,877
IssuesEvent
2021-04-04 09:30:07
MattTheLegoman/RealmsInExile
https://api.github.com/repos/MattTheLegoman/RealmsInExile
opened
Sauron swears fealty to Saruman even though they are both Emperors
bug priority: high
_Reported by **GasPOGne** on Discord._ > Sauron somehow swore fealty to Saruman, despite already having an Empire tier title. <details><summary>Screenshot</summary> ![image](https://user-images.githubusercontent.com/78575425/113504530-64b61700-9541-11eb-8c11-da7c4957e3da.png) </details>
1.0
Sauron swears fealty to Saruman even though they are both Emperors - _Reported by **GasPOGne** on Discord._ > Sauron somehow swore fealty to Saruman, despite already having an Empire tier title. <details><summary>Screenshot</summary> ![image](https://user-images.githubusercontent.com/78575425/113504530-64b61700-9541-11eb-8c11-da7c4957e3da.png) </details>
non_main
sauron swears fealty to saruman even though they are both emperors reported by gaspogne on discord sauron somehow swore fealty to saruman despite already having an empire tier title screenshot
0
4,759
24,525,802,918
IssuesEvent
2022-10-11 13:03:27
centerofci/mathesar
https://api.github.com/repos/centerofci/mathesar
opened
Filter conditions removed if filter dialog is opened before data is loading is complete
type: enhancement work: frontend status: ready restricted: maintainers
## Steps to reproduce 1. Use the Library Management schema. 1. Go to the Publications table page. 1. Add a filter condition, specifying "Publication Year" is less than 1930. 1. Observe fewer results. Good. 1. Refresh the page, waiting for all requests to complete. 1. Open the filters dropdown and observe that the filter condition is still present. 1. Refresh the page, this time quickly opening the filter dropdown as soon as its trigger is present (but before all the data is loaded). 1. Expect to see the filter condition, still present. 1. Instead, observe that when all the data loading is complete, the filter condition is no longer present.
True
Filter conditions removed if filter dialog is opened before data is loading is complete - ## Steps to reproduce 1. Use the Library Management schema. 1. Go to the Publications table page. 1. Add a filter condition, specifying "Publication Year" is less than 1930. 1. Observe fewer results. Good. 1. Refresh the page, waiting for all requests to complete. 1. Open the filters dropdown and observe that the filter condition is still present. 1. Refresh the page, this time quickly opening the filter dropdown as soon as its trigger is present (but before all the data is loaded). 1. Expect to see the filter condition, still present. 1. Instead, observe that when all the data loading is complete, the filter condition is no longer present.
main
filter conditions removed if filter dialog is opened before data is loading is complete steps to reproduce use the library management schema go to the publications table page add a filter condition specifying publication year is less than observe fewer results good refresh the page waiting for all requests to complete open the filters dropdown and observe that the filter condition is still present refresh the page this time quickly opening the filter dropdown as soon as its trigger is present but before all the data is loaded expect to see the filter condition still present instead observe that when all the data loading is complete the filter condition is no longer present
1
47,273
5,873,959,541
IssuesEvent
2017-05-15 15:04:51
ValveSoftware/steam-for-linux
https://api.github.com/repos/ValveSoftware/steam-for-linux
closed
Swapped name in the "Installing" popup
Need Retest reviewed Steam client
I decided to install BeatHazard, and while I was doing that, activate a key from a bundle (specifically, Demigod). And when I checked both again after afk, I saw that the install window said that it was installing Demigod, even though I had made no action in that direction. Now, it didn't install Demigod (good, as it isn't a SFL game), but it confused me for a second there.
1.0
Swapped name in the "Installing" popup - I decided to install BeatHazard, and while I was doing that, activate a key from a bundle (specifically, Demigod). And when I checked both again after afk, I saw that the install window said that it was installing Demigod, even though I had made no action in that direction. Now, it didn't install Demigod (good, as it isn't a SFL game), but it confused me for a second there.
non_main
swapped name in the installing popup i decided to install beathazard and while i was doing that activate a key from a bundle specifically demigod and when i checked both again after afk i saw that the install window said that it was installing demigod even though i had made no action in that direction now it didn t install demigod good as it isn t a sfl game but it confused me for a second there
0
527,346
15,340,549,409
IssuesEvent
2021-02-27 07:39:43
AY2021S2-CS2103T-W15-3/tp
https://api.github.com/repos/AY2021S2-CS2103T-W15-3/tp
reopened
As a restaurant owner I want to remove customers
priority.high type.Story
... so that I can remove customers who no longer patronize the retaurant
1.0
As a restaurant owner I want to remove customers - ... so that I can remove customers who no longer patronize the retaurant
non_main
as a restaurant owner i want to remove customers so that i can remove customers who no longer patronize the retaurant
0
5,290
26,733,638,871
IssuesEvent
2023-01-30 07:39:30
bazelbuild/intellij
https://api.github.com/repos/bazelbuild/intellij
closed
Error importing a library when using the plugin
type: bug P3 lang: go product: GoLand awaiting-maintainer
Despite the libraries existing in my workspace , I keep on getting this error `Build constraints exclude all Go files in //path/to/library` . I am not sure exactly how to interpret this error correctly as I have no issues importing other libraries. This has lead to syntax highlighting being red over all my source files as the ide keeps on showing the import as invalid. To reproduce this issue locally: - Create a bazel project from this https://github.com/prysmaticlabs/prysm. - After syncing the project , open up any source file(ex : `beacon-chain/blockchain/service.go`) which imports this package: `github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1` - You will then get the error ` Build constraints exclude all Go files ` , which leads to syntax highlighting of any obejct used from that package being red as it is regarded as invalid.
True
Error importing a library when using the plugin - Despite the libraries existing in my workspace , I keep on getting this error `Build constraints exclude all Go files in //path/to/library` . I am not sure exactly how to interpret this error correctly as I have no issues importing other libraries. This has lead to syntax highlighting being red over all my source files as the ide keeps on showing the import as invalid. To reproduce this issue locally: - Create a bazel project from this https://github.com/prysmaticlabs/prysm. - After syncing the project , open up any source file(ex : `beacon-chain/blockchain/service.go`) which imports this package: `github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1` - You will then get the error ` Build constraints exclude all Go files ` , which leads to syntax highlighting of any obejct used from that package being red as it is regarded as invalid.
main
error importing a library when using the plugin despite the libraries existing in my workspace i keep on getting this error build constraints exclude all go files in path to library i am not sure exactly how to interpret this error correctly as i have no issues importing other libraries this has lead to syntax highlighting being red over all my source files as the ide keeps on showing the import as invalid to reproduce this issue locally create a bazel project from this after syncing the project open up any source file ex beacon chain blockchain service go which imports this package github com prysmaticlabs prysm proto beacon you will then get the error build constraints exclude all go files which leads to syntax highlighting of any obejct used from that package being red as it is regarded as invalid
1
1,766
6,575,023,003
IssuesEvent
2017-09-11 14:48:22
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
'azure_rm_subnet' with 'state: absent' fails when subnet was already not existing
affects_2.1 azure bug_report cloud waiting_on_maintainer
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME azure_rm_subnet ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = /home/techraf/devops/infra-azure/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Debian Jessie ##### SUMMARY `azure_rm_subnet` requires the parameter `virtual_network_name` to be provided, otherwise: ``` fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "missing required arguments: virtual_network_name"} ``` **But** running the module with `state: absent` fails when the virtual network specified in `virtual_network_name` is non-existent. ##### STEPS TO REPRODUCE Delete the virtual_network then run the `azure_rm_subnet` task with `state: absent`. ``` - name: Ensure subnet does not exist azure_rm_subnet: resource_group: Testing name: subnet001 state: absent virtual_network_name: testvn001 ``` ##### EXPECTED RESULTS ``` ok: [localhost] ``` ##### ACTUAL RESULTS ``` TASK [Ensure subnet does not exist] ******************************************** fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Error deleting subnet subnet001 - The Resource 'Microsoft.Network/virtualNetworks/testvn001' under resource group 'Testing' was not found."} ```
True
'azure_rm_subnet' with 'state: absent' fails when subnet was already not existing - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME azure_rm_subnet ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = /home/techraf/devops/infra-azure/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Debian Jessie ##### SUMMARY `azure_rm_subnet` requires the parameter `virtual_network_name` to be provided, otherwise: ``` fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "missing required arguments: virtual_network_name"} ``` **But** running the module with `state: absent` fails when the virtual network specified in `virtual_network_name` is non-existent. ##### STEPS TO REPRODUCE Delete the virtual_network then run the `azure_rm_subnet` task with `state: absent`. ``` - name: Ensure subnet does not exist azure_rm_subnet: resource_group: Testing name: subnet001 state: absent virtual_network_name: testvn001 ``` ##### EXPECTED RESULTS ``` ok: [localhost] ``` ##### ACTUAL RESULTS ``` TASK [Ensure subnet does not exist] ******************************************** fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Error deleting subnet subnet001 - The Resource 'Microsoft.Network/virtualNetworks/testvn001' under resource group 'Testing' was not found."} ```
main
azure rm subnet with state absent fails when subnet was already not existing issue type bug report component name azure rm subnet ansible version ansible config file home techraf devops infra azure ansible cfg configured module search path default w o overrides configuration os environment debian jessie summary azure rm subnet requires the parameter virtual network name to be provided otherwise fatal failed changed false failed true msg missing required arguments virtual network name but running the module with state absent fails when the virtual network specified in virtual network name is non existent steps to reproduce delete the virtual network then run the azure rm subnet task with state absent name ensure subnet does not exist azure rm subnet resource group testing name state absent virtual network name expected results ok actual results task fatal failed changed false failed true msg error deleting subnet the resource microsoft network virtualnetworks under resource group testing was not found
1
148,664
5,694,369,150
IssuesEvent
2017-04-15 12:36:17
AlbatrossAvionics/Alba-2017
https://api.github.com/repos/AlbatrossAvionics/Alba-2017
closed
#6 mpuのリファクタリング
low priority
- 去年はmpuから値を取得することはできたが、値の安定が遅いなど不安定な面が多かった。そこで今年は去年使ったコードのリファクタリングを行い、速度向上を目指す。
1.0
#6 mpuのリファクタリング - - 去年はmpuから値を取得することはできたが、値の安定が遅いなど不安定な面が多かった。そこで今年は去年使ったコードのリファクタリングを行い、速度向上を目指す。
non_main
mpuのリファクタリング 去年はmpuから値を取得することはできたが、値の安定が遅いなど不安定な面が多かった。そこで今年は去年使ったコードのリファクタリングを行い、速度向上を目指す。
0
1,815
6,577,317,912
IssuesEvent
2017-09-12 00:04:14
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
AWS - ec2 should support unique instance criteria by VPC/Subnet
affects_2.1 aws bug_report cloud feature_idea waiting_on_maintainer
<!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report/Feature Idea ##### COMPONENT NAME Module: ec2 ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> 2.1.0.0 (but happens in older versions) ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> n/a ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> n/a ##### SUMMARY <!--- Explain the problem briefly --> ec2 module should support indempodence by VPC/Subnet ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> - Create playbook which creates an EC2 instance into a subnet (exact_count = 1) - Re-run -- notice it does not re-create the instance as expected - Modify playbook, add second EC2 instance -- same exact parameters including name but different VPC subnet - Re-run -- notice it does NOT create the new instance even though it should be located in a different subnet or VPC altogether - Modify playbook, modify second EC2 instance -- change name of the instance - Re-run -- notice it does create the new instance because of a different name altogether ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> Ansible should have created a second EC2 instance EC2 should respect vpc_subnet_id as unique criteria ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> Ansible did NOT create the second new instance in a different subnet because of the same parameters.
True
AWS - ec2 should support unique instance criteria by VPC/Subnet - <!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report/Feature Idea ##### COMPONENT NAME Module: ec2 ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> 2.1.0.0 (but happens in older versions) ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> n/a ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> n/a ##### SUMMARY <!--- Explain the problem briefly --> ec2 module should support indempodence by VPC/Subnet ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> - Create playbook which creates an EC2 instance into a subnet (exact_count = 1) - Re-run -- notice it does not re-create the instance as expected - Modify playbook, add second EC2 instance -- same exact parameters including name but different VPC subnet - Re-run -- notice it does NOT create the new instance even though it should be located in a different subnet or VPC altogether - Modify playbook, modify second EC2 instance -- change name of the instance - Re-run -- notice it does create the new instance because of a different name altogether ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> Ansible should have created a second EC2 instance EC2 should respect vpc_subnet_id as unique criteria ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> Ansible did NOT create the second new instance in a different subnet because of the same parameters.
main
aws should support unique instance criteria by vpc subnet issue type bug report feature idea component name module ansible version but happens in older versions configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables n a os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary module should support indempodence by vpc subnet steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used create playbook which creates an instance into a subnet exact count re run notice it does not re create the instance as expected modify playbook add second instance same exact parameters including name but different vpc subnet re run notice it does not create the new instance even though it should be located in a different subnet or vpc altogether modify playbook modify second instance change name of the instance re run notice it does create the new instance because of a different name altogether expected results ansible should have created a second instance should respect vpc subnet id as unique criteria actual results ansible did not create the second new instance in a different subnet because of the same parameters
1
1,060
4,876,982,824
IssuesEvent
2016-11-16 14:30:24
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
reopened
ios_facts: `dir all-filesystems | include Directory`not supported on all devices
affects_2.2 bug_report networking waiting_on_maintainer
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ios_facts ##### ANSIBLE VERSION ``` ansible --version ansible 2.2.0 (devel 9fe4308670) last updated 2016/09/06 19:17:13 (GMT +1100) lib/ansible/modules/core: (detached HEAD 982c4557d2) last updated 2016/09/06 19:17:23 (GMT +1100) lib/ansible/modules/extras: (detached HEAD 06bd2a5ce2) last updated 2016/09/06 19:17:32 (GMT +1100) config file = /etc/ansible/ansible.cfg configured module search path = ['/usr/share/my_modules/'] ``` ##### CONFIGURATION ##### OS / ENVIRONMENT 3750 flash:c3750-advipservicesk9-mz.122-44.SE4.bin" WS-C3750-24PS-S ##### SUMMARY ogenstad: > `dir all-filesystems | include Directory` this is not a valid command on all ios devices. > If it’s used in the ios_facts module there needs to be some checks to catch those errors. > I haven’t tested the ios_facts module yet, but if you can just disable that check I’m guessing it would work. > I.e. not use `gather_subset: all` Thanks to @ben-cirrus (from networktocode Slack) for this bug report ##### STEPS TO REPRODUCE <!--- - hosts: "{{ hosts }}" any_errors_fatal: true connection: local gather_facts: no vars: cli: host: "{{ ip_addr }}" username: "{{ user }}" password: "{{ password }}" transport: cli tasks: - ios_facts: provider: "{{ cli }}" gather_subset: all [lab] labswitch ip_addr=10.254.9.11 --> ##### EXPECTED RESULTS No backtrace, facts returned ##### ACTUAL RESULTS ``` Using /etc/ansible/ansible.cfg as config file PLAYBOOK: get_ios_facts.yml **************************************************** 1 plays in get_ios_facts.yml PLAY [lab,] ******************************************************************** TASK [ios_facts] *************************************************************** task path: /root/napalm-testing/get_ios_facts.yml:14 Using module file /root/ansible/lib/ansible/modules/core/network/ios/ios_facts.py <labswitch> ESTABLISH LOCAL CONNECTION FOR USER: root <labswitch> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo 
$HOME/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277 `" && echo ansible-tmp-1473154099.74-15933157338277="` echo $HOME/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277 `" ) && sleep 0' <labswitch> PUT /tmp/tmpxzHJfd TO /root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/ios_facts.py <labswitch> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/ /root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/ios_facts.py && sleep 0' <labswitch> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/ios_facts.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File "/tmp/ansible_6pqI6u/ansible_module_ios_facts.py", line 455, in <module> main() File "/tmp/ansible_6pqI6u/ansible_module_ios_facts.py", line 437, in main runner.run() File "/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/netcli.py", line 163, in run File "/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/netcli.py", line 88, in run_commands File "/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/ios.py", line 66, in run_commands File "/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/shell.py", line 252, in execute ansible.module_utils.network.NetworkError: matched error in response: dir all-filesystems | include Directory ^ % Invalid input detected at '^' marker. NSW-CHQ-SW-LAB# fatal: [labswitch]: FAILED! 
=> { "changed": false, "failed": true, "invocation": { "module_name": "ios_facts" }, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_6pqI6u/ansible_module_ios_facts.py\", line 455, in <module>\n main()\n File \"/tmp/ansible_6pqI6u/ansible_module_ios_facts.py\", line 437, in main\n runner.run()\n File \"/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/netcli.py\", line 163, in run\n File \"/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/netcli.py\", line 88, in run_commands\n File \"/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/ios.py\", line 66, in run_commands\n File \"/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/shell.py\", line 252, in execute\nansible.module_utils.network.NetworkError: matched error in response: dir all-filesystems | include Directory\r\n ^\r\n% Invalid input detected at '^' marker.\r\n\r\nNSW-CHQ-SW-LAB#\n", "module_stdout": "", "msg": "MODULE FAILURE" } to retry, use: --limit @get_ios_facts.retry PLAY RECAP ********************************************************************* labswitch : ok=0 changed=0 unreachable=0 failed=1 ```
index: True
ios_facts: `dir all-filesystems | include Directory`not supported on all devices - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ios_facts ##### ANSIBLE VERSION ``` ansible --version ansible 2.2.0 (devel 9fe4308670) last updated 2016/09/06 19:17:13 (GMT +1100) lib/ansible/modules/core: (detached HEAD 982c4557d2) last updated 2016/09/06 19:17:23 (GMT +1100) lib/ansible/modules/extras: (detached HEAD 06bd2a5ce2) last updated 2016/09/06 19:17:32 (GMT +1100) config file = /etc/ansible/ansible.cfg configured module search path = ['/usr/share/my_modules/'] ``` ##### CONFIGURATION ##### OS / ENVIRONMENT 3750 flash:c3750-advipservicesk9-mz.122-44.SE4.bin" WS-C3750-24PS-S ##### SUMMARY ogenstad: > `dir all-filesystems | include Directory` this is not a valid command on all ios devices. > If it’s used in the ios_facts module there needs to be some checks to catch those errors. > I haven’t tested the ios_facts module yet, but if you can just disable that check I’m guessing it would work. > I.e. 
not use `gather_subset: all` Thanks to @ben-cirrus (from networktocode Slack) for this bug report ##### STEPS TO REPRODUCE <!--- - hosts: "{{ hosts }}" any_errors_fatal: true connection: local gather_facts: no vars: cli: host: "{{ ip_addr }}" username: "{{ user }}" password: "{{ password }}" transport: cli tasks: - ios_facts: provider: "{{ cli }}" gather_subset: all [lab] labswitch ip_addr=10.254.9.11 --> ##### EXPECTED RESULTS No backtrace, facts returned ##### ACTUAL RESULTS ``` Using /etc/ansible/ansible.cfg as config file PLAYBOOK: get_ios_facts.yml **************************************************** 1 plays in get_ios_facts.yml PLAY [lab,] ******************************************************************** TASK [ios_facts] *************************************************************** task path: /root/napalm-testing/get_ios_facts.yml:14 Using module file /root/ansible/lib/ansible/modules/core/network/ios/ios_facts.py <labswitch> ESTABLISH LOCAL CONNECTION FOR USER: root <labswitch> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277 `" && echo ansible-tmp-1473154099.74-15933157338277="` echo $HOME/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277 `" ) && sleep 0' <labswitch> PUT /tmp/tmpxzHJfd TO /root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/ios_facts.py <labswitch> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/ /root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/ios_facts.py && sleep 0' <labswitch> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/ios_facts.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1473154099.74-15933157338277/" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. 
The full traceback is: Traceback (most recent call last): File "/tmp/ansible_6pqI6u/ansible_module_ios_facts.py", line 455, in <module> main() File "/tmp/ansible_6pqI6u/ansible_module_ios_facts.py", line 437, in main runner.run() File "/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/netcli.py", line 163, in run File "/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/netcli.py", line 88, in run_commands File "/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/ios.py", line 66, in run_commands File "/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/shell.py", line 252, in execute ansible.module_utils.network.NetworkError: matched error in response: dir all-filesystems | include Directory ^ % Invalid input detected at '^' marker. NSW-CHQ-SW-LAB# fatal: [labswitch]: FAILED! => { "changed": false, "failed": true, "invocation": { "module_name": "ios_facts" }, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_6pqI6u/ansible_module_ios_facts.py\", line 455, in <module>\n main()\n File \"/tmp/ansible_6pqI6u/ansible_module_ios_facts.py\", line 437, in main\n runner.run()\n File \"/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/netcli.py\", line 163, in run\n File \"/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/netcli.py\", line 88, in run_commands\n File \"/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/ios.py\", line 66, in run_commands\n File \"/tmp/ansible_6pqI6u/ansible_modlib.zip/ansible/module_utils/shell.py\", line 252, in execute\nansible.module_utils.network.NetworkError: matched error in response: dir all-filesystems | include Directory\r\n ^\r\n% Invalid input detected at '^' marker.\r\n\r\nNSW-CHQ-SW-LAB#\n", "module_stdout": "", "msg": "MODULE FAILURE" } to retry, use: --limit @get_ios_facts.retry PLAY RECAP ********************************************************************* labswitch : ok=0 changed=0 unreachable=0 failed=1 ```
label: main
ios facts dir all filesystems include directory not supported on all devices issue type bug report component name ios facts ansible version ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file etc ansible ansible cfg configured module search path configuration os environment flash mz bin ws s summary ogenstad dir all filesystems include directory this is not a valid command on all ios devices if it’s used in the ios facts module there needs to be some checks to catch those errors i haven’t tested the ios facts module yet but if you can just disable that check i’m guessing it would work i e not use gather subset all thanks to ben cirrus from networktocode slack for this bug report steps to reproduce hosts hosts any errors fatal true connection local gather facts no vars cli host ip addr username user password password transport cli tasks ios facts provider cli gather subset all labswitch ip addr expected results no backtrace facts returned actual results using etc ansible ansible cfg as config file playbook get ios facts yml plays in get ios facts yml play task task path root napalm testing get ios facts yml using module file root ansible lib ansible modules core network ios ios facts py establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpxzhjfd to root ansible tmp ansible tmp ios facts py exec bin sh c chmod u x root ansible tmp ansible tmp root ansible tmp ansible tmp ios facts py sleep exec bin sh c usr bin python root ansible tmp ansible tmp ios facts py rm rf root ansible tmp ansible tmp dev null sleep an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible ansible module ios facts py line in main file tmp ansible ansible module ios facts py line in main runner run 
file tmp ansible ansible modlib zip ansible module utils netcli py line in run file tmp ansible ansible modlib zip ansible module utils netcli py line in run commands file tmp ansible ansible modlib zip ansible module utils ios py line in run commands file tmp ansible ansible modlib zip ansible module utils shell py line in execute ansible module utils network networkerror matched error in response dir all filesystems include directory invalid input detected at marker nsw chq sw lab fatal failed changed false failed true invocation module name ios facts module stderr traceback most recent call last n file tmp ansible ansible module ios facts py line in n main n file tmp ansible ansible module ios facts py line in main n runner run n file tmp ansible ansible modlib zip ansible module utils netcli py line in run n file tmp ansible ansible modlib zip ansible module utils netcli py line in run commands n file tmp ansible ansible modlib zip ansible module utils ios py line in run commands n file tmp ansible ansible modlib zip ansible module utils shell py line in execute nansible module utils network networkerror matched error in response dir all filesystems include directory r n r n invalid input detected at marker r n r nnsw chq sw lab n module stdout msg module failure to retry use limit get ios facts retry play recap labswitch ok changed unreachable failed
binary_label: 1
Unnamed: 0: 1,973
id: 6,694,171,877
type: IssuesEvent
created_at: 2017-10-10 00:04:49
repo: duckduckgo/zeroclickinfo-spice
repo_url: https://api.github.com/repos/duckduckgo/zeroclickinfo-spice
action: closed
title: Amazon: overtriggering on currency search
labels: Maintainer Input Requested Relevancy Triggering
This IA was triggered when I was testing a currency search for "£1,099.00," and I was expecting the conversion IA. --- IA Page: http://duck.co/ia/view/products [Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @bsstoner
index: True
Amazon: overtriggering on currency search - This IA was triggered when I was testing a currency search for "£1,099.00," and I was expecting the conversion IA. --- IA Page: http://duck.co/ia/view/products [Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @bsstoner
label: main
amazon overtriggering on currency search this ia was triggered when i was testing a currency search for £ and i was expecting the conversion ia ia page bsstoner
binary_label: 1
Unnamed: 0: 168,563
id: 6,378,148,603
type: IssuesEvent
created_at: 2017-08-02 11:59:34
repo: zero-os/0-orchestrator
repo_url: https://api.github.com/repos/zero-os/0-orchestrator
action: closed
title: Selfhealing: rotate log files
labels: priority_major state_question state_verification type_feature
Implement in node service monitor action Find all logs known logs files and log rotate them via logrotate command if total amount of logs exceed 1GB See https://github.com/0-complexity/selfhealing/blob/master/jumpscripts/maintenance/logs_truncate.py
index: 1.0
Selfhealing: rotate log files - Implement in node service monitor action Find all logs known logs files and log rotate them via logrotate command if total amount of logs exceed 1GB See https://github.com/0-complexity/selfhealing/blob/master/jumpscripts/maintenance/logs_truncate.py
label: non_main
selfhealing rotate log files implement in node service monitor action find all logs known logs files and log rotate them via logrotate command if total amount of logs exceed see
binary_label: 0
Unnamed: 0: 65,480
id: 27,115,390,077
type: IssuesEvent
created_at: 2023-02-15 18:09:27
repo: microsoft/winget-pkgs
repo_url: https://api.github.com/repos/microsoft/winget-pkgs
action: closed
title: P.S.A. Validation Pipeline and Bot Issues
labels: Public-Service-Announcement fabric-bot
All, We've been encountering issues with the validation infrastructure. We're working on this with our provider. This has also been compounded by a simultaneous issue related to the bot helping apply labels and merge PRs. **Edit: 2/6** - The bot seems to be getting back to normal again. We're manually triggering builds to try and increase throughput as well as applying labels.
index: 1.0
P.S.A. Validation Pipeline and Bot Issues - All, We've been encountering issues with the validation infrastructure. We're working on this with our provider. This has also been compounded by a simultaneous issue related to the bot helping apply labels and merge PRs. **Edit: 2/6** - The bot seems to be getting back to normal again. We're manually triggering builds to try and increase throughput as well as applying labels.
label: non_main
p s a validation pipeline and bot issues all we ve been encountering issues with the validation infrastructure we re working on this with our provider this has also been compounded by a simultaneous issue related to the bot helping apply labels and merge prs edit the bot seems to be getting back to normal again we re manually triggering builds to try and increase throughput as well as applying labels
binary_label: 0
Unnamed: 0: 21,789
id: 3,555,873,173
type: IssuesEvent
created_at: 2016-01-22 00:42:54
repo: jacricelli/facturacionafip
repo_url: https://api.github.com/repos/jacricelli/facturacionafip
action: closed
title: La aplicación no funciona sin acceso a internet
labels: auto-migrated Priority-Low Type-Defect
``` What steps will reproduce the problem? 1. Deje a su máquina sin acceso a internet 2. intente generar un comprobante habría que capturar el error y decir que no tiene acceso a internet ``` Original issue reported on code.google.com by `abelfil...@gmail.com` on 5 Oct 2009 at 9:22
index: 1.0
La aplicación no funciona sin acceso a internet - ``` What steps will reproduce the problem? 1. Deje a su máquina sin acceso a internet 2. intente generar un comprobante habría que capturar el error y decir que no tiene acceso a internet ``` Original issue reported on code.google.com by `abelfil...@gmail.com` on 5 Oct 2009 at 9:22
label: non_main
la aplicación no funciona sin acceso a internet what steps will reproduce the problem deje a su máquina sin acceso a internet intente generar un comprobante habría que capturar el error y decir que no tiene acceso a internet original issue reported on code google com by abelfil gmail com on oct at
binary_label: 0
Unnamed: 0: 68,330
id: 9,167,611,182
type: IssuesEvent
created_at: 2019-03-02 15:15:24
repo: rstoneback/pysat
repo_url: https://api.github.com/repos/rstoneback/pysat
action: closed
title: Documentation lacks information on sat_id functionality and use.
labels: documentation
Some missions use multiple satellites and require an ID to differentiate. Currently, there is no documentation on how to do this properly.
index: 1.0
Documentation lacks information on sat_id functionality and use. - Some missions use multiple satellites and require an ID to differentiate. Currently, there is no documentation on how to do this properly.
label: non_main
documentation lacks information on sat id functionality and use some missions use multiple satellites and require an id to differentiate currently there is no documentation on how to do this properly
binary_label: 0
Unnamed: 0: 3,137
id: 12,043,282,447
type: IssuesEvent
created_at: 2020-04-14 12:09:08
repo: arcticicestudio/igloo
repo_url: https://api.github.com/repos/arcticicestudio/igloo
action: opened
title: XDG application desktop file cleanup
labels: scope-maintainability snowblock-xdg type-improvement
There are `.desktop` files for applications that are not used anymore for some time now as well as applications that don't require a user-level launcher file anymore. - `atom.desktop` — [Atom][] is not used anymore since at least February 23 2019 and has been replaced by [Visual Studio Code][vscode] in #179. - `evolution.desktop` — The usage of [Evolution][] was only temporary and for test purposes regarding the compatibility with different protocols that could be used through opt-in extensions, but [Thunderbird][] was never replaced as main mail application. - `gpick.desktop` — The main reason for a user-level launcher was the missing `MimeType` entry for `` which has already been [patched in the upstream][thezbyg/gpick-blob-gpick.desktop]. Anyway, _Gpick_ is also not used anymore since most design related applications include tools to pick colors, like e.g. the [_Firefox_'s _Eyedropper_][mdn-tools-eyedropper] or of course [GIMP][]. - `gtkhash.desktop` — [GTKHash][] is also not used anymore, the user-level launcher was introduced back then to add more keywords for supported hash algorithms. - `jetbrains-ide.desktop` — The user-level launcher was introduced because the used icon name `intellij-idea-ultimate-edition` was not provided by the used icon the (_Numix Circle_) so it was changed to the available `idea` icon. Anyway, a _symlink_ was added a long time ago in the icon theme upstream and therefore the custom launcher is not required anymore. - `org.gnome.gedit.desktop` — The user-level launcher was introduced to add more _MIME_ types that should be handled by [Gedit][], this was then resolved shortly afterwards using the correct way through [_XDG MIME_ type handling][archw-xdg_mime_apps]. - `shotwell-viewer.desktop` — The user-level launcher was introduced in order to hide the `shotwell-viewer` application, that is not intended to be called as standalone application, using the `NoDisplay` attribute. 
Anyway, the launcher is now [hidden by default in the upstream][gnome/shotwell-blob-shortwell-viewer] and therefore doesn't require a custom launcher anymore. [archw-xdg_mime_apps]: https://wiki.archlinux.org/index.php/XDG_MIME_Applications [atom]: https://atom.io [evolution]: https://wiki.gnome.org/Apps/Evolution [gedit]: https://wiki.gnome.org/Apps/Gedit [gimp]: https://www.gimp.org [gnome/shotwell-blob-shortwell-viewer]: https://gitlab.gnome.org/GNOME/shotwell/-/blob/ca03ce2f8e70670d43be00e9f381f9cd22afbceb/data/org.gnome.Shotwell-Viewer.desktop.in#L9 [gtkhash]: https://github.com/tristanheaven/gtkhash [mdn-tools-eyedropper]: https://developer.mozilla.org/en-US/docs/Tools/Eyedropper [thezbyg/gpick-blob-gpick.desktop]: https://github.com/thezbyg/gpick/blob/master/share/applications/gpick.desktop [thunderbird]: https://www.thunderbird.net [vscode]: https://code.visualstudio.com
index: True
XDG application desktop file cleanup - There are `.desktop` files for applications that are not used anymore for some time now as well as applications that don't require a user-level launcher file anymore. - `atom.desktop` — [Atom][] is not used anymore since at least February 23 2019 and has been replaced by [Visual Studio Code][vscode] in #179. - `evolution.desktop` — The usage of [Evolution][] was only temporary and for test purposes regarding the compatibility with different protocols that could be used through opt-in extensions, but [Thunderbird][] was never replaced as main mail application. - `gpick.desktop` — The main reason for a user-level launcher was the missing `MimeType` entry for `` which has already been [patched in the upstream][thezbyg/gpick-blob-gpick.desktop]. Anyway, _Gpick_ is also not used anymore since most design related applications include tools to pick colors, like e.g. the [_Firefox_'s _Eyedropper_][mdn-tools-eyedropper] or of course [GIMP][]. - `gtkhash.desktop` — [GTKHash][] is also not used anymore, the user-level launcher was introduced back then to add more keywords for supported hash algorithms. - `jetbrains-ide.desktop` — The user-level launcher was introduced because the used icon name `intellij-idea-ultimate-edition` was not provided by the used icon the (_Numix Circle_) so it was changed to the available `idea` icon. Anyway, a _symlink_ was added a long time ago in the icon theme upstream and therefore the custom launcher is not required anymore. - `org.gnome.gedit.desktop` — The user-level launcher was introduced to add more _MIME_ types that should be handled by [Gedit][], this was then resolved shortly afterwards using the correct way through [_XDG MIME_ type handling][archw-xdg_mime_apps]. - `shotwell-viewer.desktop` — The user-level launcher was introduced in order to hide the `shotwell-viewer` application, that is not intended to be called as standalone application, using the `NoDisplay` attribute. 
Anyway, the launcher is now [hidden by default in the upstream][gnome/shotwell-blob-shortwell-viewer] and therefore doesn't require a custom launcher anymore. [archw-xdg_mime_apps]: https://wiki.archlinux.org/index.php/XDG_MIME_Applications [atom]: https://atom.io [evolution]: https://wiki.gnome.org/Apps/Evolution [gedit]: https://wiki.gnome.org/Apps/Gedit [gimp]: https://www.gimp.org [gnome/shotwell-blob-shortwell-viewer]: https://gitlab.gnome.org/GNOME/shotwell/-/blob/ca03ce2f8e70670d43be00e9f381f9cd22afbceb/data/org.gnome.Shotwell-Viewer.desktop.in#L9 [gtkhash]: https://github.com/tristanheaven/gtkhash [mdn-tools-eyedropper]: https://developer.mozilla.org/en-US/docs/Tools/Eyedropper [thezbyg/gpick-blob-gpick.desktop]: https://github.com/thezbyg/gpick/blob/master/share/applications/gpick.desktop [thunderbird]: https://www.thunderbird.net [vscode]: https://code.visualstudio.com
label: main
xdg application desktop file cleanup there are desktop files for applications that are not used anymore for some time now as well as applications that don t require a user level launcher file anymore atom desktop — is not used anymore since at least february and has been replaced by in evolution desktop — the usage of was only temporary and for test purposes regarding the compatibility with different protocols that could be used through opt in extensions but was never replaced as main mail application gpick desktop — the main reason for a user level launcher was the missing mimetype entry for which has already been anyway gpick is also not used anymore since most design related applications include tools to pick colors like e g the or of course gtkhash desktop — is also not used anymore the user level launcher was introduced back then to add more keywords for supported hash algorithms jetbrains ide desktop — the user level launcher was introduced because the used icon name intellij idea ultimate edition was not provided by the used icon the numix circle so it was changed to the available idea icon anyway a symlink was added a long time ago in the icon theme upstream and therefore the custom launcher is not required anymore org gnome gedit desktop — the user level launcher was introduced to add more mime types that should be handled by this was then resolved shortly afterwards using the correct way through shotwell viewer desktop — the user level launcher was introduced in order to hide the shotwell viewer application that is not intended to be called as standalone application using the nodisplay attribute anyway the launcher is now and therefore doesn t require a custom launcher anymore
binary_label: 1
Unnamed: 0: 98,070
id: 12,289,945,114
type: IssuesEvent
created_at: 2020-05-10 00:30:41
repo: mjd4686/WiFi-Router-Hyper-Simulator-X3000
repo_url: https://api.github.com/repos/mjd4686/WiFi-Router-Hyper-Simulator-X3000
action: closed
title: Develop a basic level
labels: Level Design enhancement
Lay out a basic office environment consisting of all 3 tiers of routers (including the III beacons), along with walls and many of the voxel assets we downloaded. Make an issue assigned to @JakeCalkins if you need an asset to be more "voxel-ized". This is a trivial task in Blender.
index: 1.0
Develop a basic level - Lay out a basic office environment consisting of all 3 tiers of routers (including the III beacons), along with walls and many of the voxel assets we downloaded. Make an issue assigned to @JakeCalkins if you need an asset to be more "voxel-ized". This is a trivial task in Blender.
label: non_main
develop a basic level lay out a basic office environment consisting of all tiers of routers including the iii beacons along with walls and many of the voxel assets we downloaded make an issue assigned to jakecalkins if you need an asset to be more voxel ized this is a trivial task in blender
binary_label: 0
Unnamed: 0: 181,318
id: 14,015,528,211
type: IssuesEvent
created_at: 2020-10-29 13:26:46
repo: elastic/elasticsearch
repo_url: https://api.github.com/repos/elastic/elasticsearch
action: closed
title: CachedBlobContainerIndexInputTests.testRandomReads failure
labels: :Distributed/Snapshot/Restore >test-failure Team:Distributed v7.11.0
**Build scan**: https://gradle-enterprise.elastic.co/s/tiqw5pjr4tiwk **Repro line**: ``` ./gradlew ':x-pack:plugin:searchable-snapshots:test' --tests "org.elasticsearch.index.store.cache.CachedBlobContainerIndexInputTests.testRandomReads" -Dtests.seed=9EFCEB28234D764F -Dtests.security.manager=true -Dtests.locale=en-AU -Dtests.timezone=Pacific/Auckland -Druntime.java=8 ``` **Reproduces locally?**: No. **Applicable branches**: At least 7.x, that is where it failed on #64160 **Failure history**: Looks like it only failed [once](https://build-stats.elastic.co/app/kibana#/discover?_g=(refreshInterval:(pause:!t,value:0),time:(from:now-30d,mode:quick,to:now))&_a=(columns:!(_source),index:b646ed00-7efc-11e8-bf69-63c8ef516157,interval:auto,query:(language:lucene,query:testRandomReads),sort:!(process.time-start,desc))), though there is also a failure of the test from September 3rd (looks like an unrelated failure). **Failure excerpt**: ``` java.lang.AssertionError: All bytes should have been read from source Expected: <20522L> but: was <20115L> at __randomizedtesting.SeedInfo.seed([9EFCEB28234D764F:ABFE49DBC9E1D265]:0) at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18) at org.junit.Assert.assertThat(Assert.java:956) at org.elasticsearch.index.store.cache.CachedBlobContainerIndexInputTests.testRandomReads(CachedBlobContainerIndexInputTests.java:150) ```
index: 1.0
CachedBlobContainerIndexInputTests.testRandomReads failure - **Build scan**: https://gradle-enterprise.elastic.co/s/tiqw5pjr4tiwk **Repro line**: ``` ./gradlew ':x-pack:plugin:searchable-snapshots:test' --tests "org.elasticsearch.index.store.cache.CachedBlobContainerIndexInputTests.testRandomReads" -Dtests.seed=9EFCEB28234D764F -Dtests.security.manager=true -Dtests.locale=en-AU -Dtests.timezone=Pacific/Auckland -Druntime.java=8 ``` **Reproduces locally?**: No. **Applicable branches**: At least 7.x, that is where it failed on #64160 **Failure history**: Looks like it only failed [once](https://build-stats.elastic.co/app/kibana#/discover?_g=(refreshInterval:(pause:!t,value:0),time:(from:now-30d,mode:quick,to:now))&_a=(columns:!(_source),index:b646ed00-7efc-11e8-bf69-63c8ef516157,interval:auto,query:(language:lucene,query:testRandomReads),sort:!(process.time-start,desc))), though there is also a failure of the test from September 3rd (looks like an unrelated failure). **Failure excerpt**: ``` java.lang.AssertionError: All bytes should have been read from source Expected: <20522L> but: was <20115L> at __randomizedtesting.SeedInfo.seed([9EFCEB28234D764F:ABFE49DBC9E1D265]:0) at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18) at org.junit.Assert.assertThat(Assert.java:956) at org.elasticsearch.index.store.cache.CachedBlobContainerIndexInputTests.testRandomReads(CachedBlobContainerIndexInputTests.java:150) ```
label: non_main
cachedblobcontainerindexinputtests testrandomreads failure build scan repro line gradlew x pack plugin searchable snapshots test tests org elasticsearch index store cache cachedblobcontainerindexinputtests testrandomreads dtests seed dtests security manager true dtests locale en au dtests timezone pacific auckland druntime java reproduces locally no applicable branches at least x that is where it failed on failure history looks like it only failed though there is also a failure of the test from september looks like an unrelated failure failure excerpt java lang assertionerror all bytes should have been read from source expected but was at randomizedtesting seedinfo seed at org hamcrest matcherassert assertthat matcherassert java at org junit assert assertthat assert java at org elasticsearch index store cache cachedblobcontainerindexinputtests testrandomreads cachedblobcontainerindexinputtests java
binary_label: 0
Unnamed: 0: 1,552
id: 6,572,249,425
type: IssuesEvent
created_at: 2017-09-11 00:35:53
repo: ansible/ansible-modules-extras
repo_url: https://api.github.com/repos/ansible/ansible-modules-extras
action: closed
title: Unsupported parameter for module: shrink in lvol module
labels: affects_2.1 bug_report waiting_on_maintainer
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME lvol ##### ANSIBLE VERSION ansible 2.1.0.0 ##### CONFIGURATION [ssh_connection] ssh_args = -F ssh.config -o ForwardAgent=yes -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no scp_if_ssh = True control_path = ~/.ssh/mux-%%r@%%h:%%p pipelining = True [defaults] host_key_checking = False timeout = 30 transport = ssh ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host} [localhost] localhost ansible_connection=local ##### OS / ENVIRONMENT From: Mac OS 10.10.5 To: Ubuntu 14.04 ##### SUMMARY There is a pre-existing lvol, running ansible fails with the error mentioned on the title. ##### STEPS TO REPRODUCE ``` - name: Create a logical volume the size of all remaining space in the volume group lvol: vg=test lv=test size=100%FREE shrink=no ``` ##### EXPECTED RESULTS It just works without changing the volume if it exists ##### ACTUAL RESULTS ``` TASK [lvm : Create a logical volume the size of all remaining space in the volume group] *** task path: /Users/christopher/Projects/ansible/ansible/roles/lvm/tasks/main.yml:10 <10.0.0.13> ESTABLISH SSH CONNECTION FOR USER: root <10.0.0.13> SSH: EXEC ssh -C -q -F ssh.config -o ForwardAgent=yes -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 10.0.0.13 '/bin/sh -c '"'"'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python && sleep 0'"'"'' fatal: [REDACTED]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"lv": "mysql_vol", "shrink": "no", "size": "+100%FREE", "vg": "mysql_grp"}, "module_name": "lvol"}, "msg": "unsupported parameter for module: shrink"} ```
index: True
Unsupported parameter for module: shrink in lvol module - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME lvol ##### ANSIBLE VERSION ansible 2.1.0.0 ##### CONFIGURATION [ssh_connection] ssh_args = -F ssh.config -o ForwardAgent=yes -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no scp_if_ssh = True control_path = ~/.ssh/mux-%%r@%%h:%%p pipelining = True [defaults] host_key_checking = False timeout = 30 transport = ssh ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host} [localhost] localhost ansible_connection=local ##### OS / ENVIRONMENT From: Mac OS 10.10.5 To: Ubuntu 14.04 ##### SUMMARY There is a pre-existing lvol, running ansible fails with the error mentioned on the title. ##### STEPS TO REPRODUCE ``` - name: Create a logical volume the size of all remaining space in the volume group lvol: vg=test lv=test size=100%FREE shrink=no ``` ##### EXPECTED RESULTS It just works without changing the volume if it exists ##### ACTUAL RESULTS ``` TASK [lvm : Create a logical volume the size of all remaining space in the volume group] *** task path: /Users/christopher/Projects/ansible/ansible/roles/lvm/tasks/main.yml:10 <10.0.0.13> ESTABLISH SSH CONNECTION FOR USER: root <10.0.0.13> SSH: EXEC ssh -C -q -F ssh.config -o ForwardAgent=yes -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 10.0.0.13 '/bin/sh -c '"'"'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python && sleep 0'"'"'' fatal: [REDACTED]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"lv": "mysql_vol", "shrink": "no", "size": "+100%FREE", "vg": "mysql_grp"}, "module_name": "lvol"}, "msg": "unsupported parameter for module: shrink"} ```
main
unsupported parameter for module shrink in lvol module issue type bug report component name lvol ansible version ansible configuration ssh args f ssh config o forwardagent yes o userknownhostsfile dev null o stricthostkeychecking no scp if ssh true control path ssh mux r h p pipelining true host key checking false timeout transport ssh ansible managed ansible managed file modified on y m d h m s by uid on host localhost ansible connection local os environment from mac os to ubuntu summary there is a pre existing lvol running ansible fails with the error mentioned on the title steps to reproduce name create a logical volume the size of all remaining space in the volume group lvol vg test lv test size free shrink no expected results it just works without changing the volume if it exists actual results task task path users christopher projects ansible ansible roles lvm tasks main yml establish ssh connection for user root ssh exec ssh c q f ssh config o forwardagent yes o userknownhostsfile dev null o stricthostkeychecking no o stricthostkeychecking no o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python sleep fatal failed changed false failed true invocation module args lv mysql vol shrink no size free vg mysql grp module name lvol msg unsupported parameter for module shrink
1
3,425
13,182,208,463
IssuesEvent
2020-08-12 15:25:47
duo-labs/cloudmapper
https://api.github.com/repos/duo-labs/cloudmapper
closed
Regarding filtering by tags
unmaintained_functionality
Hi, I have been using the Cloudmapper tool for my AWS account for a while. Recently, I have trying to use the feature of filtering by tags, but am not getting the output I expected. When I try to run the prepare script by filtering for a single tag (python3 cloudmapper.py prepare --no-read-replicas --regions us-east-1 --account aws-jdf-apps-devl --config /opt/cloudmapper/src/config.json 0 --tags component:isu ), the cloudmapper tool only recognizes the EC2's that have the exsiting tag and not any other resources like Lambda and S3. I have tried changing the **return pyjq.all(".Tags[]", self._json_blob)** lines in nodes.py file to **return pyjq.all(".Tags[]?", self._json_blob)**, but that does not fix my issue. Can someone tell me what I need to do the exisiting code in order to solve my problem?
True
Regarding filtering by tags - Hi, I have been using the Cloudmapper tool for my AWS account for a while. Recently, I have trying to use the feature of filtering by tags, but am not getting the output I expected. When I try to run the prepare script by filtering for a single tag (python3 cloudmapper.py prepare --no-read-replicas --regions us-east-1 --account aws-jdf-apps-devl --config /opt/cloudmapper/src/config.json 0 --tags component:isu ), the cloudmapper tool only recognizes the EC2's that have the exsiting tag and not any other resources like Lambda and S3. I have tried changing the **return pyjq.all(".Tags[]", self._json_blob)** lines in nodes.py file to **return pyjq.all(".Tags[]?", self._json_blob)**, but that does not fix my issue. Can someone tell me what I need to do the exisiting code in order to solve my problem?
main
regarding filtering by tags hi i have been using the cloudmapper tool for my aws account for a while recently i have trying to use the feature of filtering by tags but am not getting the output i expected when i try to run the prepare script by filtering for a single tag cloudmapper py prepare no read replicas regions us east account aws jdf apps devl config opt cloudmapper src config json tags component isu the cloudmapper tool only recognizes the s that have the exsiting tag and not any other resources like lambda and i have tried changing the return pyjq all tags self json blob lines in nodes py file to return pyjq all tags self json blob but that does not fix my issue can someone tell me what i need to do the exisiting code in order to solve my problem
1
228,425
18,216,642,031
IssuesEvent
2021-09-30 05:45:26
litmuschaos/litmus
https://api.github.com/repos/litmuschaos/litmus
closed
(chaos-exporter): Add Unit Test Cases for function contains()
good first issue Hacktoberfest area/chaos-exporter kind/unit-test
<!-- This form is for bug reports and feature requests ONLY! --> <!-- Thanks for filing an issue! Before hitting the button, please answer these questions.--> ## UNIT TEST **What happened**: - Write the unit-test **What you expected to happen**: - The test case should be written for function **contains()** - https://github.com/litmuschaos/chaos-exporter/blob/master/cmd/exporter/main.go **Anything else we need to know?**: - Issue related to Repository: https://github.com/litmuschaos/chaos-exporter - Reference:- https://github.com/litmuschaos/chaos-operator/blob/master/pkg/controller/chaosengine/chaosengine_controller_test.go
1.0
(chaos-exporter): Add Unit Test Cases for function contains() - <!-- This form is for bug reports and feature requests ONLY! --> <!-- Thanks for filing an issue! Before hitting the button, please answer these questions.--> ## UNIT TEST **What happened**: - Write the unit-test **What you expected to happen**: - The test case should be written for function **contains()** - https://github.com/litmuschaos/chaos-exporter/blob/master/cmd/exporter/main.go **Anything else we need to know?**: - Issue related to Repository: https://github.com/litmuschaos/chaos-exporter - Reference:- https://github.com/litmuschaos/chaos-operator/blob/master/pkg/controller/chaosengine/chaosengine_controller_test.go
non_main
chaos exporter add unit test cases for function contains unit test what happened write the unit test what you expected to happen the test case should be written for function contains anything else we need to know issue related to repository reference
0
5,469
19,685,600,029
IssuesEvent
2022-01-11 21:44:45
bcgov/api-services-portal
https://api.github.com/repos/bcgov/api-services-portal
closed
Client Credential Test: test API with client ID and secret
automation
Need to test that the client credential flow is actually applied to the product and functions as expected. **Test Steps** 1. Make API call using client credentials and secret (TODO: elaborate instructions)
1.0
Client Credential Test: test API with client ID and secret - Need to test that the client credential flow is actually applied to the product and functions as expected. **Test Steps** 1. Make API call using client credentials and secret (TODO: elaborate instructions)
non_main
client credential test test api with client id and secret need to test that the client credential flow is actually applied to the product and functions as expected test steps make api call using client credentials and secret todo elaborate instructions
0
4,360
22,056,632,403
IssuesEvent
2022-05-30 13:27:04
Homebrew/homebrew-core
https://api.github.com/repos/Homebrew/homebrew-core
closed
[Regression] libaom/aomenc segfaults when aq-mode is 2 and tiles are not square
bug help wanted maintainer feedback
### `brew gist-logs <formula>` link OR `brew config` AND `brew doctor` output ```shell N/A this issue is to notify Homebrew project of a regression in 3.1.2 and some earlier versions of libaom/aomenc. ArchLinux Bug Report: https://bugs.archlinux.org/task/71800 Potential Fixes: git cherry-pick -n 31257f59a1df72cbbd1399efb780d13a0e433b16 Or the following patch can be applied to 3.1.2: https://bugs.archlinux.org/task/71800?getfile=20619 ``` ### - [X] I ran `brew update` and am still able to reproduce my issue. - [X] I have resolved all warnings from `brew doctor` and that did not fix my problem. ### What were you trying to do (and why)? Encode a av1 video with aq-mode 2 and non-square tiles ### What happened (include all command output)? libaom segfaults. ### What did you expect to happen? The video should encode without the encoder crashing. ### Step-by-step reproduction instructions (by running `brew` commands) ```shell brew install aom encode a video with non-square tiles and aq-mode set to 2. ```
True
[Regression] libaom/aomenc segfaults when aq-mode is 2 and tiles are not square - ### `brew gist-logs <formula>` link OR `brew config` AND `brew doctor` output ```shell N/A this issue is to notify Homebrew project of a regression in 3.1.2 and some earlier versions of libaom/aomenc. ArchLinux Bug Report: https://bugs.archlinux.org/task/71800 Potential Fixes: git cherry-pick -n 31257f59a1df72cbbd1399efb780d13a0e433b16 Or the following patch can be applied to 3.1.2: https://bugs.archlinux.org/task/71800?getfile=20619 ``` ### - [X] I ran `brew update` and am still able to reproduce my issue. - [X] I have resolved all warnings from `brew doctor` and that did not fix my problem. ### What were you trying to do (and why)? Encode a av1 video with aq-mode 2 and non-square tiles ### What happened (include all command output)? libaom segfaults. ### What did you expect to happen? The video should encode without the encoder crashing. ### Step-by-step reproduction instructions (by running `brew` commands) ```shell brew install aom encode a video with non-square tiles and aq-mode set to 2. ```
main
libaom aomenc segfaults when aq mode is and tiles are not square brew gist logs link or brew config and brew doctor output shell n a this issue is to notify homebrew project of a regression in and some earlier versions of libaom aomenc archlinux bug report potential fixes git cherry pick n or the following patch can be applied to i ran brew update and am still able to reproduce my issue i have resolved all warnings from brew doctor and that did not fix my problem what were you trying to do and why encode a video with aq mode and non square tiles what happened include all command output libaom segfaults what did you expect to happen the video should encode without the encoder crashing step by step reproduction instructions by running brew commands shell brew install aom encode a video with non square tiles and aq mode set to
1
211,220
23,805,525,528
IssuesEvent
2022-09-04 01:03:20
rgordon95/advanced-react-demo
https://api.github.com/repos/rgordon95/advanced-react-demo
opened
WS-2022-0280 (Medium) detected in moment-timezone-0.5.25.tgz
security vulnerability
## WS-2022-0280 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>moment-timezone-0.5.25.tgz</b></p></summary> <p>Parse and display moments in any timezone.</p> <p>Library home page: <a href="https://registry.npmjs.org/moment-timezone/-/moment-timezone-0.5.25.tgz">https://registry.npmjs.org/moment-timezone/-/moment-timezone-0.5.25.tgz</a></p> <p>Path to dependency file: /advanced-react-demo/package.json</p> <p>Path to vulnerable library: /node_modules/moment-timezone/package.json</p> <p> Dependency Hierarchy: - pm2-2.5.0.tgz (Root Library) - cron-1.2.1.tgz - :x: **moment-timezone-0.5.25.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Command Injection in moment-timezone before 0.5.35. <p>Publish Date: 2022-08-30 <p>URL: <a href=https://github.com/moment/moment-timezone/commit/ce955a301ff372e8e9fb3a5b516620c60e7a082a>WS-2022-0280</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-56x4-j7p9-fcf9">https://github.com/advisories/GHSA-56x4-j7p9-fcf9</a></p> <p>Release Date: 2022-08-30</p> <p>Fix Resolution: moment-timezone - 0.5.35</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
WS-2022-0280 (Medium) detected in moment-timezone-0.5.25.tgz - ## WS-2022-0280 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>moment-timezone-0.5.25.tgz</b></p></summary> <p>Parse and display moments in any timezone.</p> <p>Library home page: <a href="https://registry.npmjs.org/moment-timezone/-/moment-timezone-0.5.25.tgz">https://registry.npmjs.org/moment-timezone/-/moment-timezone-0.5.25.tgz</a></p> <p>Path to dependency file: /advanced-react-demo/package.json</p> <p>Path to vulnerable library: /node_modules/moment-timezone/package.json</p> <p> Dependency Hierarchy: - pm2-2.5.0.tgz (Root Library) - cron-1.2.1.tgz - :x: **moment-timezone-0.5.25.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Command Injection in moment-timezone before 0.5.35. <p>Publish Date: 2022-08-30 <p>URL: <a href=https://github.com/moment/moment-timezone/commit/ce955a301ff372e8e9fb3a5b516620c60e7a082a>WS-2022-0280</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-56x4-j7p9-fcf9">https://github.com/advisories/GHSA-56x4-j7p9-fcf9</a></p> <p>Release Date: 2022-08-30</p> <p>Fix Resolution: moment-timezone - 0.5.35</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_main
ws medium detected in moment timezone tgz ws medium severity vulnerability vulnerable library moment timezone tgz parse and display moments in any timezone library home page a href path to dependency file advanced react demo package json path to vulnerable library node modules moment timezone package json dependency hierarchy tgz root library cron tgz x moment timezone tgz vulnerable library vulnerability details command injection in moment timezone before publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution moment timezone step up your open source security game with mend
0
242,050
7,837,193,909
IssuesEvent
2018-06-18 04:10:45
ankidroid/Anki-Android
https://api.github.com/repos/ankidroid/Anki-Android
closed
"Sort Field" in browser incorrectly converts values like "3E2" into "300"
Priority-Medium accepted bug
Originally reported on Google Code with ID 1757 ``` What steps will reproduce the problem? 1. Front card: type 3E1, 3E2 or any like these 2. Goto list view 3. 3E1, 3E2, etc. are not shown, but only its math. interpretation like 30, 30000.... --> 3E1 is an acupuncture point in German, in English called TE1 (Triple Heater 1) What is the expected output? What do you see instead? expected: 3E1; output: 30 --> can't be searched Does it happen again every time you repeat the steps above? Or did it happen only one time? everytime What version of AnkiDroid are you using? (Decks list > menu > About > Look at the title) On what version of Android? (Home screen > menu > About phone > Android version) --> ver 2.3.6; Samsung Galaxy Advance Please provide any additional information below. In Anki for Mac everything is shown correctly! ``` Reported by `m.gasperl` on 2013-05-10 08:24:06
1.0
"Sort Field" in browser incorrectly converts values like "3E2" into "300" - Originally reported on Google Code with ID 1757 ``` What steps will reproduce the problem? 1. Front card: type 3E1, 3E2 or any like these 2. Goto list view 3. 3E1, 3E2, etc. are not shown, but only its math. interpretation like 30, 30000.... --> 3E1 is an acupuncture point in German, in English called TE1 (Triple Heater 1) What is the expected output? What do you see instead? expected: 3E1; output: 30 --> can't be searched Does it happen again every time you repeat the steps above? Or did it happen only one time? everytime What version of AnkiDroid are you using? (Decks list > menu > About > Look at the title) On what version of Android? (Home screen > menu > About phone > Android version) --> ver 2.3.6; Samsung Galaxy Advance Please provide any additional information below. In Anki for Mac everything is shown correctly! ``` Reported by `m.gasperl` on 2013-05-10 08:24:06
non_main
sort field in browser incorrectly converts values like into originally reported on google code with id what steps will reproduce the problem front card type or any like these goto list view etc are not shown but only its math interpretation like is an acupuncture point in german in english called triple heater what is the expected output what do you see instead expected output can t be searched does it happen again every time you repeat the steps above or did it happen only one time everytime what version of ankidroid are you using decks list menu about look at the title on what version of android home screen menu about phone android version ver samsung galaxy advance please provide any additional information below in anki for mac everything is shown correctly reported by m gasperl on
0
223
2,891,208,624
IssuesEvent
2015-06-15 01:51:16
gama-platform/gama
https://api.github.com/repos/gama-platform/gama
closed
create species from: number; GAMA behavior has changed from 1.6.1 to 1.7
> Bug Affects Maintainability Affects Usability Concerns GAML OS All Version Git
In a model, I have: create people from: 1; I am aware this is a missuse of the create statement. Nevertheless, in GAMA 1.6.1, this statement created a people agent. In GAMA 1.7.1, this does not create anything. A warning or exception should be used to warn the modeler of this behavior. Benoit
True
create species from: number; GAMA behavior has changed from 1.6.1 to 1.7 - In a model, I have: create people from: 1; I am aware this is a missuse of the create statement. Nevertheless, in GAMA 1.6.1, this statement created a people agent. In GAMA 1.7.1, this does not create anything. A warning or exception should be used to warn the modeler of this behavior. Benoit
main
create species from number gama behavior has changed from to in a model i have create people from i am aware this is a missuse of the create statement nevertheless in gama this statement created a people agent in gama this does not create anything a warning or exception should be used to warn the modeler of this behavior benoit
1
200,475
22,781,830,154
IssuesEvent
2022-07-08 20:49:12
billmcchesney1/hadoop
https://api.github.com/repos/billmcchesney1/hadoop
opened
CVE-2022-33980 (Medium) detected in commons-configuration2-2.1.1.jar
security vulnerability
## CVE-2022-33980 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-configuration2-2.1.1.jar</b></p></summary> <p>Tools to assist in the reading of configuration/preferences files in various formats</p> <p>Library home page: <a href="http://commons.apache.org/proper/commons-configuration/">http://commons.apache.org/proper/commons-configuration/</a></p> <p>Path to vulnerable library: /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-common/target/lib/commons-configuration2-2.1.1.jar</p> <p> Dependency Hierarchy: - :x: **commons-configuration2-2.1.1.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/billmcchesney1/hadoop/commit/6dcd8400219941dcbd7fb0f6b980cc2c6a2a6b0a">6dcd8400219941dcbd7fb0f6b980cc2c6a2a6b0a</a></p> <p>Found in base branch: <b>trunk</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Apache Commons Configuration performs variable interpolation, allowing properties to be dynamically evaluated and expanded. The standard format for interpolation is "${prefix:name}", where "prefix" is used to locate an instance of org.apache.commons.configuration2.interpol.Lookup that performs the interpolation. Starting with version 2.4 and continuing through 2.7, the set of default Lookup instances included interpolators that could result in arbitrary code execution or contact with remote servers. These lookups are: - "script" - execute expressions using the JVM script execution engine (javax.script) - "dns" - resolve dns records - "url" - load values from urls, including from remote servers Applications using the interpolation defaults in the affected versions may be vulnerable to remote code execution or unintentional contact with remote servers if untrusted configuration values are used. Users are recommended to upgrade to Apache Commons Configuration 2.8.0, which disables the problematic interpolators by default. <p>Publish Date: 2022-07-06 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-33980>CVE-2022-33980</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://lists.apache.org/thread/tdf5n7j80lfxdhs2764vn0xmpfodm87s">https://lists.apache.org/thread/tdf5n7j80lfxdhs2764vn0xmpfodm87s</a></p> <p>Release Date: 2022-07-06</p> <p>Fix Resolution: org.apache.commons:commons-configuration2:2.8.0</p> </p> </details> <p></p>
True
CVE-2022-33980 (Medium) detected in commons-configuration2-2.1.1.jar - ## CVE-2022-33980 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-configuration2-2.1.1.jar</b></p></summary> <p>Tools to assist in the reading of configuration/preferences files in various formats</p> <p>Library home page: <a href="http://commons.apache.org/proper/commons-configuration/">http://commons.apache.org/proper/commons-configuration/</a></p> <p>Path to vulnerable library: /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-common/target/lib/commons-configuration2-2.1.1.jar</p> <p> Dependency Hierarchy: - :x: **commons-configuration2-2.1.1.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/billmcchesney1/hadoop/commit/6dcd8400219941dcbd7fb0f6b980cc2c6a2a6b0a">6dcd8400219941dcbd7fb0f6b980cc2c6a2a6b0a</a></p> <p>Found in base branch: <b>trunk</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Apache Commons Configuration performs variable interpolation, allowing properties to be dynamically evaluated and expanded. The standard format for interpolation is "${prefix:name}", where "prefix" is used to locate an instance of org.apache.commons.configuration2.interpol.Lookup that performs the interpolation. Starting with version 2.4 and continuing through 2.7, the set of default Lookup instances included interpolators that could result in arbitrary code execution or contact with remote servers. These lookups are: - "script" - execute expressions using the JVM script execution engine (javax.script) - "dns" - resolve dns records - "url" - load values from urls, including from remote servers Applications using the interpolation defaults in the affected versions may be vulnerable to remote code execution or unintentional contact with remote servers if untrusted configuration values are used. Users are recommended to upgrade to Apache Commons Configuration 2.8.0, which disables the problematic interpolators by default. <p>Publish Date: 2022-07-06 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-33980>CVE-2022-33980</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://lists.apache.org/thread/tdf5n7j80lfxdhs2764vn0xmpfodm87s">https://lists.apache.org/thread/tdf5n7j80lfxdhs2764vn0xmpfodm87s</a></p> <p>Release Date: 2022-07-06</p> <p>Fix Resolution: org.apache.commons:commons-configuration2:2.8.0</p> </p> </details> <p></p>
non_main
cve medium detected in commons jar cve medium severity vulnerability vulnerable library commons jar tools to assist in the reading of configuration preferences files in various formats library home page a href path to vulnerable library hadoop yarn project hadoop yarn hadoop yarn server hadoop yarn server timelineservice hbase hadoop yarn server timelineservice hbase common target lib commons jar dependency hierarchy x commons jar vulnerable library found in head commit a href found in base branch trunk vulnerability details apache commons configuration performs variable interpolation allowing properties to be dynamically evaluated and expanded the standard format for interpolation is prefix name where prefix is used to locate an instance of org apache commons interpol lookup that performs the interpolation starting with version and continuing through the set of default lookup instances included interpolators that could result in arbitrary code execution or contact with remote servers these lookups are script execute expressions using the jvm script execution engine javax script dns resolve dns records url load values from urls including from remote servers applications using the interpolation defaults in the affected versions may be vulnerable to remote code execution or unintentional contact with remote servers if untrusted configuration values are used users are recommended to upgrade to apache commons configuration which disables the problematic interpolators by default publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache commons commons
0
3,081
11,694,741,250
IssuesEvent
2020-03-06 05:15:11
short-d/short
https://api.github.com/repos/short-d/short
closed
[Refactor] Remove KeyGenerator interface
Go maintainability
**What is the problem?** Currently NewKey() under KeyGenerator is abstracted out through interface. **Your solution** Replace the unnecessary interface with concrete implementation.
True
[Refactor] Remove KeyGenerator interface - **What is the problem?** Currently NewKey() under KeyGenerator is abstracted out through interface. **Your solution** Replace the unnecessary interface with concrete implementation.
main
remove keygenerator interface what is the problem currently newkey under keygenerator is abstracted out through interface your solution replace the unnecessary interface with concrete implementation
1
385,215
26,624,657,884
IssuesEvent
2023-01-24 13:47:05
gogins/csound-extended-node
https://api.github.com/repos/gogins/csound-extended-node
closed
Universal binary for macOS
documentation enhancement
This is done by using cmake-js and the npm way. I need to clean it up and document it.
1.0
Universal binary for macOS - This is done by using cmake-js and the npm way. I need to clean it up and document it.
non_main
universal binary for macos this is done by using cmake js and the npm way i need to clean it up and document it
0
157,375
24,661,349,068
IssuesEvent
2022-10-18 06:57:29
tj-heat/signed-explorations
https://api.github.com/repos/tj-heat/signed-explorations
closed
End screen design
designer
Discovery: System status for when puzzle is complete. Companion will need to provide feedback (e.g. he will pull a level, do a ritual?). Game will need to provide feedback for level complete, think Mario touching a star; sound visuals etc.)
1.0
End screen design - Discovery: System status for when puzzle is complete. Companion will need to provide feedback (e.g. he will pull a level, do a ritual?). Game will need to provide feedback for level complete, think Mario touching a star; sound visuals etc.)
non_main
end screen design discovery system status for when puzzle is complete companion will need to provide feedback e g he will pull a level do a ritual game will need to provide feedback for level complete think mario touching a star sound visuals etc
0
34,387
6,329,005,135
IssuesEvent
2017-07-26 01:00:03
test-kitchen/test-kitchen
https://api.github.com/repos/test-kitchen/test-kitchen
closed
Reference-Style Documentation for Kitchen file
Documentation
I often wish I could give someone a URL, or simply browse a page myself, to a _reference_ for the .kitchen.yml . Not a guide or tutorial, but an exhaustive guide to what options are permitted or expected. I've been told much of this information is available from 'kitchen diagnose', I'm looking to have it accessible on the web. Granted, plugins may extend what may be permitted in many places.
1.0
Reference-Style Documentation for Kitchen file - I often wish I could give someone a URL, or simply browse a page myself, to a _reference_ for the .kitchen.yml. Not a guide or tutorial, but an exhaustive guide to what options are permitted or expected. I've been told much of this information is available from 'kitchen diagnose'; I'm looking to have it accessible on the web. Granted, plugins may extend what may be permitted in many places.
non_main
reference style documentation for kitchen file i often wish i could give someone a url or simply browse a page myself to a reference for the kitchen yml not a guide or tutorial but an exhaustive guide to what options are permitted or expected i ve been told much of this information is available from kitchen diagnose i m looking to have it accessible on the web granted plugins may extend what may be permitted in many places
0
53,413
7,838,191,565
IssuesEvent
2018-06-18 09:22:57
CartoDB/carto-vl
https://api.github.com/repos/CartoDB/carto-vl
opened
Document BaseExpression.eval
documentation
The `eval` method is key to making legend creation easy for developers.
1.0
Document BaseExpression.eval - The `eval` method is key to making legend creation easy for developers.
non_main
document baseexpression eval the eval method is key to make legend making easy for developers
0
582,517
17,363,014,968
IssuesEvent
2021-07-30 00:40:25
googleapis/python-spanner
https://api.github.com/repos/googleapis/python-spanner
reopened
tests.system.test_system_dbapi.TestTransactionsManagement: test_rollback_on_connection_closing failed
api: spanner flakybot: flaky flakybot: issue priority: p1 type: process
This test failed! To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot). If I'm commenting on this issue too often, add the `flakybot: quiet` label and I will stop commenting. --- commit: 2487800e31842a44dcc37937c325e130c8c926b0 buildURL: [Build Status](https://source.cloud.google.com/results/invocations/306a2e02-87cb-4be9-be31-37456ec7a8a2), [Sponge](http://sponge2/306a2e02-87cb-4be9-be31-37456ec7a8a2) status: failed <details><summary>Test output</summary><br><pre>args = (parent: "projects/precise-truck-742" instance_id: "google-cloud-1627550679627" instance { name: "projects/precise-t...1627550946" } labels { key: "python-spanner-dbapi-systests" value: "true" } processing_units: 1000 } ,) kwargs = {'metadata': [('google-cloud-resource-prefix', 'projects/precise-truck-742/instances/google-cloud-1627550679627'), ('x...ms', 'parent=projects/precise-truck-742'), ('x-goog-api-client', 'gl-python/3.8.6 grpc/1.39.0 gax/1.31.1 gccl/3.6.0')]} @six.wraps(callable_) def error_remapped_callable(*args, **kwargs): try: > return callable_(*args, **kwargs) .nox/system-3-8/lib/python3.8/site-packages/google/api_core/grpc_helpers.py:67: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <grpc._channel._UnaryUnaryMultiCallable object at 0x7fe7942014f0> request = parent: "projects/precise-truck-742" instance_id: "google-cloud-1627550679627" instance { name: "projects/precise-tr... 
"1627550946" } labels { key: "python-spanner-dbapi-systests" value: "true" } processing_units: 1000 } timeout = None metadata = [('google-cloud-resource-prefix', 'projects/precise-truck-742/instances/google-cloud-1627550679627'), ('x-goog-request...ams', 'parent=projects/precise-truck-742'), ('x-goog-api-client', 'gl-python/3.8.6 grpc/1.39.0 gax/1.31.1 gccl/3.6.0')] credentials = None, wait_for_ready = None, compression = None def __call__(self, request, timeout=None, metadata=None, credentials=None, wait_for_ready=None, compression=None): state, call, = self._blocking(request, timeout, metadata, credentials, wait_for_ready, compression) > return _end_unary_response_blocking(state, call, False, None) .nox/system-3-8/lib/python3.8/site-packages/grpc/_channel.py:946: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ state = <grpc._channel._RPCState object at 0x7fe78e5b66d0> call = <grpc._cython.cygrpc.SegregatedCall object at 0x7fe79445bc00> with_call = False, deadline = None def _end_unary_response_blocking(state, call, with_call, deadline): if state.code is grpc.StatusCode.OK: if with_call: rendezvous = _MultiThreadedRendezvous(state, call, None, deadline) return state.response, rendezvous else: return state.response else: > raise _InactiveRpcError(state) E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: E status = StatusCode.ALREADY_EXISTS E details = "Instance already exists: projects/precise-truck-742/instances/google-cloud-1627550679627" E debug_error_string = "{"created":"@1627550946.587652143","description":"Error received from peer ipv4:74.125.195.95:443","file":"src/core/lib/surface/call.cc","file_line":1069,"grpc_message":"Instance already exists: projects/precise-truck-742/instances/google-cloud-1627550679627","grpc_status":6}" E > .nox/system-3-8/lib/python3.8/site-packages/grpc/_channel.py:849: _InactiveRpcError The above exception was the direct cause of the following exception: def 
setUpModule(): if USE_EMULATOR: from google.auth.credentials import AnonymousCredentials emulator_project = os.getenv("GCLOUD_PROJECT", "emulator-test-project") Config.CLIENT = Client( project=emulator_project, credentials=AnonymousCredentials() ) else: Config.CLIENT = Client() retry = RetryErrors(exceptions.ServiceUnavailable) configs = list(retry(Config.CLIENT.list_instance_configs)()) instances = retry(_list_instances)() EXISTING_INSTANCES[:] = instances # Delete test instances that are older than an hour. cutoff = int(time.time()) - 1 * 60 * 60 for instance_pb in Config.CLIENT.list_instances( "labels.python-spanner-dbapi-systests:true" ): instance = Instance.from_pb(instance_pb, Config.CLIENT) if "created" not in instance.labels: continue create_time = int(instance.labels["created"]) if create_time > cutoff: continue # Instance cannot be deleted while backups exist. for backup_pb in instance.list_backups(): backup = Backup.from_pb(backup_pb, instance) backup.delete() instance.delete() if CREATE_INSTANCE: if not USE_EMULATOR: # Defend against back-end returning configs for regions we aren't # actually allowed to use. 
configs = [config for config in configs if "-us-" in config.name] if not configs: raise ValueError("List instance configs failed in module set up.") Config.INSTANCE_CONFIG = configs[0] config_name = configs[0].name create_time = str(int(time.time())) labels = {"python-spanner-dbapi-systests": "true", "created": create_time} Config.INSTANCE = Config.CLIENT.instance( INSTANCE_ID, config_name, labels=labels ) > created_op = Config.INSTANCE.create() tests/system/test_system_dbapi.py:98: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ google/cloud/spanner_v1/instance.py:318: in create future = api.create_instance( google/cloud/spanner_admin_instance_v1/services/instance_admin/client.py:829: in create_instance response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,) .nox/system-3-8/lib/python3.8/site-packages/google/api_core/gapic_v1/method.py:145: in __call__ return wrapped_func(*args, **kwargs) .nox/system-3-8/lib/python3.8/site-packages/google/api_core/grpc_helpers.py:69: in error_remapped_callable six.raise_from(exceptions.from_grpc_error(exc), exc) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ value = None from_value = <_InactiveRpcError of RPC that terminated with: status = StatusCode.ALREADY_EXISTS details = "Instance already exist...message":"Instance already exists: projects/precise-truck-742/instances/google-cloud-1627550679627","grpc_status":6}" > > ??? E google.api_core.exceptions.AlreadyExists: 409 Instance already exists: projects/precise-truck-742/instances/google-cloud-1627550679627 <string>:3: AlreadyExists</pre></details>
1.0
tests.system.test_system_dbapi.TestTransactionsManagement: test_rollback_on_connection_closing failed - This test failed! To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot). If I'm commenting on this issue too often, add the `flakybot: quiet` label and I will stop commenting. --- commit: 2487800e31842a44dcc37937c325e130c8c926b0 buildURL: [Build Status](https://source.cloud.google.com/results/invocations/306a2e02-87cb-4be9-be31-37456ec7a8a2), [Sponge](http://sponge2/306a2e02-87cb-4be9-be31-37456ec7a8a2) status: failed <details><summary>Test output</summary><br><pre>args = (parent: "projects/precise-truck-742" instance_id: "google-cloud-1627550679627" instance { name: "projects/precise-t...1627550946" } labels { key: "python-spanner-dbapi-systests" value: "true" } processing_units: 1000 } ,) kwargs = {'metadata': [('google-cloud-resource-prefix', 'projects/precise-truck-742/instances/google-cloud-1627550679627'), ('x...ms', 'parent=projects/precise-truck-742'), ('x-goog-api-client', 'gl-python/3.8.6 grpc/1.39.0 gax/1.31.1 gccl/3.6.0')]} @six.wraps(callable_) def error_remapped_callable(*args, **kwargs): try: > return callable_(*args, **kwargs) .nox/system-3-8/lib/python3.8/site-packages/google/api_core/grpc_helpers.py:67: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <grpc._channel._UnaryUnaryMultiCallable object at 0x7fe7942014f0> request = parent: "projects/precise-truck-742" instance_id: "google-cloud-1627550679627" instance { name: "projects/precise-tr... 
"1627550946" } labels { key: "python-spanner-dbapi-systests" value: "true" } processing_units: 1000 } timeout = None metadata = [('google-cloud-resource-prefix', 'projects/precise-truck-742/instances/google-cloud-1627550679627'), ('x-goog-request...ams', 'parent=projects/precise-truck-742'), ('x-goog-api-client', 'gl-python/3.8.6 grpc/1.39.0 gax/1.31.1 gccl/3.6.0')] credentials = None, wait_for_ready = None, compression = None def __call__(self, request, timeout=None, metadata=None, credentials=None, wait_for_ready=None, compression=None): state, call, = self._blocking(request, timeout, metadata, credentials, wait_for_ready, compression) > return _end_unary_response_blocking(state, call, False, None) .nox/system-3-8/lib/python3.8/site-packages/grpc/_channel.py:946: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ state = <grpc._channel._RPCState object at 0x7fe78e5b66d0> call = <grpc._cython.cygrpc.SegregatedCall object at 0x7fe79445bc00> with_call = False, deadline = None def _end_unary_response_blocking(state, call, with_call, deadline): if state.code is grpc.StatusCode.OK: if with_call: rendezvous = _MultiThreadedRendezvous(state, call, None, deadline) return state.response, rendezvous else: return state.response else: > raise _InactiveRpcError(state) E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: E status = StatusCode.ALREADY_EXISTS E details = "Instance already exists: projects/precise-truck-742/instances/google-cloud-1627550679627" E debug_error_string = "{"created":"@1627550946.587652143","description":"Error received from peer ipv4:74.125.195.95:443","file":"src/core/lib/surface/call.cc","file_line":1069,"grpc_message":"Instance already exists: projects/precise-truck-742/instances/google-cloud-1627550679627","grpc_status":6}" E > .nox/system-3-8/lib/python3.8/site-packages/grpc/_channel.py:849: _InactiveRpcError The above exception was the direct cause of the following exception: def 
setUpModule(): if USE_EMULATOR: from google.auth.credentials import AnonymousCredentials emulator_project = os.getenv("GCLOUD_PROJECT", "emulator-test-project") Config.CLIENT = Client( project=emulator_project, credentials=AnonymousCredentials() ) else: Config.CLIENT = Client() retry = RetryErrors(exceptions.ServiceUnavailable) configs = list(retry(Config.CLIENT.list_instance_configs)()) instances = retry(_list_instances)() EXISTING_INSTANCES[:] = instances # Delete test instances that are older than an hour. cutoff = int(time.time()) - 1 * 60 * 60 for instance_pb in Config.CLIENT.list_instances( "labels.python-spanner-dbapi-systests:true" ): instance = Instance.from_pb(instance_pb, Config.CLIENT) if "created" not in instance.labels: continue create_time = int(instance.labels["created"]) if create_time > cutoff: continue # Instance cannot be deleted while backups exist. for backup_pb in instance.list_backups(): backup = Backup.from_pb(backup_pb, instance) backup.delete() instance.delete() if CREATE_INSTANCE: if not USE_EMULATOR: # Defend against back-end returning configs for regions we aren't # actually allowed to use. 
configs = [config for config in configs if "-us-" in config.name] if not configs: raise ValueError("List instance configs failed in module set up.") Config.INSTANCE_CONFIG = configs[0] config_name = configs[0].name create_time = str(int(time.time())) labels = {"python-spanner-dbapi-systests": "true", "created": create_time} Config.INSTANCE = Config.CLIENT.instance( INSTANCE_ID, config_name, labels=labels ) > created_op = Config.INSTANCE.create() tests/system/test_system_dbapi.py:98: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ google/cloud/spanner_v1/instance.py:318: in create future = api.create_instance( google/cloud/spanner_admin_instance_v1/services/instance_admin/client.py:829: in create_instance response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,) .nox/system-3-8/lib/python3.8/site-packages/google/api_core/gapic_v1/method.py:145: in __call__ return wrapped_func(*args, **kwargs) .nox/system-3-8/lib/python3.8/site-packages/google/api_core/grpc_helpers.py:69: in error_remapped_callable six.raise_from(exceptions.from_grpc_error(exc), exc) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ value = None from_value = <_InactiveRpcError of RPC that terminated with: status = StatusCode.ALREADY_EXISTS details = "Instance already exist...message":"Instance already exists: projects/precise-truck-742/instances/google-cloud-1627550679627","grpc_status":6}" > > ??? E google.api_core.exceptions.AlreadyExists: 409 Instance already exists: projects/precise-truck-742/instances/google-cloud-1627550679627 <string>:3: AlreadyExists</pre></details>
non_main
tests system test system dbapi testtransactionsmanagement test rollback on connection closing failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output args parent projects precise truck instance id google cloud instance name projects precise t labels key python spanner dbapi systests value true processing units kwargs metadata six wraps callable def error remapped callable args kwargs try return callable args kwargs nox system lib site packages google api core grpc helpers py self request parent projects precise truck instance id google cloud instance name projects precise tr labels key python spanner dbapi systests value true processing units timeout none metadata credentials none wait for ready none compression none def call self request timeout none metadata none credentials none wait for ready none compression none state call self blocking request timeout metadata credentials wait for ready compression return end unary response blocking state call false none nox system lib site packages grpc channel py state call with call false deadline none def end unary response blocking state call with call deadline if state code is grpc statuscode ok if with call rendezvous multithreadedrendezvous state call none deadline return state response rendezvous else return state response else raise inactiverpcerror state e grpc channel inactiverpcerror inactiverpcerror of rpc that terminated with e status statuscode already exists e details instance already exists projects precise truck instances google cloud e debug error string created description error received from peer file src core lib surface call cc file line grpc message instance already exists projects precise truck instances google cloud grpc status e nox system lib site packages grpc channel py inactiverpcerror the above exception was the direct cause of the following exception def 
setupmodule if use emulator from google auth credentials import anonymouscredentials emulator project os getenv gcloud project emulator test project config client client project emulator project credentials anonymouscredentials else config client client retry retryerrors exceptions serviceunavailable configs list retry config client list instance configs instances retry list instances existing instances instances delete test instances that are older than an hour cutoff int time time for instance pb in config client list instances labels python spanner dbapi systests true instance instance from pb instance pb config client if created not in instance labels continue create time int instance labels if create time cutoff continue instance cannot be deleted while backups exist for backup pb in instance list backups backup backup from pb backup pb instance backup delete instance delete if create instance if not use emulator defend against back end returning configs for regions we aren t actually allowed to use configs if not configs raise valueerror list instance configs failed in module set up config instance config configs config name configs name create time str int time time labels python spanner dbapi systests true created create time config instance config client instance instance id config name labels labels created op config instance create tests system test system dbapi py google cloud spanner instance py in create future api create instance google cloud spanner admin instance services instance admin client py in create instance response rpc request retry retry timeout timeout metadata metadata nox system lib site packages google api core gapic method py in call return wrapped func args kwargs nox system lib site packages google api core grpc helpers py in error remapped callable six raise from exceptions from grpc error exc exc value none from value inactiverpcerror of rpc that terminated with status statuscode already exists details instance already exist 
message instance already exists projects precise truck instances google cloud grpc status e google api core exceptions alreadyexists instance already exists projects precise truck instances google cloud alreadyexists
0
4,675
24,168,788,527
IssuesEvent
2022-09-22 17:14:52
centerofci/mathesar
https://api.github.com/repos/centerofci/mathesar
closed
Data Explorer frontend - Demo video readiness
work: frontend status: ready restricted: maintainers type: meta
The following checklist is a set of bugs and enhancements for Data Explorer's demo video readiness - [x] Use RPC style endpoint, add Save button - [x] Decide and implement default summarization when user navigates from grouped table result to Data Explorer, showing record summaries - [x] Default summarization should show count Related meeting notes: [Record summaries in data explorer](https://wiki.mathesar.org/en/meeting-notes/2022-09#h-2022-09-02-record-summaries-in-data-explorer)
True
Data Explorer frontend - Demo video readiness - The following checklist is a set of bugs and enhancements for Data Explorer's demo video readiness - [x] Use RPC style endpoint, add Save button - [x] Decide and implement default summarization when user navigates from grouped table result to Data Explorer, showing record summaries - [x] Default summarization should show count Related meeting notes: [Record summaries in data explorer](https://wiki.mathesar.org/en/meeting-notes/2022-09#h-2022-09-02-record-summaries-in-data-explorer)
main
data explorer frontend demo video readiness the following checklist is a set of bugs and enhancements for data explorer s demo video readiness use rpc style endpoint add save button decide and implement default summarization when user navigates from grouped table result to data explorer showing record summaries default summarization should show count related meeting notes
1
3,226
12,368,706,053
IssuesEvent
2020-05-18 14:13:28
Kashdeya/Tiny-Progressions
https://api.github.com/repos/Kashdeya/Tiny-Progressions
closed
{Exploit} Vanilla Armor Stand Interaction
Version not Maintainted bug
Confirmed on latest version with only this mod installed. Swapping full armor set with dragon armor set allows for creative flight without set being worn. Very cheaty.
True
{Exploit} Vanilla Armor Stand Interaction - Confirmed on latest version with only this mod installed. Swapping full armor set with dragon armor set allows for creative flight without set being worn. Very cheaty.
main
exploit vanilla armor stand interaction confirmed on latest version with only this mod installed swapping full armor set with dragon armor set allows for creative flight without set being worn very cheaty
1
453
3,385,358,186
IssuesEvent
2015-11-27 11:01:40
openETCS/toolchain
https://api.github.com/repos/openETCS/toolchain
closed
Review of traceability Architecture (ends 12-Nov-2015)
US-Traceabiliy-Architecture
Here are my comments on the document linked to #504:
- § 1.1 and Fig 2: in the figure, functional, HW, procedural, ... requirements are mixed at the top level (for example from the User stories or the Cenelec standard), and all seem to be derived down to SW level (I understand that only the specification and design of the SW appear in the figure, not the Validation). But I think that many of the initial requirements cannot be derived onto SW, only onto other activities (quality or project plan, Validation, ...) or subsystems (HW, API, ...); how is it planned to take these exported requirements into account? >> Agree. "Derive" is not the right general term for all the arrows. Changed figure 1 and used "transform" instead of "derive", and better explained that initial requirements are transformed to subsystem level and then to HW or SW or data or procedures. I improved fig 2 with better alignment on EN 50128:2011 and used only the term "input for" for relations between artefacts at this stage of the document. >> V&V is not shown at this stage of the document. Added as a note.
- Some non-functional requirements can be introduced (or derived from Cenelec standards) in openETCS quality or project plans. >> Yes. Do you think we need to show quality and project plans for this document? Will those artefacts be traced to requirements?
- In fig 2 it seems there is a direct traceability between the SRS and Cenelec (orange arrow): I do not agree. >> Removed. I removed the initial arrows coming from the ISO 15288 vision and focused now on openETCS only. ISO 15288 was just a way to introduce engineering levels and help me understand the scope of the different requirements and models by asking partners their position in those levels.
- In the current state of the SRS it is difficult to explicitly define a traceability between this document and the stakeholder requirements. I consider the SRS to be midway between the stakeholder requirements and a real System specification; I will put it in parallel with Cenelec and the User stories. >> OK. Done.
- I think validation is missing in figs 1 and 2: many requirements cannot be derived down to SW code only, but will be linked to the different test or V&V phases. >> OK. Which openETCS document can I read to add the missing information?
- § 1.2 and Fig 4: it is necessary to clarify the data dictionary model and how it is defined (textual, SysML, Scade?), as a Scade representation of it is one of the SW models. >> OK. ToDo.
- § 2.2.1: Please give a clear definition of the meaning of the different arrows (for example, "refines" seems to correspond to a SysML definition which is very different from a classical formal definition of "refines"). Why is "Documentation" an activity? Why does "V&V" not "use" the requirement database? The meaning of the arrows is not clear to me, so I do not understand why there is no link between the System model and the requirement database, or between the functional model and the requirement database. The figure needs some comments, as it is not self-sufficient for those who are not used to these notations. >> Perfectly agree. I had almost the same remarks as you when reading this figure the first time, and I did not dare to remove it until now because it was not mine and because I thought it was "validated" after a previous review. As soon as I can express the traceability process through other diagrams that are easier to understand, I will remove this initial figure.
- § 2.2.2: This means we consider only functional requirements. The User stories, SRS, API and Cenelec are far from containing only functional requirements. >> Yes, because I wanted to focus on the Functional formal model, which seemed to be purely "functional". But I understand that this model is also behavioral and that we target an executable model, so it contains non-functional requirements. I will update this scenario with other non-functional requirements taken into account.
- Fig 7: I do not think that the "openETCS system designer" is in charge of all the actions. Typically, "trace model element to SRS" is done by the SW designer, "Create verification view" by a verifier, etc. >> OK. This was a "generic" term used to simplify the diagram (showing several actors would make it too large). I will use a more generic term and will specify the different possible roles according to the activities.
- § 1 and 2: Maybe it would be nice to have a look at the QA plan (WP1, https://github.com/openETCS/governance/blob/master/QA%20Plan/D1.3.1_QA_Plan.pdf), the definition plan (WP2, https://github.com/openETCS/requirements) and the safety plan (WP4, https://github.com/openETCS/validation/tree/master/Reports/D4.2) to have a better view of what would be expected at the beginning of the project. >> OK. Thanks for the references.
- § 3: OK for me.
- § 4.2.3: For the moment the tool is Scade Studio (v16.2). >> Mistake. Fixed.
- § 5: In the view of the openETCS toolchain, totally open, I agree with the left branch (ProR linked to Papyrus). However, in practice the SysML model has been made with Scade System, which contains an old version of Papyrus that is not really compatible with the one in the openETCS toolchain. In this case I'm not sure that ProR can be used at system level (which does not allow us to have an open-source tool for traceability!). >> OK. Will take that into account.
- § 5.1.2: How is the first sentence ("If the establishment.....") identified? Are we sure that we shall always split such a requirement into different sub-requirements with different IDs? Are we not going to lose information (for example, in this case, that ALL the sequence of actions shall be made in a given order)? >> This is initial text (I did not change it, assuming it was validated). I'll look at your point.
- § 5, 6 and 7: Three solutions are proposed. Why? Maybe an introduction is missing in the document to explain its contents and why 3 solutions are proposed. >> Well, that might be a question of document organization. The first version of the document mentioned one solution, and I understood that this traceability solution was far from being perfect, so I decided to investigate possible improvements through alternate solutions. If this document reflects what IS DONE in the project, then I must focus on the reality only and perhaps conclude the document with "current limits". In that case I can create another document that would be "proposals for improvements of traceability support by the tool chain".
- Some parts of some solutions are already implemented or largely analyzed (e.g. the link between ProR and Papyrus, the use of genDoc...); others seem to be just propositions. It would be nice to have a clear view of what exists and can be used right now, versus the other elements. >> OK. I will distinguish between existing (tested) solutions and ideas for improvements.
To be continued, depending on updates and comments.
1.0
Review of traceability Architecture (ends 12-Nov-2015) - Here are my comments on the document linked to #504:
- § 1.1 and Fig 2: in the figure, functional, HW, procedural, ... requirements are mixed at the top level (for example from the User stories or the Cenelec standard), and all seem to be derived down to SW level (I understand that only the specification and design of the SW appear in the figure, not the Validation). But I think that many of the initial requirements cannot be derived onto SW, only onto other activities (quality or project plan, Validation, ...) or subsystems (HW, API, ...); how is it planned to take these exported requirements into account? >> Agree. "Derive" is not the right general term for all the arrows. Changed figure 1 and used "transform" instead of "derive", and better explained that initial requirements are transformed to subsystem level and then to HW or SW or data or procedures. I improved fig 2 with better alignment on EN 50128:2011 and used only the term "input for" for relations between artefacts at this stage of the document. >> V&V is not shown at this stage of the document. Added as a note.
- Some non-functional requirements can be introduced (or derived from Cenelec standards) in openETCS quality or project plans. >> Yes. Do you think we need to show quality and project plans for this document? Will those artefacts be traced to requirements?
- In fig 2 it seems there is a direct traceability between the SRS and Cenelec (orange arrow): I do not agree. >> Removed. I removed the initial arrows coming from the ISO 15288 vision and focused now on openETCS only. ISO 15288 was just a way to introduce engineering levels and help me understand the scope of the different requirements and models by asking partners their position in those levels.
- In the current state of the SRS it is difficult to explicitly define a traceability between this document and the stakeholder requirements. I consider the SRS to be midway between the stakeholder requirements and a real System specification; I will put it in parallel with Cenelec and the User stories. >> OK. Done.
- I think validation is missing in figs 1 and 2: many requirements cannot be derived down to SW code only, but will be linked to the different test or V&V phases. >> OK. Which openETCS document can I read to add the missing information?
- § 1.2 and Fig 4: it is necessary to clarify the data dictionary model and how it is defined (textual, SysML, Scade?), as a Scade representation of it is one of the SW models. >> OK. ToDo.
- § 2.2.1: Please give a clear definition of the meaning of the different arrows (for example, "refines" seems to correspond to a SysML definition which is very different from a classical formal definition of "refines"). Why is "Documentation" an activity? Why does "V&V" not "use" the requirement database? The meaning of the arrows is not clear to me, so I do not understand why there is no link between the System model and the requirement database, or between the functional model and the requirement database. The figure needs some comments, as it is not self-sufficient for those who are not used to these notations. >> Perfectly agree. I had almost the same remarks as you when reading this figure the first time, and I did not dare to remove it until now because it was not mine and because I thought it was "validated" after a previous review. As soon as I can express the traceability process through other diagrams that are easier to understand, I will remove this initial figure.
- § 2.2.2: This means we consider only functional requirements. The User stories, SRS, API and Cenelec are far from containing only functional requirements. >> Yes, because I wanted to focus on the Functional formal model, which seemed to be purely "functional". But I understand that this model is also behavioral and that we target an executable model, so it contains non-functional requirements. I will update this scenario with other non-functional requirements taken into account.
- Fig 7: I do not think that the "openETCS system designer" is in charge of all the actions. Typically, "trace model element to SRS" is done by the SW designer, "Create verification view" by a verifier, etc. >> OK. This was a "generic" term used to simplify the diagram (showing several actors would make it too large). I will use a more generic term and will specify the different possible roles according to the activities.
- § 1 and 2: Maybe it would be nice to have a look at the QA plan (WP1, https://github.com/openETCS/governance/blob/master/QA%20Plan/D1.3.1_QA_Plan.pdf), the definition plan (WP2, https://github.com/openETCS/requirements) and the safety plan (WP4, https://github.com/openETCS/validation/tree/master/Reports/D4.2) to have a better view of what would be expected at the beginning of the project. >> OK. Thanks for the references.
- § 3: OK for me.
- § 4.2.3: For the moment the tool is Scade Studio (v16.2). >> Mistake. Fixed.
- § 5: In the view of the openETCS toolchain, totally open, I agree with the left branch (ProR linked to Papyrus). However, in practice the SysML model has been made with Scade System, which contains an old version of Papyrus that is not really compatible with the one in the openETCS toolchain. In this case I'm not sure that ProR can be used at system level (which does not allow us to have an open-source tool for traceability!). >> OK. Will take that into account.
- § 5.1.2: How is the first sentence ("If the establishment.....") identified? Are we sure that we shall always split such a requirement into different sub-requirements with different IDs? Are we not going to lose information (for example, in this case, that ALL the sequence of actions shall be made in a given order)? >> This is initial text (I did not change it, assuming it was validated). I'll look at your point.
- § 5, 6 and 7: Three solutions are proposed. Why? Maybe an introduction is missing in the document to explain its contents and why 3 solutions are proposed. >> Well, that might be a question of document organization. The first version of the document mentioned one solution, and I understood that this traceability solution was far from being perfect, so I decided to investigate possible improvements through alternate solutions. If this document reflects what IS DONE in the project, then I must focus on the reality only and perhaps conclude the document with "current limits". In that case I can create another document that would be "proposals for improvements of traceability support by the tool chain".
- Some parts of some solutions are already implemented or largely analyzed (e.g. the link between ProR and Papyrus, the use of genDoc...); others seem to be just propositions. It would be nice to have a clear view of what exists and can be used right now, versus the other elements. >> OK. I will distinguish between existing (tested) solutions and ideas for improvements.
To be continued, depending on updates and comments.
non_main
review of tracability architecture ends nov here my comments on the document linked to § and fig in the figure are mixed functionnal hw procedural requirements at the top level for example from user stories or cenelec standard and all seems to be derived up to sw level i understand that only specification and design of sw appear on the figure not the validation but i think that lots of the initial requirements can not be derived on sw but on other activities quality or project plan validation or subsystems hw api how it is plan to take into account these exported requirements agree derive is not the right general term for all the arrows changed figure used transform instead of derive and better explained that initial requirements are transformed to subsystem and then hw or sw or data or procedures i improved fig with better alignement on en and used only term input for for relations between artefacts at this stage of the document v v not shown at this stage of the document added as a note some non functional requirements can be introduced or derived from cenelec standards in openetcs quality or project plans yes do you think we need to show quality and project plans for this document will those artefacts be traced to requirements in the fig it seems there is a direct traceability between srs and cenelec orange arrow i am not agree removed i removed initial arrows coming from iso vision and focused now on openetcs only was just a way to introduce engineering levels and help me understanding scope of different requirements and models by asking partners the position in those levels in the current state of srs it is difficult to explicitly defined a traceability between this document and stakeholders requirements i consider more the srs in midway between stakeholders requirement and a real system specification i will put it in parallel of cenelec and user stories ok done i think validation are missing in fig and lots of requirements can not be derived up to sw code 
only but will be link to the different test or v v phases ok which openetcs document can i read to add missing information § and it is necessary to clarify the data dictionary model and how it is defined textual sysml scade as a scade representation of it is one of the sw model ok todo § please give clearly definition of the mining of the different arrows for example refines seems to correspond to a sysml definition which is very different from a classical formal definition of refines why documentation is an activity why v v do not use the requirement database meaning of the arrows are not clear for me so i do not understand why there are no linked between system model and requirement database or functional model and requirement data base the figure need some comments as it is not self sufficient for those who are not used of these notations perfectly agree i had almost same remarks than you when reading this figure the first time and i did not dare to remove it until now because it was not mine and because i thought it was validated after a previous review as soon as i can express the traceability process through other diagrams easier to understand i will remove this initial figure § this means we consider only functional requirements user stories srs api or cenelec are far to contain only functional requirements yes because i wanted to focus on functional formal model that seemed to be functional but i understand that this model is also behavioral and that we target an executable model so containing non functional requirements will update this scenario with other non functional requirements taken into account fig i do not think that the openetcs system designer is in charge of all the actions typically trace model element to srs is made by sw designer create verification view by a verificator ok this was a generic term used to simplify diagram showing several actors would make it too large i will use a more generic term and will precise the different possible 
roles according to activities § and maybe it will be nice to have a look on qa plan definition plan and safety plan to have a better view of what would be expected at the beginning of the project ok thanks for the reference § ok for me § for the moment the tool is scade studio mistake fixed § in the view of the openetcs toolchain totally open i am agree with the left branch pror linked to papyrus however in practice the sysml model has been made with scade system which contains an old version of papyrus not really compatible with the one in openetcs toolchain in this case i am not sure that pror can be used at system level which do not allow us to have an open source tool for traceability ok will take that into account § how is identify the first sentence if the establishment are we sure that we shall always share such a requirement in different sub requirements with different id are we not going to lost information for example in this case that all the sequence of actions shall be made in a given order this is initial text i did not change that assuming that it was validated i ll look at your point § and three solutions are proposed why maybe an introduction in the document is missing to explain its contents and why solutions are proposed well that might be a question of document organization first version of document mentioned first solution and i understood that this traceability solution was far from being perfect so i have decided to investigate on possible improvements through alternate solutions if this document reflects what is done in the project then i must focus on the reality only and perhaps conclude the document with current limits in that case i can create another document that would be proposals for improvements of traceability support by the tool chain some parts of some solutions are already implemented or largely analyzed eg link between pror and payprus use of gendoc other seems just propositions it will be nice to have a clear view of what 
exists and can be used right now and other elements ok i will distinguish between existing tested solutions and ideas for improvements to continue depending updating and comments
0
370,337
10,928,173,853
IssuesEvent
2019-11-22 18:25:06
Javacord/Javacord
https://api.github.com/repos/Javacord/Javacord
closed
Prevent hitting the websocket ratelimits
audio bug high priority resolved
Javacord should automatically throttle websocket packets to prevent hitting the 120/60 ratelimit. At the moment it's nearly impossible to hit this ratelimit, but it might become a problem once Javacord has audio support.
1.0
Prevent hitting the websocket ratelimits - Javacord should automatically throttle websocket packets to prevent hitting the 120/60 ratelimit. At the moment it's nearly impossible to hit this ratelimit, but it might become a problem once Javacord has audio support.
non_main
prevent hitting the websocket ratelimits javacord should automatically throttle websocket packets to prevent hitting the ratelimit at the moment it s nearly impossible to hit this ratelimit but it might become a problem once javacord has audio support
0
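The Javacord record above asks for automatic throttling of websocket packets to stay under the 120-per-60-seconds gateway ratelimit. A minimal sliding-window throttle sketch (illustrative only; this is not Javacord's actual implementation, and the class/method names are hypothetical):

```python
import time
from collections import deque

class SlidingWindowThrottle:
    """Allow at most `limit` sends per `window` seconds (e.g. 120 per 60s)."""

    def __init__(self, limit=120, window=60.0, clock=time.monotonic):
        self.limit = limit
        self.window = window
        self.clock = clock          # injectable clock, handy for testing
        self.sent = deque()         # timestamps of recent sends

    def delay_before_send(self):
        """Seconds to wait before the next send is allowed (0.0 if none)."""
        now = self.clock()
        # Drop timestamps that have aged out of the window.
        while self.sent and now - self.sent[0] >= self.window:
            self.sent.popleft()
        if len(self.sent) < self.limit:
            return 0.0
        # The oldest recorded send must leave the window first.
        return self.window - (now - self.sent[0])

    def record_send(self):
        self.sent.append(self.clock())
```

A gateway client built on this sketch would call `delay_before_send()`, sleep that long, then `record_send()` before each websocket write, so bursts of queued packets are smoothed out instead of tripping the limit.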
3,408
13,181,844,107
IssuesEvent
2020-08-12 14:54:35
duo-labs/cloudmapper
https://api.github.com/repos/duo-labs/cloudmapper
closed
Create virtual tags for filtering
map unmaintained_functionality
Now that we can filter by tag, I'd like to have virtual tags created. Specifically, I'd like to be able to filter by resource type, so to do that, I could have a virtual tag created automatically for that. To avoid collisions, I'll call it `__resource_type`
True
Create virtual tags for filtering - Now that we can filter by tag, I'd like to have virtual tags created. Specifically, I'd like to be able to filter by resource type, so to do that, I could have a virtual tag created automatically for that. To avoid collisions, I'll call it `__resource_type`
main
create virtual tags for filtering now that we can filter by tag i d like to have virtual tags created specifically i d like to be able to filter by resource type so to do that i could have a virtual tag created automatically for that to avoid collisions i ll call it resource type
1
3,679
15,037,108,195
IssuesEvent
2021-02-02 15:59:03
IITIDIDX597/sp_2021_team1
https://api.github.com/repos/IITIDIDX597/sp_2021_team1
opened
Most frequently annotated content (analyst side)
Epic: 4 Personal control of information Epic: 5 Maintaining the system Story Week 3
**Project Goal:** S Lab is a tailored integrative learning and collaboration platform for clinicians that combines the latest research and tacit knowledge gained from experience in a practical way, while at the same time foster deeper learning experiences in order to deliver better AbilityLab Patient care. **Hill Statement:** Individual Clinicians can reference relevant, continuously evolving information for their patient's therapy needs to self-manage their approach & patient care plan development in a single platform. **Sub-Hill Statements:** 1. Clinicians can choose what information and knowledge is more relevant to their patient's needs, and have the ability to highlight, annotate, and save that specific information to their personal folder for reference in the future ### **Story Details:** As a: Analyst I want: to see what is most frequently highlighted / annotated by clinicians So that: I can compare use patterns and find efficiencies
True
Most frequently annotated content (analyst side) - **Project Goal:** S Lab is a tailored integrative learning and collaboration platform for clinicians that combines the latest research and tacit knowledge gained from experience in a practical way, while at the same time foster deeper learning experiences in order to deliver better AbilityLab Patient care. **Hill Statement:** Individual Clinicians can reference relevant, continuously evolving information for their patient's therapy needs to self-manage their approach & patient care plan development in a single platform. **Sub-Hill Statements:** 1. Clinicians can choose what information and knowledge is more relevant to their patient's needs, and have the ability to highlight, annotate, and save that specific information to their personal folder for reference in the future ### **Story Details:** As a: Analyst I want: to see what is most frequently highlighted / annotated by clinicians So that: I can compare use patterns and find efficiencies
main
most frequently annotated content analyst side project goal s lab is a tailored integrative learning and collaboration platform for clinicians that combines the latest research and tacit knowledge gained from experience in a practical way while at the same time foster deeper learning experiences in order to deliver better abilitylab patient care hill statement individual clinicians can reference relevant continuously evolving information for their patient s therapy needs to self manage their approach patient care plan development in a single platform sub hill statements clinicians can choose what information and knowledge is more relevant to their patient s needs and have the ability to highlight annotate and save that specific information to their personal folder for reference in the future story details as a analyst i want to see what is most frequently highlighted annotated by clinicians so that i can compare use patterns and find efficiencies
1
286,034
8,782,948,439
IssuesEvent
2018-12-20 02:55:38
craftercms/craftercms
https://api.github.com/repos/craftercms/craftercms
opened
[studio] Delete site in cluster must propagate
enhancement priority: medium
When deleting a site in a cluster, the Studio receiving the request deletes the site from disk, but other cluster members don't. Let's discuss options. One idea: - Mark site as deleted - Delete site from disk if it's marked as deleted in DB during the sync cycle Downside: can't re-create the site with the same name. The reason to keep the DB entry with that site name is: consider an old node that has sync'd in the past being brought back into the cluster, it will have the site on disk, but not in DB and it won't know. Also consider the same example with the node coming back online with an old git repo but the same site id re-created. One enhancement is to add a site_id that's a UUID instead of our site_id today, and having a site name. This implies sites can exist with the same name within one organization. Much to discuss.
1.0
[studio] Delete site in cluster must propagate - When deleting a site in a cluster, the Studio receiving the request deletes the site from disk, but other cluster members don't. Let's discuss options. One idea: - Mark site as deleted - Delete site from disk if it's marked as deleted in DB during the sync cycle Downside: can't re-create the site with the same name. The reason to keep the DB entry with that site name is: consider an old node that has sync'd in the past being brought back into the cluster, it will have the site on disk, but not in DB and it won't know. Also consider the same example with the node coming back online with an old git repo but the same site id re-created. One enhancement is to add a site_id that's a UUID instead of our site_id today, and having a site name. This implies sites can exist with the same name within one organization. Much to discuss.
non_main
delete site in cluster must propagate when deleting a site in a cluster the studio receiving the request deletes the site from disk but other cluster members don t let s discuss options one idea mark site as deleted delete site from disk if it s marked as deleted in db during the sync cycle downside can t re create the site with the same name the reason to keep the db entry with that site name is consider an old node that has sync d in the past being brought back into the cluster it will have the site on disk but not in db and it won t know also consider the same example with the node coming back online with an old git repo but the same site id re created one enhancement is to add a site id that s a uuid instead of our site id today and having a site name this implies sites can exist with the same name within one organization much to discuss
0
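The craftercms record above proposes marking a site as deleted in the DB and letting each cluster member remove its on-disk copy during the sync cycle. A hypothetical sketch of that idea (names and the dict-as-DB are illustrative, not craftercms APIs):

```python
import shutil
from pathlib import Path

def delete_site(db, site_id):
    """Soft-delete: only flag the DB row; leave the git repo on disk."""
    db[site_id]["deleted"] = True

def sync_cycle(db, repo_root):
    """Run on every cluster member: reconcile disk state with the DB.

    Any site flagged as deleted is removed from local disk, so the
    deletion propagates to nodes that missed the original request.
    """
    root = Path(repo_root)
    for site_id, row in db.items():
        site_dir = root / site_id
        if row["deleted"] and site_dir.exists():
            shutil.rmtree(site_dir)
```

Keeping the flagged row around is what prevents a stale node (or an old repo with the same site id) from resurrecting the site — at the cost, as the record notes, of not being able to reuse the name unless a UUID-style site_id is introduced alongside a display name.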
323,318
27,715,093,073
IssuesEvent
2023-03-14 16:26:09
pc2ccs/pc2v9
https://api.github.com/repos/pc2ccs/pc2v9
closed
Shadow unable to fetch files for submission(s)
bug high priority Shadow CLICS CCS Update NEXT Contest
**Describe the issue**: When the shadow attempts to fetch files the DOMJudge CCS API,fails to return the files (zip). The file is fetched by pc2 shadow using a URL of the form: http://<IPADDR>/domjudge/api/v4//contests/2/submissions/1/files That will fail, the shadow is configured with: http://<IPADDR>/domjudge/api/v4/ for the Primary CCS URL field The URL that will succeed is (This is per the CCS API Spec). http://<IPADDR>/domjudge/api/v4/contests/2/submissions/1/files Note that there is one slash after the /v4 in the path. Note that Kattis does fetch files given the URL **To Reproduce**: todo: State Version of DOM Judge todo: state version of pc2 Configure both DOM Judge and pc2 as shadow todo: configure DOM Judge * In Config, set Data Source field to Configuration Data External (the default is All Local, which will not output the latest CCS EF) todo:: configure pc2 as shadow, start shadowing **Expected behavior**: The files for the submission should be fetched and submitted to pc2 server. **Actual behavior**: todo assuming: Files not fetched and the submission could not be added to the pc2 server. 
See Log info below for example of exception/error message **Environment**: **Log Info**: upon failure to read API end point 220830 172111.227|INFO|RemoteEventFeedMonitorThread|log|Fetching files from remote system using id 1 220830 172111.246|WARNING|RemoteEventFeedMonitorThread|log|Exception processing event: {"id":"59","type":"submissions","op":"create","data":{"language_id":"cpp","time":"2022-08-30T16:28:01.626-04:00","contest_time":"14553:28:01.626","team_id":"4","problem_id":"3","id":"1","external_id":null,"entry_point":null,"files":[{"href":"contests/2/submissions/1/files","mime":"application/zip"}]},"time":"2022-08-30T16:28:01.680-04:00"} |java.lang.RuntimeException: java.io.FileNotFoundException: http://10.0.0.169/domjudge/api/v4/contests/2//submissions/1/files | at edu.csus.ecs.pc2.shadow.RemoteContestAPIAdapter.getRemoteSubmissionFiles (RemoteContestAPIAdapter.java:281) | at edu.csus.ecs.pc2.shadow.RemoteContestAPIAdapter.getRemoteSubmissionFiles (RemoteContestAPIAdapter.java:250) | at edu.csus.ecs.pc2.shadow.RemoteEventFeedMonitor.run (RemoteEventFeedMonitor.java:369) | at java.lang.Thread.run (Thread.java:750) **Screenshots**: **Additional context**:
1.0
Shadow unable to fetch files for submission(s) - **Describe the issue**: When the shadow attempts to fetch files the DOMJudge CCS API,fails to return the files (zip). The file is fetched by pc2 shadow using a URL of the form: http://<IPADDR>/domjudge/api/v4//contests/2/submissions/1/files That will fail, the shadow is configured with: http://<IPADDR>/domjudge/api/v4/ for the Primary CCS URL field The URL that will succeed is (This is per the CCS API Spec). http://<IPADDR>/domjudge/api/v4/contests/2/submissions/1/files Note that there is one slash after the /v4 in the path. Note that Kattis does fetch files given the URL **To Reproduce**: todo: State Version of DOM Judge todo: state version of pc2 Configure both DOM Judge and pc2 as shadow todo: configure DOM Judge * In Config, set Data Source field to Configuration Data External (the default is All Local, which will not output the latest CCS EF) todo:: configure pc2 as shadow, start shadowing **Expected behavior**: The files for the submission should be fetched and submitted to pc2 server. **Actual behavior**: todo assuming: Files not fetched and the submission could not be added to the pc2 server. 
See Log info below for example of exception/error message **Environment**: **Log Info**: upon failure to read API end point 220830 172111.227|INFO|RemoteEventFeedMonitorThread|log|Fetching files from remote system using id 1 220830 172111.246|WARNING|RemoteEventFeedMonitorThread|log|Exception processing event: {"id":"59","type":"submissions","op":"create","data":{"language_id":"cpp","time":"2022-08-30T16:28:01.626-04:00","contest_time":"14553:28:01.626","team_id":"4","problem_id":"3","id":"1","external_id":null,"entry_point":null,"files":[{"href":"contests/2/submissions/1/files","mime":"application/zip"}]},"time":"2022-08-30T16:28:01.680-04:00"} |java.lang.RuntimeException: java.io.FileNotFoundException: http://10.0.0.169/domjudge/api/v4/contests/2//submissions/1/files | at edu.csus.ecs.pc2.shadow.RemoteContestAPIAdapter.getRemoteSubmissionFiles (RemoteContestAPIAdapter.java:281) | at edu.csus.ecs.pc2.shadow.RemoteContestAPIAdapter.getRemoteSubmissionFiles (RemoteContestAPIAdapter.java:250) | at edu.csus.ecs.pc2.shadow.RemoteEventFeedMonitor.run (RemoteEventFeedMonitor.java:369) | at java.lang.Thread.run (Thread.java:750) **Screenshots**: **Additional context**:
non_main
shadow unable to fetch files for submission s describe the issue when the shadow attempts to fetch files the domjudge ccs api fails to return the files zip the file is fetched by shadow using a url of the form that will fail the shadow is configured with for the primary ccs url field the url that will succeed is this is per the ccs api spec note that there is one slash after the in the path note that kattis does fetch files given the url to reproduce todo state version of dom judge todo state version of configure both dom judge and as shadow todo configure dom judge in config set data source field to configuration data external the default is all local which will not output the latest ccs ef todo configure as shadow start shadowing expected behavior the files for the submission should be fetched and submitted to server actual behavior todo assuming files not fetched and the submission could not be added to the server see log info below for example of exception error message environment log info upon failure to read api end point info remoteeventfeedmonitorthread log fetching files from remote system using id warning remoteeventfeedmonitorthread log exception processing event id type submissions op create data language id cpp time contest time team id problem id id external id null entry point null files time java lang runtimeexception java io filenotfoundexception at edu csus ecs shadow remotecontestapiadapter getremotesubmissionfiles remotecontestapiadapter java at edu csus ecs shadow remotecontestapiadapter getremotesubmissionfiles remotecontestapiadapter java at edu csus ecs shadow remoteeventfeedmonitor run remoteeventfeedmonitor java at java lang thread run thread java screenshots additional context
0
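The pc2 shadow record above boils down to a URL-joining bug: concatenating a configured base URL and an endpoint path without normalizing slashes produces a `//` in the path, which the remote CCS API rejects with a 404/FileNotFound. A small sketch of the usual fix (the function name is hypothetical, not pc2 code):

```python
def join_api_url(base, path):
    """Join a base URL and an endpoint path with exactly one slash between,
    regardless of whether the base ends with '/' or the path starts with one."""
    return base.rstrip("/") + "/" + path.lstrip("/")
```

With this helper, a base of `.../api/v4/` and an href of `contests/2/submissions/1/files` (or `/contests/...`) both yield the single-slash form the CCS API specification expects.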
1,610
6,572,626,331
IssuesEvent
2017-09-11 03:52:17
ansible/ansible-modules-extras
https://api.github.com/repos/ansible/ansible-modules-extras
closed
Smart quotes in consul_session desc. block cause Ansible to error.
affects_2.2 bot_broken bug_report waiting_on_maintainer
<!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME consul_session ##### ANSIBLE VERSION ``` ansible 2.2.0 ``` ##### CONFIGURATION n/a ##### OS / ENVIRONMENT centos 7 ##### SUMMARY [Smart-quotes in module code description block](https://github.com/ansible/ansible-modules-extras/blob/devel/clustering/consul_session.py#L106) causes ansible to barf. ##### STEPS TO REPRODUCE <!--- Paste example playbooks or commands between quotes below --> ``` - name: Try to get or create session lock in consul consul_session: name: ucp_primary_lock datacenter: development state: present validate_certs: False register: ucp_consul_session_id ``` ##### EXPECTED RESULTS No errors relating to non-ASCII issue ##### ACTUAL RESULTS ``` {"changed": false, "failed": true, "module_stderr": " File \"/tmp/ansible_jP41R1/ansible_module_consul_session.py\", line 106\nSyntaxError: Non-ASCII character '\\xe2' in file /tmp/ansible_jP41R1/ansible_module_consul_session.py on line 107, but no encoding declared; see http://www.python.org/peps/pep-0263.html for details\n", "module_stdout": "", "msg": "MODULE FAILURE"} ```
True
Smart quotes in consul_session desc. block cause Ansible to error. - <!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME consul_session ##### ANSIBLE VERSION ``` ansible 2.2.0 ``` ##### CONFIGURATION n/a ##### OS / ENVIRONMENT centos 7 ##### SUMMARY [Smart-quotes in module code description block](https://github.com/ansible/ansible-modules-extras/blob/devel/clustering/consul_session.py#L106) causes ansible to barf. ##### STEPS TO REPRODUCE <!--- Paste example playbooks or commands between quotes below --> ``` - name: Try to get or create session lock in consul consul_session: name: ucp_primary_lock datacenter: development state: present validate_certs: False register: ucp_consul_session_id ``` ##### EXPECTED RESULTS No errors relating to non-ASCII issue ##### ACTUAL RESULTS ``` {"changed": false, "failed": true, "module_stderr": " File \"/tmp/ansible_jP41R1/ansible_module_consul_session.py\", line 106\nSyntaxError: Non-ASCII character '\\xe2' in file /tmp/ansible_jP41R1/ansible_module_consul_session.py on line 107, but no encoding declared; see http://www.python.org/peps/pep-0263.html for details\n", "module_stdout": "", "msg": "MODULE FAILURE"} ```
main
smart quotes in consul session desc block cause ansible to error issue type bug report component name consul session ansible version ansible configuration n a os environment centos summary causes ansible to barf steps to reproduce name try to get or create session lock in consul consul session name ucp primary lock datacenter development state present validate certs false register ucp consul session id expected results no errors relating to non ascii issue actual results changed false failed true module stderr file tmp ansible ansible module consul session py line nsyntaxerror non ascii character in file tmp ansible ansible module consul session py on line but no encoding declared see for details n module stdout msg module failure
1
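The ansible record above fails because Python 2 rejects non-ASCII bytes (the smart quote's leading `\xe2` byte) in a source file that has no PEP 263 encoding declaration. A small checker in that spirit — flag non-ASCII lines unless the first two lines declare a coding — written as an illustrative sketch, not the actual ansible tooling:

```python
import re

# PEP 263 declaration, e.g. "# -*- coding: utf-8 -*-", on line 1 or 2.
CODING_RE = re.compile(rb"coding[:=]\s*([-\w.]+)")

def non_ascii_lines(source_bytes):
    """Return 1-based line numbers containing non-ASCII bytes, unless the
    first two lines carry a PEP 263 encoding declaration (in which case
    Python 2 would accept the file and we return no findings)."""
    lines = source_bytes.splitlines()
    if any(CODING_RE.search(line) for line in lines[:2]):
        return []
    return [i for i, line in enumerate(lines, 1)
            if any(b > 127 for b in line)]
```

Running such a check over module sources would have caught the smart quotes in the consul_session description block before Python 2 ever parsed the file.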
575,328
17,027,560,299
IssuesEvent
2021-07-03 21:37:15
yairEO/tagify
https://api.github.com/repos/yairEO/tagify
closed
Single change event instead of multiple events with addTags
Bug: high priority
The addTags API results in calling change and add event multiple times once for each tag. Is it possible to instead raise a single bulk event for change? Otherwise, if the change handler does something expensive then it ends up being done 3 times.
1.0
Single change event instead of multiple events with addTags - The addTags API results in calling change and add event multiple times once for each tag. Is it possible to instead raise a single bulk event for change? Otherwise, if the change handler does something expensive then it ends up being done 3 times.
non_main
single change event instead of multiple events with addtags the addtags api results in calling change and add event multiple times once for each tag is it possible to instead raise a single bulk event for change otherwise if the change handler does something expensive then it ends up being done times
0
195,363
14,725,715,451
IssuesEvent
2021-01-06 05:23:21
github-vet/rangeloop-pointer-findings
https://api.github.com/repos/github-vet/rangeloop-pointer-findings
closed
cliqueinc/mysql-wear: ops_test.go; 3 LoC
fresh test tiny
Found a possible issue in [cliqueinc/mysql-wear](https://www.github.com/cliqueinc/mysql-wear) at [ops_test.go](https://github.com/cliqueinc/mysql-wear/blob/a82c4c5d1d7fd07d5db5f5ccdade30b75216ce38/ops_test.go#L1015-L1017) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > [Click here to see the code in its original context.](https://github.com/cliqueinc/mysql-wear/blob/a82c4c5d1d7fd07d5db5f5ccdade30b75216ce38/ops_test.go#L1015-L1017) <details> <summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary> ```go for _, r := range userRows { db.MustInsert(&r) } ``` </details> <details> <summary>Click here to show extra information the analyzer produced.</summary> ``` No path was found through the callgraph that could lead to a function which writes a pointer argument. No path was found through the callgraph that could lead to a function which passes a pointer to third-party code. root signature {MustInsert 1} was not found in the callgraph; reference was passed directly to third-party code ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: a82c4c5d1d7fd07d5db5f5ccdade30b75216ce38
1.0
cliqueinc/mysql-wear: ops_test.go; 3 LoC - Found a possible issue in [cliqueinc/mysql-wear](https://www.github.com/cliqueinc/mysql-wear) at [ops_test.go](https://github.com/cliqueinc/mysql-wear/blob/a82c4c5d1d7fd07d5db5f5ccdade30b75216ce38/ops_test.go#L1015-L1017) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > [Click here to see the code in its original context.](https://github.com/cliqueinc/mysql-wear/blob/a82c4c5d1d7fd07d5db5f5ccdade30b75216ce38/ops_test.go#L1015-L1017) <details> <summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary> ```go for _, r := range userRows { db.MustInsert(&r) } ``` </details> <details> <summary>Click here to show extra information the analyzer produced.</summary> ``` No path was found through the callgraph that could lead to a function which writes a pointer argument. No path was found through the callgraph that could lead to a function which passes a pointer to third-party code. root signature {MustInsert 1} was not found in the callgraph; reference was passed directly to third-party code ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: a82c4c5d1d7fd07d5db5f5ccdade30b75216ce38
non_main
cliqueinc mysql wear ops test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message click here to show the line s of go which triggered the analyzer go for r range userrows db mustinsert r click here to show extra information the analyzer produced no path was found through the callgraph that could lead to a function which writes a pointer argument no path was found through the callgraph that could lead to a function which passes a pointer to third party code root signature mustinsert was not found in the callgraph reference was passed directly to third party code leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
0
151,801
13,435,028,734
IssuesEvent
2020-09-07 12:19:08
spring-projects/spring-data-r2dbc
https://api.github.com/repos/spring-projects/spring-data-r2dbc
closed
Syntax error in reference documentation at Query with SpEL expressions
type: documentation
website: https://docs.spring.io/spring-data/r2dbc/docs/1.1.3.RELEASE/reference/html/#r2dbc.repositories ` @Query("SELECT * FROM person WHERE lastname = :#{[0]} }") List<Person> findByQueryWithExpression(String lastname); ` shoud be @Query("SELECT * FROM person WHERE lastname = :#{[0]}")
1.0
Syntax error in reference documentation at Query with SpEL expressions - website: https://docs.spring.io/spring-data/r2dbc/docs/1.1.3.RELEASE/reference/html/#r2dbc.repositories ` @Query("SELECT * FROM person WHERE lastname = :#{[0]} }") List<Person> findByQueryWithExpression(String lastname); ` shoud be @Query("SELECT * FROM person WHERE lastname = :#{[0]}")
non_main
syntax error in reference documentation at query with spel expressions website: query select from person where lastname list findbyquerywithexpression string lastname shoud be query select from person where lastname
0
172,165
21,040,461,882
IssuesEvent
2022-03-31 11:51:25
samq-ghdemo/SEARCH-NCJIS-nibrs
https://api.github.com/repos/samq-ghdemo/SEARCH-NCJIS-nibrs
opened
CVE-2022-27772 (Medium) detected in multiple libraries
security vulnerability
## CVE-2022-27772 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>spring-boot-2.0.5.RELEASE.jar</b>, <b>spring-boot-1.5.7.RELEASE.jar</b>, <b>spring-boot-2.1.5.RELEASE.jar</b></p></summary> <p> <details><summary><b>spring-boot-2.0.5.RELEASE.jar</b></p></summary> <p>Spring Boot</p> <p>Library home page: <a href="https://projects.spring.io/spring-boot/#/spring-boot-parent/spring-boot">https://projects.spring.io/spring-boot/#/spring-boot-parent/spring-boot</a></p> <p>Path to dependency file: /tools/nibrs-xmlfile/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.0.5.RELEASE/spring-boot-2.0.5.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.0.5.RELEASE/spring-boot-2.0.5.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.0.5.RELEASE/spring-boot-2.0.5.RELEASE.jar,/web/nibrs-web/target/nibrs-web/WEB-INF/lib/spring-boot-2.0.5.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.0.5.RELEASE/spring-boot-2.0.5.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.0.5.RELEASE/spring-boot-2.0.5.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.0.5.RELEASE/spring-boot-2.0.5.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.0.5.RELEASE/spring-boot-2.0.5.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.0.5.RELEASE/spring-boot-2.0.5.RELEASE.jar</p> <p> Dependency Hierarchy: - :x: **spring-boot-2.0.5.RELEASE.jar** (Vulnerable Library) </details> <details><summary><b>spring-boot-1.5.7.RELEASE.jar</b></p></summary> <p>Spring Boot</p> <p>Library home page: <a href="http://projects.spring.io/spring-boot/">http://projects.spring.io/spring-boot/</a></p> <p>Path 
to dependency file: /tools/nibrs-fbi-service/pom.xml</p> <p>Path to vulnerable library: /tools/nibrs-fbi-service/target/nibrs-fbi-service-1.0.0/WEB-INF/lib/spring-boot-1.5.7.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/1.5.7.RELEASE/spring-boot-1.5.7.RELEASE.jar</p> <p> Dependency Hierarchy: - :x: **spring-boot-1.5.7.RELEASE.jar** (Vulnerable Library) </details> <details><summary><b>spring-boot-2.1.5.RELEASE.jar</b></p></summary> <p>Spring Boot</p> <p>Library home page: <a href="https://projects.spring.io/spring-boot/#/spring-boot-parent/spring-boot">https://projects.spring.io/spring-boot/#/spring-boot-parent/spring-boot</a></p> <p>Path to dependency file: /tools/nibrs-summary-report-common/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.1.5.RELEASE/spring-boot-2.1.5.RELEASE.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-web-2.1.5.RELEASE.jar (Root Library) - spring-boot-starter-2.1.5.RELEASE.jar - :x: **spring-boot-2.1.5.RELEASE.jar** (Vulnerable Library) </details> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> ** UNSUPPORTED WHEN ASSIGNED ** spring-boot versions prior to version v2.2.11.RELEASE was vulnerable to temporary directory hijacking. This vulnerability impacted the org.springframework.boot.web.server.AbstractConfigurableWebServerFactory.createTempDir method. NOTE: This vulnerability only affects products and/or versions that are no longer supported by the maintainer. 
<p>Publish Date: 2022-03-30 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-27772>CVE-2022-27772</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/JLLeitschuh/security-research/security/advisories/GHSA-cm59-pr5q-cw85">https://github.com/JLLeitschuh/security-research/security/advisories/GHSA-cm59-pr5q-cw85</a></p> <p>Release Date: 2022-03-30</p> <p>Fix Resolution: org.springframework.boot:spring-boot:2.2.11.RELEASE</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END --> <!-- 
<REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.springframework.boot","packageName":"spring-boot","packageVersion":"2.0.5.RELEASE","packageFilePaths":["/tools/nibrs-xmlfile/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.springframework.boot:spring-boot:2.0.5.RELEASE","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.springframework.boot:spring-boot:2.2.11.RELEASE","isBinary":false},{"packageType":"Java","groupId":"org.springframework.boot","packageName":"spring-boot","packageVersion":"1.5.7.RELEASE","packageFilePaths":["/tools/nibrs-fbi-service/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.springframework.boot:spring-boot:1.5.7.RELEASE","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.springframework.boot:spring-boot:2.2.11.RELEASE","isBinary":false},{"packageType":"Java","groupId":"org.springframework.boot","packageName":"spring-boot","packageVersion":"2.1.5.RELEASE","packageFilePaths":["/tools/nibrs-summary-report-common/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:2.1.5.RELEASE;org.springframework.boot:spring-boot-starter:2.1.5.RELEASE;org.springframework.boot:spring-boot:2.1.5.RELEASE","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.springframework.boot:spring-boot:2.2.11.RELEASE","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2022-27772","vulnerabilityDetails":"** UNSUPPORTED WHEN ASSIGNED ** spring-boot versions prior to version v2.2.11.RELEASE was vulnerable to temporary directory hijacking. This vulnerability impacted the org.springframework.boot.web.server.AbstractConfigurableWebServerFactory.createTempDir method. 
NOTE: This vulnerability only affects products and/or versions that are no longer supported by the maintainer.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-27772","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"Required","AV":"Local","I":"None"},"extraData":{}}</REMEDIATE> -->
True
CVE-2022-27772 (Medium) detected in multiple libraries - ## CVE-2022-27772 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>spring-boot-2.0.5.RELEASE.jar</b>, <b>spring-boot-1.5.7.RELEASE.jar</b>, <b>spring-boot-2.1.5.RELEASE.jar</b></p></summary> <p> <details><summary><b>spring-boot-2.0.5.RELEASE.jar</b></p></summary> <p>Spring Boot</p> <p>Library home page: <a href="https://projects.spring.io/spring-boot/#/spring-boot-parent/spring-boot">https://projects.spring.io/spring-boot/#/spring-boot-parent/spring-boot</a></p> <p>Path to dependency file: /tools/nibrs-xmlfile/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.0.5.RELEASE/spring-boot-2.0.5.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.0.5.RELEASE/spring-boot-2.0.5.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.0.5.RELEASE/spring-boot-2.0.5.RELEASE.jar,/web/nibrs-web/target/nibrs-web/WEB-INF/lib/spring-boot-2.0.5.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.0.5.RELEASE/spring-boot-2.0.5.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.0.5.RELEASE/spring-boot-2.0.5.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.0.5.RELEASE/spring-boot-2.0.5.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.0.5.RELEASE/spring-boot-2.0.5.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.0.5.RELEASE/spring-boot-2.0.5.RELEASE.jar</p> <p> Dependency Hierarchy: - :x: **spring-boot-2.0.5.RELEASE.jar** (Vulnerable Library) </details> <details><summary><b>spring-boot-1.5.7.RELEASE.jar</b></p></summary> <p>Spring Boot</p> <p>Library home page: <a 
href="http://projects.spring.io/spring-boot/">http://projects.spring.io/spring-boot/</a></p> <p>Path to dependency file: /tools/nibrs-fbi-service/pom.xml</p> <p>Path to vulnerable library: /tools/nibrs-fbi-service/target/nibrs-fbi-service-1.0.0/WEB-INF/lib/spring-boot-1.5.7.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/1.5.7.RELEASE/spring-boot-1.5.7.RELEASE.jar</p> <p> Dependency Hierarchy: - :x: **spring-boot-1.5.7.RELEASE.jar** (Vulnerable Library) </details> <details><summary><b>spring-boot-2.1.5.RELEASE.jar</b></p></summary> <p>Spring Boot</p> <p>Library home page: <a href="https://projects.spring.io/spring-boot/#/spring-boot-parent/spring-boot">https://projects.spring.io/spring-boot/#/spring-boot-parent/spring-boot</a></p> <p>Path to dependency file: /tools/nibrs-summary-report-common/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.1.5.RELEASE/spring-boot-2.1.5.RELEASE.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-web-2.1.5.RELEASE.jar (Root Library) - spring-boot-starter-2.1.5.RELEASE.jar - :x: **spring-boot-2.1.5.RELEASE.jar** (Vulnerable Library) </details> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> ** UNSUPPORTED WHEN ASSIGNED ** spring-boot versions prior to version v2.2.11.RELEASE was vulnerable to temporary directory hijacking. This vulnerability impacted the org.springframework.boot.web.server.AbstractConfigurableWebServerFactory.createTempDir method. NOTE: This vulnerability only affects products and/or versions that are no longer supported by the maintainer. 
<p>Publish Date: 2022-03-30 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-27772>CVE-2022-27772</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/JLLeitschuh/security-research/security/advisories/GHSA-cm59-pr5q-cw85">https://github.com/JLLeitschuh/security-research/security/advisories/GHSA-cm59-pr5q-cw85</a></p> <p>Release Date: 2022-03-30</p> <p>Fix Resolution: org.springframework.boot:spring-boot:2.2.11.RELEASE</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END --> <!-- 
<REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.springframework.boot","packageName":"spring-boot","packageVersion":"2.0.5.RELEASE","packageFilePaths":["/tools/nibrs-xmlfile/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.springframework.boot:spring-boot:2.0.5.RELEASE","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.springframework.boot:spring-boot:2.2.11.RELEASE","isBinary":false},{"packageType":"Java","groupId":"org.springframework.boot","packageName":"spring-boot","packageVersion":"1.5.7.RELEASE","packageFilePaths":["/tools/nibrs-fbi-service/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.springframework.boot:spring-boot:1.5.7.RELEASE","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.springframework.boot:spring-boot:2.2.11.RELEASE","isBinary":false},{"packageType":"Java","groupId":"org.springframework.boot","packageName":"spring-boot","packageVersion":"2.1.5.RELEASE","packageFilePaths":["/tools/nibrs-summary-report-common/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:2.1.5.RELEASE;org.springframework.boot:spring-boot-starter:2.1.5.RELEASE;org.springframework.boot:spring-boot:2.1.5.RELEASE","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.springframework.boot:spring-boot:2.2.11.RELEASE","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2022-27772","vulnerabilityDetails":"** UNSUPPORTED WHEN ASSIGNED ** spring-boot versions prior to version v2.2.11.RELEASE was vulnerable to temporary directory hijacking. This vulnerability impacted the org.springframework.boot.web.server.AbstractConfigurableWebServerFactory.createTempDir method. 
NOTE: This vulnerability only affects products and/or versions that are no longer supported by the maintainer.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-27772","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"Required","AV":"Local","I":"None"},"extraData":{}}</REMEDIATE> -->
non_main
cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries spring boot release jar spring boot release jar spring boot release jar spring boot release jar spring boot library home page a href path to dependency file tools nibrs xmlfile pom xml path to vulnerable library home wss scanner repository org springframework boot spring boot release spring boot release jar home wss scanner repository org springframework boot spring boot release spring boot release jar home wss scanner repository org springframework boot spring boot release spring boot release jar web nibrs web target nibrs web web inf lib spring boot release jar home wss scanner repository org springframework boot spring boot release spring boot release jar home wss scanner repository org springframework boot spring boot release spring boot release jar home wss scanner repository org springframework boot spring boot release spring boot release jar home wss scanner repository org springframework boot spring boot release spring boot release jar home wss scanner repository org springframework boot spring boot release spring boot release jar dependency hierarchy x spring boot release jar vulnerable library spring boot release jar spring boot library home page a href path to dependency file tools nibrs fbi service pom xml path to vulnerable library tools nibrs fbi service target nibrs fbi service web inf lib spring boot release jar home wss scanner repository org springframework boot spring boot release spring boot release jar dependency hierarchy x spring boot release jar vulnerable library spring boot release jar spring boot library home page a href path to dependency file tools nibrs summary report common pom xml path to vulnerable library home wss scanner repository org springframework boot spring boot release spring boot release jar dependency hierarchy spring boot starter web release jar root library spring boot starter release jar x spring boot release jar vulnerable 
library found in base branch master vulnerability details unsupported when assigned spring boot versions prior to version release was vulnerable to temporary directory hijacking this vulnerability impacted the org springframework boot web server abstractconfigurablewebserverfactory createtempdir method note this vulnerability only affects products and or versions that are no longer supported by the maintainer publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework boot spring boot release check this box to open an automated fix pr isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree org springframework boot spring boot release isminimumfixversionavailable true minimumfixversion org springframework boot spring boot release isbinary false packagetype java groupid org springframework boot packagename spring boot packageversion release packagefilepaths istransitivedependency false dependencytree org springframework boot spring boot release isminimumfixversionavailable true minimumfixversion org springframework boot spring boot release isbinary false packagetype java groupid org springframework boot packagename spring boot packageversion release packagefilepaths istransitivedependency true dependencytree org springframework boot spring boot starter web release org springframework boot spring boot starter release org springframework boot spring boot release isminimumfixversionavailable true minimumfixversion org springframework boot spring boot release isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails unsupported 
when assigned spring boot versions prior to version release was vulnerable to temporary directory hijacking this vulnerability impacted the org springframework boot web server abstractconfigurablewebserverfactory createtempdir method note this vulnerability only affects products and or versions that are no longer supported by the maintainer vulnerabilityurl
0
1,596
17,373,848,467
IssuesEvent
2021-07-30 17:40:30
microsoft/pxt-arcade
https://api.github.com/repos/microsoft/pxt-arcade
closed
Code destroyed by clicking on javascript & switching back immediately.
needs repro reliability
Hello, My son was working in this program and he had an entire code written. He then clicked on "JavaScript" just to see what his code would look like in JavaScript and when he clicked back to Blocks he lost all his templates and he cannot add any new blocks. Is there any way to restore what he had made? -------- Working on repro.
True
Code destroyed by clicking on javascript & switching back immediately. - Hello, My son was working in this program and he had an entire code written. He then clicked on "JavaScript" just to see what his code would look like in JavaScript and when he clicked back to Blocks he lost all his templates and he cannot add any new blocks. Is there any way to restore what he had made? -------- Working on repro.
non_main
code destroyed by clicking on javascript switching back immediately hello my son was working in this program and he had an entire code written he then clicked on javascript just to see what his code would look like in javascript and when he clicked back to blocks he lost all his templates and he cannot add any new blocks is there any way to restore what he had made working on repro
0
64,266
8,721,361,429
IssuesEvent
2018-12-08 22:14:26
facebook/create-react-app
https://api.github.com/repos/facebook/create-react-app
closed
Unclear default code splitting in cra v2
tag: documentation
<!-- PLEASE READ THE FIRST SECTION :-) --> ### Is this a bug report? If very unclear documentation is a bug then yes. A newly created and built app with `Create react app v2` creates three js chunks by default. But there is not a single line in neither index.js nor in App.js that does code splitting according to this guide https://reactjs.org/docs/code-splitting.html I could not find any info, not in release notes, nor anywhere else why there are three bundles now without any dynamic import statements, what is the point and benefits of having three bundles even for 120K app, and what to do if I want only one js but want to keep other features of cra v2. There is a ticket to disable a code splitting (https://github.com/facebook/create-react-app/issues/5306) but could you please elaborate a bit why it there in the default app in the first place if neither line of code asks for that.
1.0
Unclear default code splitting in cra v2 - <!-- PLEASE READ THE FIRST SECTION :-) --> ### Is this a bug report? If very unclear documentation is a bug then yes. A newly created and built app with `Create react app v2` creates three js chunks by default. But there is not a single line in neither index.js nor in App.js that does code splitting according to this guide https://reactjs.org/docs/code-splitting.html I could not find any info, not in release notes, nor anywhere else why there are three bundles now without any dynamic import statements, what is the point and benefits of having three bundles even for 120K app, and what to do if I want only one js but want to keep other features of cra v2. There is a ticket to disable a code splitting (https://github.com/facebook/create-react-app/issues/5306) but could you please elaborate a bit why it there in the default app in the first place if neither line of code asks for that.
non_main
unclear default code splitting in cra please read the first section is this a bug report if very unclear documentation is a bug then yes a newly created and built app with create react app creates three js chunks by default but there is not a single line in neither index js nor in app js that does code splitting according to this guide i could not find any info not in release notes nor anywhere else why there are three bundles now without any dynamic import statements what is the point and benefits of having three bundles even for app and what to do if i want only one js but want to keep other features of cra there is a ticket to disable a code splitting but could you please elaborate a bit why it there in the default app in the first place if neither line of code asks for that
0
429,637
12,426,671,204
IssuesEvent
2020-05-24 22:28:36
stevenwaterman/musetree
https://api.github.com/repos/stevenwaterman/musetree
opened
Support Lazy loading
Low Priority enhancement hard
Currently, loading a .mst file is slow because it pre-renders the audio for all sections. We could offer the option to only render the audio when you first try and listen to it. This would mean .mst files loaded almost instantly but you would see loading screens while using the app. Another option - do not pre-render audio, render it in real time as it is playing. This would only work on powerful computers but would mean no loading screens ever.
1.0
Support Lazy loading - Currently, loading a .mst file is slow because it pre-renders the audio for all sections. We could offer the option to only render the audio when you first try and listen to it. This would mean .mst files loaded almost instantly but you would see loading screens while using the app. Another option - do not pre-render audio, render it in real time as it is playing. This would only work on powerful computers but would mean no loading screens ever.
non_main
support lazy loading currently loading a mst file is slow because it pre renders the audio for all sections we could offer the option to only render the audio when you first try and listen to it this would mean mst files loaded almost instantly but you would see loading screens while using the app another option do not pre render audio render it in real time as it is playing this would only work on powerful computers but would mean no loading screens ever
0
5,101
26,008,364,298
IssuesEvent
2022-12-20 21:53:01
aws/aws-sam-cli
https://api.github.com/repos/aws/aws-sam-cli
closed
Support local development on machines running Podman instead of Docker
type/feature stage/pm-review maintainer/need-response area/local
<!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed). --> ### Describe your idea/feature/enhancement I'm on Fedora 31 which, by default, ships with Podman instead of Docker. By installing `podman-docker` on top, most Docker workflows work pretty great out of the box. I've really bought into this idea mainly because Podman is lighter and doesn't require root privileges. This fails where tools depend on the Docker's "proprietary" protocol to manage containers, as is the case with SAM CLI. ### Proposal My knowledge about the container ecosystem, the OCI and where tools like Docker (vs. Podman) fit into that exactly is pretty limited. The question is, can tools like AWS SAM be made to work for end-users like me in an easy fashion where the answer is _not_ to install Docker proper? This could be a change to the SAM CLI so as not to have a hard dependency on there being a Docker socket. This could also be a change to Podman where they emulate the Docker API / socket. This could be a change to Python Docker SDK to work with both the Docker API and Podman's varlink-based API. I'm just looking for a solution as an end user. Things to consider: 1. Will this require any updates to the [SAM Spec](https://github.com/awslabs/serverless-application-model) -> No
True
Support local development on machines running Podman instead of Docker - <!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed). --> ### Describe your idea/feature/enhancement I'm on Fedora 31 which, by default, ships with Podman instead of Docker. By installing `podman-docker` on top, most Docker workflows work pretty great out of the box. I've really bought into this idea mainly because Podman is lighter and doesn't require root privileges. This fails where tools depend on the Docker's "proprietary" protocol to manage containers, as is the case with SAM CLI. ### Proposal My knowledge about the container ecosystem, the OCI and where tools like Docker (vs. Podman) fit into that exactly is pretty limited. The question is, can tools like AWS SAM be made to work for end-users like me in an easy fashion where the answer is _not_ to install Docker proper? This could be a change to the SAM CLI so as not to have a hard dependency on there being a Docker socket. This could also be a change to Podman where they emulate the Docker API / socket. This could be a change to Python Docker SDK to work with both the Docker API and Podman's varlink-based API. I'm just looking for a solution as an end user. Things to consider: 1. Will this require any updates to the [SAM Spec](https://github.com/awslabs/serverless-application-model) -> No
main
support local development on machines running podman instead of docker describe your idea feature enhancement i m on fedora which by default ships with podman instead of docker by installing podman docker on top most docker workflows work pretty great out of the box i ve really bought into this idea mainly because podman is lighter and doesn t require root privileges this fails where tools depend on the docker s proprietary protocol to manage containers as is the case with sam cli proposal my knowledge about the container ecosystem the oci and where tools like docker vs podman fit into that exactly is pretty limited the question is can tools like aws sam be made to work for end users like me in an easy fashion where the answer is not to install docker proper this could be a change to the sam cli so as not to have a hard dependency on there being a docker socket this could also be a change to podman where they emulate the docker api socket this could be a change to python docker sdk to work with both the docker api and podman s varlink based api i m just looking for a solution as an end user things to consider will this require any updates to the no
1
3,021
11,185,125,974
IssuesEvent
2019-12-31 22:34:03
laminas/laminas-validator
https://api.github.com/repos/laminas/laminas-validator
opened
Interface to add valid TLDs in Hostname validator
Awaiting Maintainer Response Question
I recently used this validator but found it did not support .car TLD. The company I work for recently purchased such a domain and we needed to allow valid email addresses with that TLD. I needed to extend the existing validator and also extend the EmailAddress validator to be able to reach the TLD list and then could attach the new validator in Apigility and add a list of TLDs, which were pushed onto the array in Hostname validator. It would have been quite useful to just have a public function in Hostname validator to add to the validTlds array. That is what I am proposing to do myself. --- Originally posted by @peterkeatingie at https://github.com/zendframework/zend-validator/issues/117
True
Interface to add valid TLDs in Hostname validator - I recently used this validator but found it did not support .car TLD. The company I work for recently purchased such a domain and we needed to allow valid email addresses with that TLD. I needed to extend the existing validator and also extend the EmailAddress validator to be able to reach the TLD list and then could attach the new validator in Apigility and add a list of TLDs, which were pushed onto the array in Hostname validator. It would have been quite useful to just have a public function in Hostname validator to add to the validTlds array. That is what I am proposing to do myself. --- Originally posted by @peterkeatingie at https://github.com/zendframework/zend-validator/issues/117
main
interface to add valid tlds in hostname validator i recently used this validator but found it did not support car tld the company i work for recently purchased such a domain and we needed to allow valid email addresses with that tld i needed to extend the existing validator and also extend the emailaddress validator to be able to reach the tld list and then could attach the new validator in apigility and add a list of tlds which were pushed onto the array in hostname validator it would have been quite useful to just have a public function in hostname validator to add to the validtlds array that is what i am proposing to do myself originally posted by peterkeatingie at
1
302,025
26,118,181,136
IssuesEvent
2022-12-28 09:15:38
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
roachtest: failover/non-system/blackhole-recv failed
C-test-failure O-robot O-roachtest branch-master release-blocker T-kv
roachtest.failover/non-system/blackhole-recv [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8107043?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8107043?buildTab=artifacts#/failover/non-system/blackhole-recv) on master @ [9c5375f6a7375724cdbcbaa0029ed97a230d7abe](https://github.com/cockroachdb/cockroach/commits/9c5375f6a7375724cdbcbaa0029ed97a230d7abe): ``` test artifacts and logs in: /artifacts/failover/non-system/blackhole-recv/run_1 (test_impl.go:314).Errorf: test timed out (20m0s) ``` <p>Parameters: <code>ROACHTEST_cloud=gce</code> , <code>ROACHTEST_cpu=4</code> , <code>ROACHTEST_encrypted=false</code> , <code>ROACHTEST_fs=ext4</code> , <code>ROACHTEST_localSSD=true</code> , <code>ROACHTEST_ssd=0</code> </p> <details><summary>Help</summary> <p> See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md) See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7) </p> </details> /cc @cockroachdb/kv-triage <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*failover/non-system/blackhole-recv.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub> Jira issue: CRDB-22812
2.0
roachtest: failover/non-system/blackhole-recv failed - roachtest.failover/non-system/blackhole-recv [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8107043?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8107043?buildTab=artifacts#/failover/non-system/blackhole-recv) on master @ [9c5375f6a7375724cdbcbaa0029ed97a230d7abe](https://github.com/cockroachdb/cockroach/commits/9c5375f6a7375724cdbcbaa0029ed97a230d7abe): ``` test artifacts and logs in: /artifacts/failover/non-system/blackhole-recv/run_1 (test_impl.go:314).Errorf: test timed out (20m0s) ``` <p>Parameters: <code>ROACHTEST_cloud=gce</code> , <code>ROACHTEST_cpu=4</code> , <code>ROACHTEST_encrypted=false</code> , <code>ROACHTEST_fs=ext4</code> , <code>ROACHTEST_localSSD=true</code> , <code>ROACHTEST_ssd=0</code> </p> <details><summary>Help</summary> <p> See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md) See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7) </p> </details> /cc @cockroachdb/kv-triage <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*failover/non-system/blackhole-recv.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub> Jira issue: CRDB-22812
non_main
roachtest failover non system blackhole recv failed roachtest failover non system blackhole recv with on master test artifacts and logs in artifacts failover non system blackhole recv run test impl go errorf test timed out parameters roachtest cloud gce roachtest cpu roachtest encrypted false roachtest fs roachtest localssd true roachtest ssd help see see cc cockroachdb kv triage jira issue crdb
0