Column         Dtype          Stats
Unnamed: 0     int64          min 1, max 832k
id             float64        min 2.49B, max 32.1B
type           stringclasses  1 distinct value
created_at     stringlengths  19 to 19 chars
repo           stringlengths  7 to 112 chars
repo_url       stringlengths  36 to 141 chars
action         stringclasses  3 distinct values
title          stringlengths  3 to 438 chars
labels         stringlengths  4 to 308 chars
body           stringlengths  7 to 254k chars
index          stringclasses  7 distinct values
text_combine   stringlengths  96 to 254k chars
label          stringclasses  2 distinct values
text           stringlengths  96 to 246k chars
binary_label   int64          min 0, max 1
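Read as a record layout, each row of the dump below lists these fields in order. A minimal sketch of iterating such records in Python (the two toy rows are hypothetical illustrations; only the field names follow the schema above):

```python
# Each record carries a coarse class in binary_label (0 or 1) and a
# finer split name in index ("main" / "non_main"). The toy rows below
# are hypothetical, not rows taken from the dump.
records = [
    {"type": "IssuesEvent", "index": "main", "binary_label": 1},
    {"type": "IssuesEvent", "index": "non_main", "binary_label": 0},
]

# Split records by the binary label, as a downstream classifier would.
positives = [r for r in records if r["binary_label"] == 1]
negatives = [r for r in records if r["binary_label"] == 0]
print(len(positives), len(negatives))  # prints: 1 1
```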
1,565
6,572,257,792
IssuesEvent
2017-09-11 00:42:17
ansible/ansible-modules-extras
https://api.github.com/repos/ansible/ansible-modules-extras
closed
ec2_vpc_route_table working inconsistently with NAT Gateways
affects_2.1 aws bug_report cloud waiting_on_maintainer
<!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> ec2_vpc_route_table ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.1.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> This, but hardly related to the module. ``` [defaults] roles_path = ./roles retry_files_enabled = False ssh_port = 22 host_key_checking = False ``` ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> Running the scripts from OSX El Capitan 10.11.5 ##### SUMMARY <!--- Explain the problem briefly --> Very briefly: sometimes a route to NAT is created, sometimes not. Less brief: I have created NAT gateways for my private subnets and now I'm trying to create routes and route table to direct internet traffic towards the NAT instances. When I run the playbook and go check the results I find that the route to NAT Gateway sometimes appears in my route tables and sometimes not. Also if I try to set two route tables (in separate tasks or `with_items`) sometimes the other route table gets the route and the other doesn't and there is no logic how it happens. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> <!--- Paste example playbooks or commands between quotes below --> Run this repeatedly and you will observe that it reports as changed everytime (I don't think it should) and that your route table sometimes has the route and sometimes not (check this from your AWS console). 
``` - ec2_vpc_route_table: region: eu-west-1 tags: Name: private-app-network-1 state: present propagating_vgw_ids: [] vpc_id: "{{ vpc.vpc_id }}" subnets: - "{{ private_subnets }}" routes: - dest: 0.0.0.0/0 gateway_id: nat-123456 ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> I expect to see the correct routes displayed in AWS Console after each playbook run. Also if I run the play multiple times, I don't expect the task to report as changed if I haven't changed anything. ##### ACTUAL RESULTS <!--- What actually happened? If possible run with high verbosity (-vvvv) --> Routes appear and disappear between runs even though nothing has changed in the playbook. <!--- Paste verbatim command output between quotes below --> ``` ``` #### WORKAROUND In the end I just used this module to setup an empty route table and then add the routes to NAT with AWS CLI ([create-route](http://docs.aws.amazon.com/cli/latest/reference/ec2/create-route.html)). It works but it's not a very nice solution.
True
ec2_vpc_route_table working inconsistently with NAT Gateways - <!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> ec2_vpc_route_table ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.1.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> This, but hardly related to the module. ``` [defaults] roles_path = ./roles retry_files_enabled = False ssh_port = 22 host_key_checking = False ``` ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> Running the scripts from OSX El Capitan 10.11.5 ##### SUMMARY <!--- Explain the problem briefly --> Very briefly: sometimes a route to NAT is created, sometimes not. Less brief: I have created NAT gateways for my private subnets and now I'm trying to create routes and route table to direct internet traffic towards the NAT instances. When I run the playbook and go check the results I find that the route to NAT Gateway sometimes appears in my route tables and sometimes not. Also if I try to set two route tables (in separate tasks or `with_items`) sometimes the other route table gets the route and the other doesn't and there is no logic how it happens. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. 
--> <!--- Paste example playbooks or commands between quotes below --> Run this repeatedly and you will observe that it reports as changed everytime (I don't think it should) and that your route table sometimes has the route and sometimes not (check this from your AWS console). ``` - ec2_vpc_route_table: region: eu-west-1 tags: Name: private-app-network-1 state: present propagating_vgw_ids: [] vpc_id: "{{ vpc.vpc_id }}" subnets: - "{{ private_subnets }}" routes: - dest: 0.0.0.0/0 gateway_id: nat-123456 ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> I expect to see the correct routes displayed in AWS Console after each playbook run. Also if I run the play multiple times, I don't expect the task to report as changed if I haven't changed anything. ##### ACTUAL RESULTS <!--- What actually happened? If possible run with high verbosity (-vvvv) --> Routes appear and disappear between runs even though nothing has changed in the playbook. <!--- Paste verbatim command output between quotes below --> ``` ``` #### WORKAROUND In the end I just used this module to setup an empty route table and then add the routes to NAT with AWS CLI ([create-route](http://docs.aws.amazon.com/cli/latest/reference/ec2/create-route.html)). It works but it's not a very nice solution.
main
vpc route table working inconsistently with nat gateways issue type bug report component name vpc route table ansible version ansible config file configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables this but hardly related to the module roles path roles retry files enabled false ssh port host key checking false os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific running the scripts from osx el capitan summary very briefly sometimes a route to nat is created sometimes not less brief i have created nat gateways for my private subnets and now i m trying to create routes and route table to direct internet traffic towards the nat instances when i run the playbook and go check the results i find that the route to nat gateway sometimes appears in my route tables and sometimes not also if i try to set two route tables in separate tasks or with items sometimes the other route table gets the route and the other doesn t and there is no logic how it happens steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used run this repeatedly and you will observe that it reports as changed everytime i don t think it should and that your route table sometimes has the route and sometimes not check this from your aws console vpc route table region eu west tags name private app network state present propagating vgw ids vpc id vpc vpc id subnets private subnets routes dest gateway id nat expected results i expect to see the correct routes displayed in aws console after each playbook run also if i run the play multiple times i don t expect the task to report as changed if i haven t changed anything actual results routes appear and disappear between runs even though nothing has changed in the playbook workaround in the 
end i just used this module to setup an empty route table and then add the routes to nat with aws cli it works but it s not a very nice solution
1
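Comparing a record's body field with its lower-cased text field suggests the normalization applied when the dataset was built: lowercase, strip punctuation and digits, collapse whitespace. A hypothetical re-implementation (the exact pipeline is an assumption inferred from the records, not documented in the dump):

```python
import re

def normalize(s: str) -> str:
    # Lowercase, drop everything except ASCII letters and whitespace,
    # then collapse runs of whitespace into single spaces.
    s = s.lower()
    s = re.sub(r"[^a-z\s]", " ", s)
    return re.sub(r"\s+", " ", s).strip()

print(normalize("Add unpackerr app - Please add the app!"))
# prints: add unpackerr app please add the app
```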
126,309
4,988,494,550
IssuesEvent
2016-12-08 08:34:51
PowerlineApp/powerline-mobile
https://api.github.com/repos/PowerlineApp/powerline-mobile
closed
Slow Manual E-mail Registration
bug P3 - Low Priority
Currently, the user fills out the registration form and hits submit. User is currently waiting 15s-30s before tour guide is shown. We either need to speed this process up or show the user something that displays progress. For example an animated list "We are linking you to: Your elected leaders at the local level. Your elected leaders at the state level. Your elected leaders at the national level. Your local group. Your state group. Your federal group." Checkmarks or something...
1.0
Slow Manual E-mail Registration - Currently, the user fills out the registration form and hits submit. User is currently waiting 15s-30s before tour guide is shown. We either need to speed this process up or show the user something that displays progress. For example an animated list "We are linking you to: Your elected leaders at the local level. Your elected leaders at the state level. Your elected leaders at the national level. Your local group. Your state group. Your federal group." Checkmarks or something...
non_main
slow manual e mail registration currently the user fills out the registration form and hits submit user is currently waiting before tour guide is shown we either need to speed this process up or show the user something that displays progress for example an animated list we are linking you to your elected leaders at the local level your elected leaders at the state level your elected leaders at the national level your local group your state group your federal group checkmarks or something
0
3,841
16,745,747,461
IssuesEvent
2021-06-11 15:18:01
truecharts/apps
https://api.github.com/repos/truecharts/apps
closed
Add unpackerr app
New App Request No-Maintainer
Please add the app unpackerr https://github.com/davidnewhall/unpackerr. It allow automated unpacking of completed torrents from Sonarr, Radarr and Readerr.
True
Add unpackerr app - Please add the app unpackerr https://github.com/davidnewhall/unpackerr. It allow automated unpacking of completed torrents from Sonarr, Radarr and Readerr.
main
add unpackerr app please add the app unpackerr it allow automated unpacking of completed torrents from sonarr radarr and readerr
1
1,317
5,654,359,369
IssuesEvent
2017-04-09 07:52:05
MDAnalysis/mdanalysis
https://api.github.com/repos/MDAnalysis/mdanalysis
opened
Enable Pylint warnings
Difficulty-easy maintainability
This is a list of good sounding pylint warnings we might want to enable - [ ] arguments-differ, - [ ] assignment-from-none, - [ ] bad-builtin, - [ ] bad-indentation, - [ ] bad-super-call, - [ ] bare-except, - [ ] basestring-builtin, - [ ] broad-except, - [ ] deprecated-lambda, - [ ] expression-not-assigned, - [ ] filter-builtin-not-iterating, - [ ] function-redefined, - [ ] indexing-exception, - [ ] map-builtin-not-iterating, - [ ] next-method-called, - [x] no-absolute-import, #1294 - [x] old-division, #1293 - [ ] print-statement, - [ ] property-on-old-class, - [ ] range-builtin-not-iterating, - [ ] redefined-builtin, - [ ] redefined-outer-name, - [ ] reduce-builtin, - [ ] reimported, - [x] relative-import, #1294 - [ ] round-builtin, - [ ] super-init-not-called, - [ ] unnecessary-lambda, - [ ] unnecessary-semicolon, - [ ] unpacking-non-sequence, - [ ] unused-argument, - [ ] unused-import, - [ ] unused-variable, - [ ] unused-wildcard-import, - [ ] used-before-assignment, - [ ] wildcard-import, - [ ] xrange-builtin, - [ ] zip-builtin-not-iterating,
True
Enable Pylint warnings - This is a list of good sounding pylint warnings we might want to enable - [ ] arguments-differ, - [ ] assignment-from-none, - [ ] bad-builtin, - [ ] bad-indentation, - [ ] bad-super-call, - [ ] bare-except, - [ ] basestring-builtin, - [ ] broad-except, - [ ] deprecated-lambda, - [ ] expression-not-assigned, - [ ] filter-builtin-not-iterating, - [ ] function-redefined, - [ ] indexing-exception, - [ ] map-builtin-not-iterating, - [ ] next-method-called, - [x] no-absolute-import, #1294 - [x] old-division, #1293 - [ ] print-statement, - [ ] property-on-old-class, - [ ] range-builtin-not-iterating, - [ ] redefined-builtin, - [ ] redefined-outer-name, - [ ] reduce-builtin, - [ ] reimported, - [x] relative-import, #1294 - [ ] round-builtin, - [ ] super-init-not-called, - [ ] unnecessary-lambda, - [ ] unnecessary-semicolon, - [ ] unpacking-non-sequence, - [ ] unused-argument, - [ ] unused-import, - [ ] unused-variable, - [ ] unused-wildcard-import, - [ ] used-before-assignment, - [ ] wildcard-import, - [ ] xrange-builtin, - [ ] zip-builtin-not-iterating,
main
enable pylint warnings this is a list of good sounding pylint warnings we might want to enable arguments differ assignment from none bad builtin bad indentation bad super call bare except basestring builtin broad except deprecated lambda expression not assigned filter builtin not iterating function redefined indexing exception map builtin not iterating next method called no absolute import old division print statement property on old class range builtin not iterating redefined builtin redefined outer name reduce builtin reimported relative import round builtin super init not called unnecessary lambda unnecessary semicolon unpacking non sequence unused argument unused import unused variable unused wildcard import used before assignment wildcard import xrange builtin zip builtin not iterating
1
4,551
23,709,613,658
IssuesEvent
2022-08-30 06:37:21
kjaymiller/Python-Community-News
https://api.github.com/repos/kjaymiller/Python-Community-News
opened
[Topic]: Starlite looking for Maintainers and Contributors
Content maintainers
### URL https://www.reddit.com/r/Python/comments/wz07o3/starlite_is_looking_for_contributors_and/ ### When was this post released 26 Aug 2022 ### Summary The maintainer of Starlite made a plea to the community looking for maintainers and organizers. This seemed to receive positive feedback as many folks offered to help. The maintainer claimed that: > it's a core pillar of Starlite to have multiple maintainers and be as open, inviting and accessible for contributions as we can be. I wonder if this is a move that others can make in the future ### Code of Conduct - [X] I agree to follow this project's Code of Conduct
True
[Topic]: Starlite looking for Maintainers and Contributors - ### URL https://www.reddit.com/r/Python/comments/wz07o3/starlite_is_looking_for_contributors_and/ ### When was this post released 26 Aug 2022 ### Summary The maintainer of Starlite made a plea to the community looking for maintainers and organizers. This seemed to receive positive feedback as many folks offered to help. The maintainer claimed that: > it's a core pillar of Starlite to have multiple maintainers and be as open, inviting and accessible for contributions as we can be. I wonder if this is a move that others can make in the future ### Code of Conduct - [X] I agree to follow this project's Code of Conduct
main
starlite looking for maintainers and contributors url when was this post released aug summary the maintainer of starlite made a plea to the community looking for maintainers and organizers this seemed to receive positive feedback as many folks offered to help the maintainer claimed that it s a core pillar of starlite to have multiple maintainers and be as open inviting and accessible for contributions as we can be i wonder if this is a move that others can make in the future code of conduct i agree to follow this project s code of conduct
1
100,697
8,752,749,155
IssuesEvent
2018-12-14 04:59:31
humera987/FXLabs-Test-Automation
https://api.github.com/repos/humera987/FXLabs-Test-Automation
reopened
Testing 14 : ApiV1ProjectsIdSearchAutoSuggestionsSearchStatusGetQueryParamPagesizeDdos
Testing 14
Project : Testing 14 Job : UAT Env : UAT Region : US_WEST Result : fail Status Code : 404 Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=ZGU1NDg4N2QtMTRhYi00NzJhLWI0NTItOGI5NjFlNWYwYzdj; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Fri, 14 Dec 2018 04:55:59 GMT]} Endpoint : http://13.56.210.25/api/v1/api/v1/projects/dgEvhGHV/search-auto-suggestions/search/dgEvhGHV?pageSize=1001 Request : Response : { "timestamp" : "2018-12-14T04:56:00.237+0000", "status" : 404, "error" : "Not Found", "message" : "No message available", "path" : "/api/v1/api/v1/projects/dgEvhGHV/search-auto-suggestions/search/dgEvhGHV" } Logs : Assertion [@StatusCode != 401] resolved-to [404 != 401] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed] --- FX Bot ---
1.0
Testing 14 : ApiV1ProjectsIdSearchAutoSuggestionsSearchStatusGetQueryParamPagesizeDdos - Project : Testing 14 Job : UAT Env : UAT Region : US_WEST Result : fail Status Code : 404 Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=ZGU1NDg4N2QtMTRhYi00NzJhLWI0NTItOGI5NjFlNWYwYzdj; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Fri, 14 Dec 2018 04:55:59 GMT]} Endpoint : http://13.56.210.25/api/v1/api/v1/projects/dgEvhGHV/search-auto-suggestions/search/dgEvhGHV?pageSize=1001 Request : Response : { "timestamp" : "2018-12-14T04:56:00.237+0000", "status" : 404, "error" : "Not Found", "message" : "No message available", "path" : "/api/v1/api/v1/projects/dgEvhGHV/search-auto-suggestions/search/dgEvhGHV" } Logs : Assertion [@StatusCode != 401] resolved-to [404 != 401] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed] --- FX Bot ---
non_main
testing project testing job uat env uat region us west result fail status code headers x content type options x xss protection cache control pragma expires x frame options set cookie content type transfer encoding date endpoint request response timestamp status error not found message no message available path api api projects dgevhghv search auto suggestions search dgevhghv logs assertion resolved to result assertion resolved to result fx bot
0
219,563
17,099,974,170
IssuesEvent
2021-07-09 09:49:11
WordPress/gutenberg
https://api.github.com/repos/WordPress/gutenberg
closed
Gutenberg 10.62 background bug
Needs Testing [Status] Needs More Info
![20210525175551](https://user-images.githubusercontent.com/77946791/119478687-9483dd00-bd82-11eb-972b-ed6fdbd72ac2.jpg) ![20210525175617](https://user-images.githubusercontent.com/77946791/119478690-95b50a00-bd82-11eb-8874-ea93154b8857.png) After upgrading to the latest version, Mobile and tablet modes will have a gray overlay background.
1.0
Gutenberg 10.62 background bug - ![20210525175551](https://user-images.githubusercontent.com/77946791/119478687-9483dd00-bd82-11eb-972b-ed6fdbd72ac2.jpg) ![20210525175617](https://user-images.githubusercontent.com/77946791/119478690-95b50a00-bd82-11eb-8874-ea93154b8857.png) After upgrading to the latest version, Mobile and tablet modes will have a gray overlay background.
non_main
gutenberg background bug after upgrading to the latest version mobile and tablet modes will have a gray overlay background
0
43,680
9,479,261,365
IssuesEvent
2019-04-20 06:36:32
mozilla-mobile/android-components
https://api.github.com/repos/mozilla-mobile/android-components
closed
LocaleSettingUpdater uses API 24 method
<engine-gecko> ⌨️ code
https://github.com/mozilla-mobile/android-components/blob/6f95e8063bfad1636cdc2feeb0f7f4a00f3b9dac/components/browser/engine-gecko-nightly/src/main/java/mozilla/components/browser/engine/gecko/integration/LocaleSettingUpdater.kt#L45 In `LocaleSettingUpdater` we are using [LocaleList.getAdjustedDefault()](https://developer.android.com/reference/android/os/LocaleList.html#getAdjustedDefault()). This seems to be available from API 24 - and our minSdkVersion is 21.
1.0
LocaleSettingUpdater uses API 24 method - https://github.com/mozilla-mobile/android-components/blob/6f95e8063bfad1636cdc2feeb0f7f4a00f3b9dac/components/browser/engine-gecko-nightly/src/main/java/mozilla/components/browser/engine/gecko/integration/LocaleSettingUpdater.kt#L45 In `LocaleSettingUpdater` we are using [LocaleList.getAdjustedDefault()](https://developer.android.com/reference/android/os/LocaleList.html#getAdjustedDefault()). This seems to be available from API 24 - and our minSdkVersion is 21.
non_main
localesettingupdater uses api method in localesettingupdater we are using this seems to be available from api and our minsdkversion is
0
900
4,560,890,001
IssuesEvent
2016-09-14 09:39:41
ansible/ansible-modules-extras
https://api.github.com/repos/ansible/ansible-modules-extras
closed
consul_session: consul.session.create() requires a lock_delay without units
affects_2.0 bug_report waiting_on_maintainer
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME consul_session ##### ANSIBLE VERSION ``` ansible 2.0.2.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT ``` pip freeze | grep consul python-consul==0.6.0 ``` ##### SUMMARY When creating a consul session, the module requires passing a unit with the `lock_delay` parameter. This is not a valid option in the [underlying python-consul module](http://python-consul.readthedocs.io/en/latest/#consul.base.Consul.Session), which states `lock_delay is an integer of seconds`. Fix here: https://github.com/quantopian/ansible-modules-extras/commit/763c8f75c19b926618d1a066ec792c058872fd43 ##### STEPS TO REPRODUCE ```yaml - name: Register new session consul_session: name: "{{ consul_session_name }}" ``` `ansible-playbook -i 'localhost,' -c local consul.yml` ##### EXPECTED RESULTS - successful playbook execution - successful creation of a named session in consul ##### ACTUAL RESULTS ``` fatal: [ec2.compute-1.amazonaws.com]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"checks": null, "datacenter": null, "delay": "15s", "host": "localhost", "id": null, "name": "foo", "node": null, "port": 8500, "state": "present"}, "module_name": "consul_session"}, "msg": "Could not create/update session No JSON object could be decoded"} ``` A local debug session provided the error: ``` [10] > /usr/lib/python2.7/json/decoder.py(384)raw_decode() -> raise ValueError("No JSON object could be decoded") (Pdb++) locals() {'s': u'Request decode failed: time: unknown unit ss in duration 15ss', 'self': <json.decoder.JSONDecoder object at 0x7f9a3bb48ed0>, 'idx': 0} ```
True
consul_session: consul.session.create() requires a lock_delay without units - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME consul_session ##### ANSIBLE VERSION ``` ansible 2.0.2.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT ``` pip freeze | grep consul python-consul==0.6.0 ``` ##### SUMMARY When creating a consul session, the module requires passing a unit with the `lock_delay` parameter. This is not a valid option in the [underlying python-consul module](http://python-consul.readthedocs.io/en/latest/#consul.base.Consul.Session), which states `lock_delay is an integer of seconds`. Fix here: https://github.com/quantopian/ansible-modules-extras/commit/763c8f75c19b926618d1a066ec792c058872fd43 ##### STEPS TO REPRODUCE ```yaml - name: Register new session consul_session: name: "{{ consul_session_name }}" ``` `ansible-playbook -i 'localhost,' -c local consul.yml` ##### EXPECTED RESULTS - successful playbook execution - successful creation of a named session in consul ##### ACTUAL RESULTS ``` fatal: [ec2.compute-1.amazonaws.com]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"checks": null, "datacenter": null, "delay": "15s", "host": "localhost", "id": null, "name": "foo", "node": null, "port": 8500, "state": "present"}, "module_name": "consul_session"}, "msg": "Could not create/update session No JSON object could be decoded"} ``` A local debug session provided the error: ``` [10] > /usr/lib/python2.7/json/decoder.py(384)raw_decode() -> raise ValueError("No JSON object could be decoded") (Pdb++) locals() {'s': u'Request decode failed: time: unknown unit ss in duration 15ss', 'self': <json.decoder.JSONDecoder object at 0x7f9a3bb48ed0>, 'idx': 0} ```
main
consul session consul session create requires a lock delay without units issue type bug report component name consul session ansible version ansible config file configured module search path default w o overrides configuration n a os environment pip freeze grep consul python consul summary when creating a consul session the module requires passing a unit with the lock delay parameter this is not a valid option in the which states lock delay is an integer of seconds fix here steps to reproduce yaml name register new session consul session name consul session name ansible playbook i localhost c local consul yml expected results successful playbook execution successful creation of a named session in consul actual results fatal failed changed false failed true invocation module args checks null datacenter null delay host localhost id null name foo node null port state present module name consul session msg could not create update session no json object could be decoded a local debug session provided the error usr lib json decoder py raw decode raise valueerror no json object could be decoded pdb locals s u request decode failed time unknown unit ss in duration self idx
1
4,569
23,748,864,512
IssuesEvent
2022-08-31 18:34:16
aws/aws-sam-cli
https://api.github.com/repos/aws/aws-sam-cli
closed
!GetAtt AWS::SQS::Queue.Arn in Lambda Metadata.DockerBuildArgs produce invalid string
type/bug stage/bug-repro maintainer/need-followup
### Description: I am trying to pass queue url to container image on lambda. After running of `sam build` wrong argument is passed to docker. I have the next resources definesd ```yaml Parameters: RunningEnvironment: Type: String AllowedValues: - development - production - staging Description: Enter which environment you like to deploy. Resources: TestLambda: Type: AWS::Serverless::Function Properties: FunctionName: !Sub "test-${RunningEnvironment}" Role: tets-role PackageType: Image MemorySize: 512 Timeout: 30 Metadata: DockerTag: !Ref RunningEnvironment DockerContext: test/. DockerBuildArgs: ENVIRONMENT: !Ref RunningEnvironment RESULTS_QUEUE_URL: !GetAtt ResultsQueue.Arn ResultsQueue: Type: AWS::SQS::Queue Properties: FifoQueue: true QueueName: !Sub "results-${RunningEnvironment}.fifo" VisibilityTimeout: 360 ``` ### Steps to reproduce: * copy this template.yaml * run `sam build --parameter-overrides 'RunningEnvironment=staging'` (you don't need to have actual directories created) * you will get prined something like this: ``` Building codeuri: . runtime: None metadata: {'DockerTag': 'staging', 'DockerContext': 'test/.', 'DockerBuildArgs': {'ENVIRONMENT': 'staging', 'RESULTS_QUEUE_URL': 'arn:aws:lambda:us-east-1:123456789012:function:ResultsQueue'}} functions: ['TestLambda'] ``` * RESULTS_QUEUE_URL argument has an invalid value: `arn:aws:lambda:us-east-1:123456789012:function:ResultsQueue` ### Observed result: <!-- Please provide command output with `--debug` flag set. --> It produces ARN to some unexpected lambda funtion, though it has to be something like: `arn:aws:sqs:your_region:your_account_number:results-staging.fifo` Also it seems to set default region: `eu-east-1` and default account number as `123456789012`. Also !Ref is not evaluating result. ### Additional environment details (Ex: Windows, Mac, Amazon Linux etc) 1. OS: macOS BigSur 11.2.2 2. `sam --version`: SAM CLI, version 1.15.0 3. AWS region: eu-west-1
True
!GetAtt AWS::SQS::Queue.Arn in Lambda Metadata.DockerBuildArgs produce invalid string - ### Description: I am trying to pass queue url to container image on lambda. After running of `sam build` wrong argument is passed to docker. I have the next resources definesd ```yaml Parameters: RunningEnvironment: Type: String AllowedValues: - development - production - staging Description: Enter which environment you like to deploy. Resources: TestLambda: Type: AWS::Serverless::Function Properties: FunctionName: !Sub "test-${RunningEnvironment}" Role: tets-role PackageType: Image MemorySize: 512 Timeout: 30 Metadata: DockerTag: !Ref RunningEnvironment DockerContext: test/. DockerBuildArgs: ENVIRONMENT: !Ref RunningEnvironment RESULTS_QUEUE_URL: !GetAtt ResultsQueue.Arn ResultsQueue: Type: AWS::SQS::Queue Properties: FifoQueue: true QueueName: !Sub "results-${RunningEnvironment}.fifo" VisibilityTimeout: 360 ``` ### Steps to reproduce: * copy this template.yaml * run `sam build --parameter-overrides 'RunningEnvironment=staging'` (you don't need to have actual directories created) * you will get prined something like this: ``` Building codeuri: . runtime: None metadata: {'DockerTag': 'staging', 'DockerContext': 'test/.', 'DockerBuildArgs': {'ENVIRONMENT': 'staging', 'RESULTS_QUEUE_URL': 'arn:aws:lambda:us-east-1:123456789012:function:ResultsQueue'}} functions: ['TestLambda'] ``` * RESULTS_QUEUE_URL argument has an invalid value: `arn:aws:lambda:us-east-1:123456789012:function:ResultsQueue` ### Observed result: <!-- Please provide command output with `--debug` flag set. --> It produces ARN to some unexpected lambda funtion, though it has to be something like: `arn:aws:sqs:your_region:your_account_number:results-staging.fifo` Also it seems to set default region: `eu-east-1` and default account number as `123456789012`. Also !Ref is not evaluating result. ### Additional environment details (Ex: Windows, Mac, Amazon Linux etc) 1. OS: macOS BigSur 11.2.2 2. 
`sam --version`: SAM CLI, version 1.15.0 3. AWS region: eu-west-1
main
getatt aws sqs queue arn in lambda metadata dockerbuildargs produce invalid string description i am trying to pass queue url to container image on lambda after running of sam build wrong argument is passed to docker i have the next resources definesd yaml parameters runningenvironment type string allowedvalues development production staging description enter which environment you like to deploy resources testlambda type aws serverless function properties functionname sub test runningenvironment role tets role packagetype image memorysize timeout metadata dockertag ref runningenvironment dockercontext test dockerbuildargs environment ref runningenvironment results queue url getatt resultsqueue arn resultsqueue type aws sqs queue properties fifoqueue true queuename sub results runningenvironment fifo visibilitytimeout steps to reproduce copy this template yaml run sam build parameter overrides runningenvironment staging you don t need to have actual directories created you will get prined something like this building codeuri runtime none metadata dockertag staging dockercontext test dockerbuildargs environment staging results queue url arn aws lambda us east function resultsqueue functions results queue url argument has an invalid value arn aws lambda us east function resultsqueue observed result it produces arn to some unexpected lambda funtion though it has to be something like arn aws sqs your region your account number results staging fifo also it seems to set default region eu east and default account number as also ref is not evaluating result additional environment details ex windows mac amazon linux etc os macos bigsur sam version sam cli version aws region eu west
1
282,190
21,315,470,744
IssuesEvent
2022-04-16 07:34:51
tzhan98/pe
https://api.github.com/repos/tzhan98/pe
opened
Use cases missing in DG
type.DocumentationBug severity.Medium
Several notable and important use cases are missing such as 1. Adding user 2. Changing job status (vacant/filled) 3. Searching for user with sort/find <!--session: 1650087556122-ba4c5855-3509-40b9-8aa5-1fa46035e0a7--> <!--Version: Web v3.4.2-->
1.0
Use cases missing in DG - Several notable and important use cases are missing such as 1. Adding user 2. Changing job status (vacant/filled) 3. Searching for user with sort/find <!--session: 1650087556122-ba4c5855-3509-40b9-8aa5-1fa46035e0a7--> <!--Version: Web v3.4.2-->
non_main
use cases missing in dg several notable and important use cases are missing such as adding user changing job status vacant filled searching for user with sort find
0
1,920
6,586,342,485
IssuesEvent
2017-09-13 16:54:36
duckduckgo/zeroclickinfo-fathead
https://api.github.com/repos/duckduckgo/zeroclickinfo-fathead
closed
PerlDoc: Ensure only one article exists for functions
Maintainer Submitted Programming Mission Skill: Perl Status: Tolerated Topic: Perl
For some functions (e.g., `xor`), a disambiguation page comes up as there is the 'operator' and the 'function' (which just mentions the operator page): One disambiguation page leads to the correct description, but the other leads to a description that just mentions 'See perlop'. https://duckduckgo.com/?q=xor+perl&ia=meanings --- IA Page: http://duck.co/ia/view/perl_doc [Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @GuiltyDolphin
True
PerlDoc: Ensure only one article exists for functions - For some functions (e.g., `xor`), a disambiguation page comes up as there is the 'operator' and the 'function' (which just mentions the operator page): One disambiguation page leads to the correct description, but the other leads to a description that just mentions 'See perlop'. https://duckduckgo.com/?q=xor+perl&ia=meanings --- IA Page: http://duck.co/ia/view/perl_doc [Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @GuiltyDolphin
main
perldoc ensure only one article exists for functions for some functions e g xor a disambiguation page comes up as there is the operator and the function which just mentions the operator page one disambiguation page leads to the correct description but the other leads to a description that just mentions see perlop ia page guiltydolphin
1
4,808
24,763,060,244
IssuesEvent
2022-10-22 06:25:56
usefulmove/comp
https://api.github.com/repos/usefulmove/comp
opened
Type-independent command generator helpers ( ? )
enhancement help wanted maintainability
It should be possible to construct the command generator helper function family (`commandgen()`, `pop_stack...()`, value parsing, etc.) to be type-independent [T]. This could significantly improve code structure and enhance maintainability.
True
Type-independent command generator helpers ( ? ) - It should be possible to construct the command generator helper function family (`commandgen()`, `pop_stack...()`, value parsing, etc.) to be type-independent [T]. This could significantly improve code structure and enhance maintainability.
main
type independent command generator helpers it should be possible to construct the command generator helper function family commandgen pop stack value parsing etc to be type independent this could significantly improve code structure and enhance maintainability
1
4,878
25,035,475,205
IssuesEvent
2022-11-04 15:40:41
jesus2099/konami-command
https://api.github.com/repos/jesus2099/konami-command
opened
Post MBS server changes cleanup
ninja server change mb_COOL-ENTITY-LINKS minor maintainability
Multiple MBS changes occurred since this script has been written. - Entity icons are now shown in relationships by [MBS-2421](https://tickets.metabrainz.org/browse/MBS-2421) - Annotations now have a proper `.annotation-body` class on `div` in main pages, on `span` in edit pages - Edit notes now have a proper `.edit-note-text` class
True
Post MBS server changes cleanup - Multiple MBS changes occurred since this script has been written. - Entity icons are now shown in relationships by [MBS-2421](https://tickets.metabrainz.org/browse/MBS-2421) - Annotations now have a proper `.annotation-body` class on `div` in main pages, on `span` in edit pages - Edit notes now have a proper `.edit-note-text` class
main
post mbs server changes cleanup multiple mbs changes occurred since this script has been written entity icons are now shown in relationships by annotations now have a proper annotation body class on div in main pages on span in edit pages edit notes now have a proper edit note text class
1
53,828
13,262,361,108
IssuesEvent
2020-08-20 21:39:57
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
closed
wimpsim-reader - example.py error (Trac #2154)
Migrated from Trac combo simulation defect
In [source:IceCube/projects/wimpsim-reader/trunk/resources/examples/example.py#L19] at line 19 there is a likely typing error: ```text 19 outfile = os.path.expandars("$I3_BUILD/wimpsim-reader/resources/example.i3.bz2") ``` I get this error: ```text AttributeError: 'module' object has no attribute 'expandars' ``` I suppose it should be ```expandvars``` instead of ```expandars``` . <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2154">https://code.icecube.wisc.edu/projects/icecube/ticket/2154</a>, reported by grenziand owned by nega</em></summary> <p> ```json { "status": "closed", "changetime": "2018-05-17T15:58:32", "_ts": "1526572712892430", "description": "In [source:IceCube/projects/wimpsim-reader/trunk/resources/examples/example.py#L19]\n\nat line 19 there is a likely typing error:\n\n\n{{{\n19\toutfile = os.path.expandars(\"$I3_BUILD/wimpsim-reader/resources/example.i3.bz2\")\n}}}\n\nI get this error:\n\n{{{\nAttributeError: 'module' object has no attribute 'expandars'\n}}}\n\n\nI suppose it should be {{{expandvars}}} instead of {{{expandars}}} .\n\n\n", "reporter": "grenzi", "cc": "", "resolution": "fixed", "time": "2018-05-17T15:18:51", "component": "combo simulation", "summary": "wimpsim-reader - example.py error", "priority": "normal", "keywords": "", "milestone": "", "owner": "nega", "type": "defect" } ``` </p> </details>
1.0
wimpsim-reader - example.py error (Trac #2154) - In [source:IceCube/projects/wimpsim-reader/trunk/resources/examples/example.py#L19] at line 19 there is a likely typing error: ```text 19 outfile = os.path.expandars("$I3_BUILD/wimpsim-reader/resources/example.i3.bz2") ``` I get this error: ```text AttributeError: 'module' object has no attribute 'expandars' ``` I suppose it should be ```expandvars``` instead of ```expandars``` . <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2154">https://code.icecube.wisc.edu/projects/icecube/ticket/2154</a>, reported by grenziand owned by nega</em></summary> <p> ```json { "status": "closed", "changetime": "2018-05-17T15:58:32", "_ts": "1526572712892430", "description": "In [source:IceCube/projects/wimpsim-reader/trunk/resources/examples/example.py#L19]\n\nat line 19 there is a likely typing error:\n\n\n{{{\n19\toutfile = os.path.expandars(\"$I3_BUILD/wimpsim-reader/resources/example.i3.bz2\")\n}}}\n\nI get this error:\n\n{{{\nAttributeError: 'module' object has no attribute 'expandars'\n}}}\n\n\nI suppose it should be {{{expandvars}}} instead of {{{expandars}}} .\n\n\n", "reporter": "grenzi", "cc": "", "resolution": "fixed", "time": "2018-05-17T15:18:51", "component": "combo simulation", "summary": "wimpsim-reader - example.py error", "priority": "normal", "keywords": "", "milestone": "", "owner": "nega", "type": "defect" } ``` </p> </details>
non_main
wimpsim reader example py error trac in at line there is a likely typing error text outfile os path expandars build wimpsim reader resources example i get this error text attributeerror module object has no attribute expandars i suppose it should be expandvars instead of expandars migrated from json status closed changetime ts description in n nat line there is a likely typing error n n n toutfile os path expandars build wimpsim reader resources example n n ni get this error n n nattributeerror module object has no attribute expandars n n n ni suppose it should be expandvars instead of expandars n n n reporter grenzi cc resolution fixed time component combo simulation summary wimpsim reader example py error priority normal keywords milestone owner nega type defect
0
111,321
4,468,254,703
IssuesEvent
2016-08-25 08:47:35
orbisgis/h2gis
https://api.github.com/repos/orbisgis/h2gis
closed
Snapshot jenkins
enhancement Priority Normal
It will be nice to add a zip file on jenkins with the last H2GIS snapshot. @nicolas-f , @SPalominos
1.0
Snapshot jenkins - It will be nice to add a zip file on jenkins with the last H2GIS snapshot. @nicolas-f , @SPalominos
non_main
snapshot jenkins it will be nice to add a zip file on jenkins with the last snapshot nicolas f spalominos
0
3,402
13,181,790,917
IssuesEvent
2020-08-12 14:50:07
duo-labs/cloudmapper
https://api.github.com/repos/duo-labs/cloudmapper
closed
Problem displaying ECS resources
map unmaintained_functionality
Hey, I am not sure if this is a good place to ask for a bit of help, but here goes... My VPC should, among other things, include: - 2 load balancers - 2 databases - 2 ECS clusters with 2 services and 2 tasks - running on fargate However - anything related to ECS is not shown. I have checked the account-data folder and everything is there. At first, I thought its an issue with security groups between load balancer and ECS so I have changed the security groups to this: - load balancer egress rule only allows traffic to ecs task security group - ecs task ingress rule only allows traffic from load balancer security group Collected, prepared the data again (even removed everything from account-data) and it didnt change anything. I am 100% sure the problem is somewhere on my end. I have tried cloudcraft, cloudviz and lucidchart as an alternative just to see if they would work and they have the same problem. Some of them do not support fargate afaik. Is there something else I can do?
True
Problem displaying ECS resources - Hey, I am not sure if this is a good place to ask for a bit of help, but here goes... My VPC should, among other things, include: - 2 load balancers - 2 databases - 2 ECS clusters with 2 services and 2 tasks - running on fargate However - anything related to ECS is not shown. I have checked the account-data folder and everything is there. At first, I thought its an issue with security groups between load balancer and ECS so I have changed the security groups to this: - load balancer egress rule only allows traffic to ecs task security group - ecs task ingress rule only allows traffic from load balancer security group Collected, prepared the data again (even removed everything from account-data) and it didnt change anything. I am 100% sure the problem is somewhere on my end. I have tried cloudcraft, cloudviz and lucidchart as an alternative just to see if they would work and they have the same problem. Some of them do not support fargate afaik. Is there something else I can do?
main
problem displaying ecs resources hey i am not sure if this is a good place to ask for a bit of help but here goes my vpc should among other things include load balancers databases ecs clusters with services and tasks running on fargate however anything related to ecs is not shown i have checked the account data folder and everything is there at first i thought its an issue with security groups between load balancer and ecs so i have changed the security groups to this load balancer egress rule only allows traffic to ecs task security group ecs task ingress rule only allows traffic from load balancer security group collected prepared the data again even removed everything from account data and it didnt change anything i am sure the problem is somewhere on my end i have tried cloudcraft cloudviz and lucidchart as an alternative just to see if they would work and they have the same problem some of them do not support fargate afaik is there something else i can do
1
4,086
19,296,171,693
IssuesEvent
2021-12-12 16:21:01
gorilla/mux
https://api.github.com/repos/gorilla/mux
opened
⚠️ The Gorilla Toolkit is Looking for a New Maintainer
help wanted waiting on new maintainer
The Gorilla Toolkit is looking for a new maintainer (or maintainers, plural). As the last standing maintainer of the project, I no longer have time to fully dedicate to maintaining the libraries here, and The major libraries - **mux** (https://github.com/gorilla/mux), **schema** (https://github.com/gorilla/schema), **handlers** (https://github.com/gorilla/handlers), and **sessions** (https://github.com/gorilla/sessions), are all reasonably mature libraries, but ongoing stewardship around bug triage, feature enhancements, and potential "version 2.0s" are all possibilities. * Have a demonstrated history of OSS contributions. This is important, as you need to be trustworthy: _no_ maintainer is better than an adversarial maintainer! * Ideally, you actively contribute for 3-6 months, I merge after you review, and you gain the commit bit on the relevant repos after that period and/or active engagement on your part. * I transition you to admin of the project. > Note: I don't expect this to be quick or easy - the **websocket** library, with 16k stars & 15k unique clones per week, has been [looking for a new maintainer](https://github.com/gorilla/websocket/issues/370) 3.5+ years, and has yet to have anyone reliably stick. If I don't have any luck finding new maintainer(s) in the next 6 months or so, it's likely I'll mark these projects as in maintenance mode only and [archive](https://docs.github.com/en/repositories/archiving-a-github-repository/archiving-repositories) the repos. Please keep the replies on-topic.
True
⚠️ The Gorilla Toolkit is Looking for a New Maintainer - The Gorilla Toolkit is looking for a new maintainer (or maintainers, plural). As the last standing maintainer of the project, I no longer have time to fully dedicate to maintaining the libraries here, and The major libraries - **mux** (https://github.com/gorilla/mux), **schema** (https://github.com/gorilla/schema), **handlers** (https://github.com/gorilla/handlers), and **sessions** (https://github.com/gorilla/sessions), are all reasonably mature libraries, but ongoing stewardship around bug triage, feature enhancements, and potential "version 2.0s" are all possibilities. * Have a demonstrated history of OSS contributions. This is important, as you need to be trustworthy: _no_ maintainer is better than an adversarial maintainer! * Ideally, you actively contribute for 3-6 months, I merge after you review, and you gain the commit bit on the relevant repos after that period and/or active engagement on your part. * I transition you to admin of the project. > Note: I don't expect this to be quick or easy - the **websocket** library, with 16k stars & 15k unique clones per week, has been [looking for a new maintainer](https://github.com/gorilla/websocket/issues/370) 3.5+ years, and has yet to have anyone reliably stick. If I don't have any luck finding new maintainer(s) in the next 6 months or so, it's likely I'll mark these projects as in maintenance mode only and [archive](https://docs.github.com/en/repositories/archiving-a-github-repository/archiving-repositories) the repos. Please keep the replies on-topic.
main
⚠️ the gorilla toolkit is looking for a new maintainer the gorilla toolkit is looking for a new maintainer or maintainers plural as the last standing maintainer of the project i no longer have time to fully dedicate to maintaining the libraries here and the major libraries mux schema handlers and sessions are all reasonably mature libraries but ongoing stewardship around bug triage feature enhancements and potential version are all possibilities have a demonstrated history of oss contributions this is important as you need to be trustworthy no maintainer is better than an adversarial maintainer ideally you actively contribute for months i merge after you review and you gain the commit bit on the relevant repos after that period and or active engagement on your part i transition you to admin of the project note i don t expect this to be quick or easy the websocket library with stars unique clones per week has been years and has yet to have anyone reliably stick if i don t have any luck finding new maintainer s in the next months or so it s likely i ll mark these projects as in maintenance mode only and the repos please keep the replies on topic
1
5,357
26,979,188,287
IssuesEvent
2023-02-09 11:50:28
backdrop-ops/contrib
https://api.github.com/repos/backdrop-ops/contrib
closed
Contrib application: kiamlaluno (graphicsmagick module)
Maintainer application
I would like to contribute a project. **(option 1) The name of your module, theme, or layout** GraphisMagick module ## (option 1) Please note these 3 requirements for new contrib projects: - [x] Include a README.md file containing license and maintainer information. You can use this example: https://raw.githubusercontent.com/backdrop-ops/contrib/master/examples/README.md - [x] Include a LICENSE.txt file. You can use this example: https://raw.githubusercontent.com/backdrop-ops/contrib/master/examples/LICENSE.txt. - [x] If porting a Drupal 7 project, Maintain the Git history from Drupal. **Post a link to your new Backdrop project under your own GitHub account (option 1)** https://github.com/kiamlaluno/graphicsmagick **If you have chosen option 2 or 1 above, do you agree to the [Backdrop Contributed Project Agreement](https://github.com/backdrop-ops/contrib#backdrop-contributed-project-agreement)?** Yes, I agree. <!-- (option 1) Once we have a chance to review your project, we will check for the 3 requirements at the top of this issue. If those requirements are met, you will be invited to the @backdrop-contrib group. At that point you will be able to transfer the project. --> <!-- (option 1) Please note that we may also include additional feedback in the code review, but anything else is only intended to be helpful, and is NOT a requirement for joining the contrib group. --> ## Further information I am the maintainer of the Drupal 7 module.
True
Contrib application: kiamlaluno (graphicsmagick module) - I would like to contribute a project. **(option 1) The name of your module, theme, or layout** GraphisMagick module ## (option 1) Please note these 3 requirements for new contrib projects: - [x] Include a README.md file containing license and maintainer information. You can use this example: https://raw.githubusercontent.com/backdrop-ops/contrib/master/examples/README.md - [x] Include a LICENSE.txt file. You can use this example: https://raw.githubusercontent.com/backdrop-ops/contrib/master/examples/LICENSE.txt. - [x] If porting a Drupal 7 project, Maintain the Git history from Drupal. **Post a link to your new Backdrop project under your own GitHub account (option 1)** https://github.com/kiamlaluno/graphicsmagick **If you have chosen option 2 or 1 above, do you agree to the [Backdrop Contributed Project Agreement](https://github.com/backdrop-ops/contrib#backdrop-contributed-project-agreement)?** Yes, I agree. <!-- (option 1) Once we have a chance to review your project, we will check for the 3 requirements at the top of this issue. If those requirements are met, you will be invited to the @backdrop-contrib group. At that point you will be able to transfer the project. --> <!-- (option 1) Please note that we may also include additional feedback in the code review, but anything else is only intended to be helpful, and is NOT a requirement for joining the contrib group. --> ## Further information I am the maintainer of the Drupal 7 module.
main
contrib application kiamlaluno graphicsmagick module i would like to contribute a project option the name of your module theme or layout graphismagick module option please note these requirements for new contrib projects include a readme md file containing license and maintainer information you can use this example include a license txt file you can use this example if porting a drupal project maintain the git history from drupal post a link to your new backdrop project under your own github account option if you have chosen option or above do you agree to the yes i agree further information i am the maintainer of the drupal module
1
48,289
7,403,735,958
IssuesEvent
2018-03-20 00:28:02
impulsesjs/impulses
https://api.github.com/repos/impulsesjs/impulses
opened
Create Queue documentation
documentation
As a contributor and developer, I need to have a clear understanding of why, how and what is the `Queue Class` Produce a document that we can export and use in the documentation site explaining the reason and how it is expected to work as well o what it accomplishes. It would be nice to have illustrations if possible (visual is always better and clear).
1.0
Create Queue documentation - As a contributor and developer, I need to have a clear understanding of why, how and what is the `Queue Class` Produce a document that we can export and use in the documentation site explaining the reason and how it is expected to work as well o what it accomplishes. It would be nice to have illustrations if possible (visual is always better and clear).
non_main
create queue documentation as a contributor and developer i need to have a clear understanding of why how and what is the queue class produce a document that we can export and use in the documentation site explaining the reason and how it is expected to work as well o what it accomplishes it would be nice to have illustrations if possible visual is always better and clear
0
688,856
23,597,765,070
IssuesEvent
2022-08-23 21:04:52
GoogleContainerTools/skaffold
https://api.github.com/repos/GoogleContainerTools/skaffold
closed
`skaffold.yaml` schema should accept annotations
priority/p2 area/diagnose kind/enhancement
<!-- Issues without logs and details are more complicated to fix. Please help us by filling the template below! --> ### Expected behavior I should be free to add annotations to the root metadata of a `skaffold.yaml` file. My intent is to annotate my skaffold file with `config.kubernetes.io/local-config: "true`. ### Actual behavior According the the KRM schema: `"Property metadata.annotations is not allowed"` ### Information - Skaffold version: v1.39.1 - Operating system: (`uname --all`): `Linux 🐀 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux` - Installed via: skaffold.dev - Contents of skaffold.yaml: ```yaml apiVersion: skaffold/v2beta29 kind: Config metadata: name: hula annotations: config.kubernetes.io/local-config: "true" build: {} deploy: {} ``` ### Steps to reproduce the behavior 1. Add metadata.annotations to any schema-validated `skaffold.yaml` file. 2. ` skaffold diagnose` > error parsing skaffold configuration file: unable to parse config: yaml: unmarshal errors: > line 31: field annotations not found in type latest.Metadata
1.0
`skaffold.yaml` schema should accept annotations - <!-- Issues without logs and details are more complicated to fix. Please help us by filling the template below! --> ### Expected behavior I should be free to add annotations to the root metadata of a `skaffold.yaml` file. My intent is to annotate my skaffold file with `config.kubernetes.io/local-config: "true`. ### Actual behavior According the the KRM schema: `"Property metadata.annotations is not allowed"` ### Information - Skaffold version: v1.39.1 - Operating system: (`uname --all`): `Linux 🐀 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux` - Installed via: skaffold.dev - Contents of skaffold.yaml: ```yaml apiVersion: skaffold/v2beta29 kind: Config metadata: name: hula annotations: config.kubernetes.io/local-config: "true" build: {} deploy: {} ``` ### Steps to reproduce the behavior 1. Add metadata.annotations to any schema-validated `skaffold.yaml` file. 2. ` skaffold diagnose` > error parsing skaffold configuration file: unable to parse config: yaml: unmarshal errors: > line 31: field annotations not found in type latest.Metadata
non_main
skaffold yaml schema should accept annotations issues without logs and details are more complicated to fix please help us by filling the template below expected behavior i should be free to add annotations to the root metadata of a skaffold yaml file my intent is to annotate my skaffold file with config kubernetes io local config true actual behavior according the the krm schema property metadata annotations is not allowed information skaffold version operating system uname all linux 🐀 microsoft standard smp wed mar utc gnu linux installed via skaffold dev contents of skaffold yaml yaml apiversion skaffold kind config metadata name hula annotations config kubernetes io local config true build deploy steps to reproduce the behavior add metadata annotations to any schema validated skaffold yaml file skaffold diagnose error parsing skaffold configuration file unable to parse config yaml unmarshal errors line field annotations not found in type latest metadata
0
1,339
5,721,485,036
IssuesEvent
2017-04-20 06:46:50
tomchentw/react-google-maps
https://api.github.com/repos/tomchentw/react-google-maps
closed
Cant drag or zoom map when directions are on
CALL_FOR_MAINTAINERS
I have the directions showing, they are working, but when you drag or zoom the map it defaults back to where the directions are. It is like the map is locked on the directions. GIF to show => http://g.recordit.co/R3sDpLyqB6.gif ``` <GoogleMapLoader containerElement={ <div {...this.props} style={{ height: '100%' }} /> } googleMapElement={ <GoogleMap ref='map' defaultZoom={14} defaultCenter={{lat: this.props.center[0], lng: this.props.center[1]}} onDragend={this.handleBoundsChange} onIdle={this.handleBoundsChange}> { this.props.directions ? <DirectionsRenderer directions={this.props.directions} /> : null } { (this.props.directions === null) && <MarkerClusterer averageCenter enableRetinaIcons styles={clusterStyles} gridSize= {25} > { this.props.locations.map((location, i) => { const geometry = location.address.geometry.location let icon = '/images/marker.png' if (location._id === this.state.hoverMarker || location._id === this.props.locationId) { icon = '/images/marker-hover.png' } return ( <Marker position={{ lat: geometry.lat, lng: geometry.lng }} onClick={this.props.openModal.bind(this, location._id, 'location')} onMouseover={this.markerHoverOn.bind(this, location._id)} onMouseout={this.markerHoverOn.bind(this, null)} icon={icon} key={i} /> ) })} </MarkerClusterer> || null } </GoogleMap> } /> ` ```
True
Cant drag or zoom map when directions are on - I have the directions showing, they are working, but when you drag or zoom the map it defaults back to where the directions are. It is like the map is locked on the directions. GIF to show => http://g.recordit.co/R3sDpLyqB6.gif ``` <GoogleMapLoader containerElement={ <div {...this.props} style={{ height: '100%' }} /> } googleMapElement={ <GoogleMap ref='map' defaultZoom={14} defaultCenter={{lat: this.props.center[0], lng: this.props.center[1]}} onDragend={this.handleBoundsChange} onIdle={this.handleBoundsChange}> { this.props.directions ? <DirectionsRenderer directions={this.props.directions} /> : null } { (this.props.directions === null) && <MarkerClusterer averageCenter enableRetinaIcons styles={clusterStyles} gridSize= {25} > { this.props.locations.map((location, i) => { const geometry = location.address.geometry.location let icon = '/images/marker.png' if (location._id === this.state.hoverMarker || location._id === this.props.locationId) { icon = '/images/marker-hover.png' } return ( <Marker position={{ lat: geometry.lat, lng: geometry.lng }} onClick={this.props.openModal.bind(this, location._id, 'location')} onMouseover={this.markerHoverOn.bind(this, location._id)} onMouseout={this.markerHoverOn.bind(this, null)} icon={icon} key={i} /> ) })} </MarkerClusterer> || null } </GoogleMap> } /> ` ```
main
cant drag or zoom map when directions are on i have the directions showing they are working but when you drag or zoom the map it defaults back to where the directions are it is like the map is locked on the directions gif to show googlemaploader containerelement div this props style height googlemapelement googlemap ref map defaultzoom defaultcenter lat this props center lng this props center ondragend this handleboundschange onidle this handleboundschange this props directions null this props directions null markerclusterer averagecenter enableretinaicons styles clusterstyles gridsize this props locations map location i const geometry location address geometry location let icon images marker png if location id this state hovermarker location id this props locationid icon images marker hover png return marker position lat geometry lat lng geometry lng onclick this props openmodal bind this location id location onmouseover this markerhoveron bind this location id onmouseout this markerhoveron bind this null icon icon key i null
1
40
2,587,882,312
IssuesEvent
2015-02-17 21:16:14
spyder-ide/spyder
https://api.github.com/repos/spyder-ide/spyder
closed
Setup issue autolinking from Bitbucket to Google Code
1 star bug done Easy imported Maintainability
_From [techtonik@gmail.com](https://code.google.com/u/techtonik@gmail.com/) on 2014-08-25T09:16:08Z_ What steps will reproduce the problem? Test like " issue `#1313` " on Bitbucket is not linked to Google Code tracker Carlos, you seem to be the only active admin, so can you add this? The process is described here - https://bitbucket.org/techtonik/scons/issue/3/setup-bitbucket-autolinking _Original issue: http://code.google.com/p/spyderlib/issues/detail?id=1944_
True
Setup issue autolinking from Bitbucket to Google Code - _From [techtonik@gmail.com](https://code.google.com/u/techtonik@gmail.com/) on 2014-08-25T09:16:08Z_ What steps will reproduce the problem? Test like " issue `#1313` " on Bitbucket is not linked to Google Code tracker Carlos, you seem to be the only active admin, so can you add this? The process is described here - https://bitbucket.org/techtonik/scons/issue/3/setup-bitbucket-autolinking _Original issue: http://code.google.com/p/spyderlib/issues/detail?id=1944_
main
setup issue autolinking from bitbucket to google code from on what steps will reproduce the problem test like issue on bitbucket is not linked to google code tracker carlos you seem to be the only active admin so can you add this the process is described here original issue
1
81,330
30,802,759,014
IssuesEvent
2023-08-01 03:45:23
idaholab/moose
https://api.github.com/repos/idaholab/moose
opened
Unhelpful "not present in InputParams" error message
T: defect P: normal
## Bug Description <!--A clear and concise description of the problem (Note: A missing feature is not a bug).--> I'm not even really sure what I did, but I'm getting ``` *** ERROR *** param 'boundaries' not present in InputParams ``` from meshing which looks like a problem happened when I removed `external_boundary_id` from a bunch of my `SimpleHexagonGenerator` blocks in the attached input [mhtgr_2d_v14.txt](https://github.com/idaholab/moose/files/12224368/mhtgr_2d_v14.txt) I would expect a normal error message of `mesh block [xyz] is missing a required parameter` ## Steps to Reproduce <!--Steps to reproduce the behavior (input file, or modifications to an existing input file, etc.)--> Run the attached input above ## Impact <!--Does this prevent you from getting your work done, or is it more of an annoyance?--> I can get back to something running by undoing whatever changes I made, but the error messages is not helpful.
1.0
Unhelpful "not present in InputParams" error message - ## Bug Description <!--A clear and concise description of the problem (Note: A missing feature is not a bug).--> I'm not even really sure what I did, but I'm getting ``` *** ERROR *** param 'boundaries' not present in InputParams ``` from meshing, which looks like a problem that happened when I removed `external_boundary_id` from a bunch of my `SimpleHexagonGenerator` blocks in the attached input [mhtgr_2d_v14.txt](https://github.com/idaholab/moose/files/12224368/mhtgr_2d_v14.txt) I would expect a normal error message of `mesh block [xyz] is missing a required parameter` ## Steps to Reproduce <!--Steps to reproduce the behavior (input file, or modifications to an existing input file, etc.)--> Run the attached input above ## Impact <!--Does this prevent you from getting your work done, or is it more of an annoyance?--> I can get back to something running by undoing whatever changes I made, but the error message is not helpful.
non_main
unhelpful not present in inputparams error message bug description i m not even really sure what i did but i m getting error param boundaries not present in inputparams from meshing which looks like a problem happened when i removed external boundary id from a bunch of my simplehexagongenerator blocks in the attached input i would expect a normal error message of mesh block is missing a required parameter steps to reproduce run the attached input above impact i can get back to something running by undoing whatever changes i made but the error messages is not helpful
0
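The record above asks for an error that names the offending mesh block and missing parameter. As a generic illustration only (MOOSE's actual InputParameters API is C++, and every name below is hypothetical), the requested pattern looks like:

```typescript
// Look up a required parameter and, if absent, fail with a message that
// names both the block and the parameter, as the report requests.
// (Hypothetical sketch; not MOOSE's real C++ InputParameters API.)
function getRequiredParam(
  block: string,
  params: Record<string, string>,
  key: string,
): string {
  const value = params[key];
  if (value === undefined) {
    throw new Error(
      `mesh block [${block}] is missing a required parameter: '${key}'`,
    );
  }
  return value;
}
```

Compared with a bare `param 'boundaries' not present in InputParams`, the thrown message tells the user which input block to fix.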
3,403
13,181,828,427
IssuesEvent
2020-08-12 14:53:18
duo-labs/cloudmapper
https://api.github.com/repos/duo-labs/cloudmapper
closed
Show service/port on map
map unmaintained_functionality
It would be nice if the graph would show the service/port number. This would be a great help for threat analysis.
True
Show service/port on map - It would be nice if the graph would show the service/port number. This would be a great help for threat analysis.
main
show service port on map it would be nice if the graph would show service port number this would a great help for threat analysis
1
3,298
12,695,197,607
IssuesEvent
2020-06-22 08:00:53
short-d/short
https://api.github.com/repos/short-d/short
opened
[Refactor] Reorganize the routing package
Go maintainability refactor
**What is frustrating you?** The [routing package](https://github.com/short-d/short/tree/master/backend/app/adapter/routing) has become a bit confusing as it has a lot of information and it needs reorganization. **Your solution** Reorganize the package! Also, the routing package should respect the single-responsibility principle; therefore, the handlers should be a separate package.
True
[Refactor] Reorganize the routing package - **What is frustrating you?** The [routing package](https://github.com/short-d/short/tree/master/backend/app/adapter/routing) has become a bit confusing as it has a lot of information and it needs reorganization. **Your solution** Reorganize the package! Also, the routing package should respect the single-responsibility principle; therefore, the handlers should be a separate package.
main
reorganize the routing package what is frustrating you the has become a bit confusing as it has a lot of information and it needs reorganization your solution reorganize the package also the routing package should respect the single responsibility principle therefore the handles should be a separate package
1
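The record above invokes the single-responsibility principle to argue that route wiring and handler logic belong in separate packages. As an illustration only (the real project is Go; this is a language-agnostic sketch with invented names), the split looks like:

```typescript
// Handlers: pure request -> response logic, independent of any routing.
type Req = { path: string };
type Res = { status: number; body: string };
type Handler = (req: Req) => Res;

const healthHandler: Handler = () => ({ status: 200, body: "ok" });
const notFound: Handler = (req) => ({ status: 404, body: `no route for ${req.path}` });

// Routing: only knows how to map paths onto handlers, nothing else.
function makeRouter(routes: Record<string, Handler>): Handler {
  return (req) => (routes[req.path] ?? notFound)(req);
}
```

Because the router never inspects request bodies or builds responses itself, handlers can be tested, moved, or replaced without touching the routing table.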
214,095
24,040,569,726
IssuesEvent
2022-09-16 01:02:07
jgithaiga/jgithaiga.github.io
https://api.github.com/repos/jgithaiga/jgithaiga.github.io
opened
CVE-2022-3224 (High) detected in parse-url-6.0.0.tgz
security vulnerability
## CVE-2022-3224 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>parse-url-6.0.0.tgz</b></p></summary> <p>An advanced url parser supporting git urls too.</p> <p>Library home page: <a href="https://registry.npmjs.org/parse-url/-/parse-url-6.0.0.tgz">https://registry.npmjs.org/parse-url/-/parse-url-6.0.0.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/parse-url/package.json</p> <p> Dependency Hierarchy: - gatsby-plugin-sharp-4.10.0.tgz (Root Library) - gatsby-telemetry-3.10.0.tgz - git-up-4.0.5.tgz - :x: **parse-url-6.0.0.tgz** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Misinterpretation of Input in GitHub repository ionicabizau/parse-url prior to 8.1.0. <p>Publish Date: 2022-09-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-3224>CVE-2022-3224</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.4</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-3224">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-3224</a></p> <p>Release Date: 2022-09-15</p> <p>Fix Resolution: parse-url - 8.1.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-3224 (High) detected in parse-url-6.0.0.tgz - ## CVE-2022-3224 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>parse-url-6.0.0.tgz</b></p></summary> <p>An advanced url parser supporting git urls too.</p> <p>Library home page: <a href="https://registry.npmjs.org/parse-url/-/parse-url-6.0.0.tgz">https://registry.npmjs.org/parse-url/-/parse-url-6.0.0.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/parse-url/package.json</p> <p> Dependency Hierarchy: - gatsby-plugin-sharp-4.10.0.tgz (Root Library) - gatsby-telemetry-3.10.0.tgz - git-up-4.0.5.tgz - :x: **parse-url-6.0.0.tgz** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Misinterpretation of Input in GitHub repository ionicabizau/parse-url prior to 8.1.0. <p>Publish Date: 2022-09-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-3224>CVE-2022-3224</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.4</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-3224">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-3224</a></p> <p>Release Date: 2022-09-15</p> <p>Fix Resolution: parse-url - 8.1.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_main
cve high detected in parse url tgz cve high severity vulnerability vulnerable library parse url tgz an advanced url parser supporting git urls too library home page a href path to dependency file package json path to vulnerable library node modules parse url package json dependency hierarchy gatsby plugin sharp tgz root library gatsby telemetry tgz git up tgz x parse url tgz vulnerable library found in base branch master vulnerability details misinterpretation of input in github repository ionicabizau parse url prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution parse url step up your open source security game with mend
0
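The advisory record above states that parse-url versions prior to 8.1.0 are affected. A tiny self-contained version comparison (a stand-in for a real semver library, not part of any tool mentioned in the record) shows the check an auditor would apply:

```typescript
// Split "6.0.0" into [6, 0, 0]; missing or non-numeric parts become 0.
function parseVersion(v: string): number[] {
  return v.split(".").map((part) => parseInt(part, 10) || 0);
}

// A version is vulnerable when it sorts strictly before the fixed release.
function isVulnerable(installed: string, fixedIn: string): boolean {
  const a = parseVersion(installed);
  const b = parseVersion(fixedIn);
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    const x = a[i] ?? 0;
    const y = b[i] ?? 0;
    if (x !== y) return x < y;
  }
  return false; // equal to the fixed release means patched
}
```

Under this check the record's dependency, parse-url 6.0.0, sorts before the fix at 8.1.0 and is flagged.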
344,801
24,828,153,201
IssuesEvent
2022-10-25 23:08:10
FuelLabs/fuel-ui
https://api.github.com/repos/FuelLabs/fuel-ui
closed
Update Radix Icons to Phosphor Icons in docs
documentation enhancement good first issue
### Motivation The docs still link Radix Icons which is incorrect. This causes confusion when trying to use icons within the design system. ### Usage example _No response_ ### Possible implementations _No response_
1.0
Update Radix Icons to Phosphor Icons in docs - ### Motivation The docs still link Radix Icons which is incorrect. This causes confusion when trying to use icons within the design system. ### Usage example _No response_ ### Possible implementations _No response_
non_main
update radix icons to phosphor icons in docs motivation the docs still link radix icons which is incorrect this causes confusion when trying to use icons within the design system usage example no response possible implementations no response
0
1,167
5,079,125,350
IssuesEvent
2016-12-28 18:29:06
backdrop-ops/contrib
https://api.github.com/repos/backdrop-ops/contrib
closed
Backdrop Contributed Project Group Application - XML Sitemap
Maintainer application
I would like to join the Backdrop contrib group via a port of the Drupal XML sitemap module: [https://github.com/alexfinnarn/backdrop-xml-sitemap](https://github.com/alexfinnarn/backdrop-xml-sitemap) I think the following check boxes have all been accounted for: - [x] Include a LICENSE.txt file that indicates the code is GPL v2. You can use this copy. - [x] Include a README.md file that includes license and maintainer information. You can use this example. - [x] Maintain the Git history from Drupal 7. See this article. I have also read and agreed to the [Backdrop Contributed Project Agreement](https://github.com/backdrop-ops/contrib#backdrop-contributed-project-agreement) while taking off my cap and placing my hand over my heart. For the good of the land, can I join your group?
True
Backdrop Contributed Project Group Application - XML Sitemap - I would like to join the Backdrop contrib group via a port of the Drupal XML sitemap module: [https://github.com/alexfinnarn/backdrop-xml-sitemap](https://github.com/alexfinnarn/backdrop-xml-sitemap) I think the following check boxes have all been accounted for: - [x] Include a LICENSE.txt file that indicates the code is GPL v2. You can use this copy. - [x] Include a README.md file that includes license and maintainer information. You can use this example. - [x] Maintain the Git history from Drupal 7. See this article. I have also read and agreed to the [Backdrop Contributed Project Agreement](https://github.com/backdrop-ops/contrib#backdrop-contributed-project-agreement) while taking off my cap and placing my hand over my heart. For the good of the land, can I join your group?
main
backdrop contributed project group application xml sitemap i would like to join the backdrop contrib group via a port of the drupal xml sitemap module i think the following check boxes have all been accounted for include a license txt file that indicates the code is gpl you can use this copy include a readme md file that includes license and maintainer information you can use this example maintain the git history from drupal see this article i have also read and agreed to the while taking off my cap and placing my hand over my heart for the good of the land can i join your group
1
5,051
25,883,173,339
IssuesEvent
2022-12-14 12:46:14
centerofci/mathesar
https://api.github.com/repos/centerofci/mathesar
opened
Improve handling of server errors when loading the Record Page
type: bug work: frontend status: ready restricted: maintainers
The Record Page has a few places where we make API requests and don't adequately handle server errors: - [Here](https://github.com/centerofci/mathesar/blob/e20fc43314684764585a8bc0bb2bbbc73a7a23f2/mathesar_ui/src/routes/RecordPageRoute.svelte#L13) ```ts $: record = new RecordStore({ table, recordId }); ``` The `RecordStore` constructor calls `RecordStore.fetch` which fetches the record data. If this fails, then the page loads, giving the user the impression that all fields in the record are blank. - [Here](https://github.com/centerofci/mathesar/blob/e20fc43314684764585a8bc0bb2bbbc73a7a23f2/mathesar_ui/src/pages/record/RecordPage.svelte#L16) ```ts $: tableStructure = new TableStructure({ id: table.id, abstractTypesMap: $currentDbAbstractTypes.data, }); ``` The `TableStructure` constructor instantiates other stores which fetch data within their constructors. - [Here](https://github.com/centerofci/mathesar/blob/e20fc43314684764585a8bc0bb2bbbc73a7a23f2/mathesar_ui/src/pages/record/RecordPageContent.svelte#L83) ```svelte {#await getJoinableTablesResult(table.id)} <RecordPageLoadingSpinner /> {:then joinableTablesResult} <Widgets {joinableTablesResult} {recordId} recordSummary={$summary} /> {/await} ``` The `getJoinableTablesResult` function might throw an error. We should handle this. We should also eliminate code duplication with the `getJoinableTablesResult` function in `src/stores/tables.ts`.
True
Improve handling of server errors when loading the Record Page - The Record Page has a few places where we make API requests and don't adequately handle server errors: - [Here](https://github.com/centerofci/mathesar/blob/e20fc43314684764585a8bc0bb2bbbc73a7a23f2/mathesar_ui/src/routes/RecordPageRoute.svelte#L13) ```ts $: record = new RecordStore({ table, recordId }); ``` The `RecordStore` constructor calls `RecordStore.fetch` which fetches the record data. If this fails, then the page loads, giving the user the impression that all fields in the record are blank. - [Here](https://github.com/centerofci/mathesar/blob/e20fc43314684764585a8bc0bb2bbbc73a7a23f2/mathesar_ui/src/pages/record/RecordPage.svelte#L16) ```ts $: tableStructure = new TableStructure({ id: table.id, abstractTypesMap: $currentDbAbstractTypes.data, }); ``` The `TableStructure` constructor instantiates other stores which fetch data within their constructors. - [Here](https://github.com/centerofci/mathesar/blob/e20fc43314684764585a8bc0bb2bbbc73a7a23f2/mathesar_ui/src/pages/record/RecordPageContent.svelte#L83) ```svelte {#await getJoinableTablesResult(table.id)} <RecordPageLoadingSpinner /> {:then joinableTablesResult} <Widgets {joinableTablesResult} {recordId} recordSummary={$summary} /> {/await} ``` The `getJoinableTablesResult` function might throw an error. We should handle this. We should also eliminate code duplication with the `getJoinableTablesResult` function in `src/stores/tables.ts`.
main
improve handling of server errors when loading the record page rd page has a few places where we make api requests and don t adequately handle server errors ts record new recordstore table recordid the recordstore constructor calls recordstore fetch which fetches the record data if this fails then the page loads giving the user the impression that all fields in the record are blank ts tablestructure new tablestructure id table id abstracttypesmap currentdbabstracttypes data the tablestructure constructor instantiates other stores which fetch data within their constructors svelte await getjoinabletablesresult table id then joinabletablesresult await the getjoinabletablesresult function might throw an error we should handle this we should also eliminate code duplication with the getjoinabletablesresult function in src stores tables ts
1
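The record above lists constructors whose fetches fail silently, leaving the page rendered as if every field were blank. One common remedy is to fold the result into an explicit error state. A synchronous TypeScript sketch of that pattern (the real stores are async, and these names are assumptions, not Mathesar's actual API):

```typescript
// Either the fetched value or a user-visible error, never silent blanks.
type FetchState<T> =
  | { status: "resolved"; value: T }
  | { status: "error"; message: string };

// Run a possibly-throwing fetcher and capture failure as data instead of
// letting the page render empty fields.
function toFetchState<T>(run: () => T): FetchState<T> {
  try {
    return { status: "resolved", value: run() };
  } catch (e) {
    return {
      status: "error",
      message: e instanceof Error ? e.message : String(e),
    };
  }
}
```

A component can then branch on `status` and show an error banner rather than blank record fields.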
61,175
8,493,923,754
IssuesEvent
2018-10-28 16:47:21
wework/speccy
https://api.github.com/repos/wework/speccy
closed
Site does not include path-keys-no-trailing-slash
documentation
<!-- Provide a general summary of the issue in the Title above --> ## Detailed description Speccy is detecting a path key with a trailing slash. It is linking to https://speccy.io/rules/#path-keys-no-trailing-slash which does not exist.
1.0
Site does not include path-keys-no-trailing-slash - <!-- Provide a general summary of the issue in the Title above --> ## Detailed description Speccy is detecting a path key with a trailing slash. It is linking to https://speccy.io/rules/#path-keys-no-trailing-slash which does not exist.
non_main
site does not include path keys no trailing slash detailed description speccy is detecting a path key with a trailing slash it is linking to which does not exist
0
636
4,152,410,986
IssuesEvent
2016-06-16 00:49:48
duckduckgo/zeroclickinfo-spice
https://api.github.com/repos/duckduckgo/zeroclickinfo-spice
closed
Tides: Needs more triggers
Maintainer Input Requested PR Received
This should trigger for things like "tides for 90210" and "tide in 90210", and perhaps "tide times in 90210". ------ IA Page: http://duck.co/ia/view/tides [Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @mattr555
True
Tides: Needs more triggers - This should trigger for things like "tides for 90210" and "tide in 90210", and perhaps "tide times in 90210". ------ IA Page: http://duck.co/ia/view/tides [Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @mattr555
main
tides needs more triggers this should trigger for things like tides for and tide in and perhaps tide times in ia page
1
1,087
4,934,496,947
IssuesEvent
2016-11-28 19:15:13
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
The module service will fail in check mode if the target service is not yet installed on server.
affects_2.2 bug_report in progress waiting_on_maintainer
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible module service ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION no settings in ansible.cfg ##### OS / ENVIRONMENT Debian/wheezy ##### SUMMARY The module service will fail in check mode if the target service is not yet installed on the server. For instance, my playbook installs apache2 and later restarts apache2. Running ansible-playbook --check on a server without apache2 will fail. This may be a duplicate but I did not find any issue on this topic. ##### STEPS TO REPRODUCE This Dockerfile will reproduce this issue: https://github.com/pgrange/ansible_service_check_mode_issue Running this playbook in check mode on a brand new server will reproduce this issue: ``` - hosts: all tasks: - apt: name=apache2 - name: no need but I would like to restart apache service: name=apache2 state=restarted ``` ##### EXPECTED RESULTS ansible-playbook --check should not fail if a service to restart is not already installed on the server. Why not raise a warning that we are trying to restart an unknown service and leave it at that.
##### ACTUAL RESULTS What actually happens is that running ansible-playbook in check mode fails: ``` PLAYBOOK: apache.yml *********************************************************** 1 plays in /tmp/ansible/apache.yml PLAY [all] ********************************************************************* TASK [setup] ******************************************************************* Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/system/setup.py <localhost> ESTABLISH LOCAL CONNECTION FOR USER: root <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1479828044.78-237878029370849 `" && echo ansible-tmp-1479828044.78-237878029370849="` echo $HOME/.ansible/tmp/ansible-tmp-1479828044.78-237878029370849 `" ) && sleep 0' <localhost> PUT /tmp/tmpbW6GVG TO /root/.ansible/tmp/ansible-tmp-1479828044.78-237878029370849/setup.py <localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1479828044.78-237878029370849/ /root/.ansible/tmp/ansible-tmp-1479828044.78-237878029370849/setup.py && sleep 0' <localhost> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1479828044.78-237878029370849/setup.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1479828044.78-237878029370849/" > /dev/null 2>&1 && sleep 0' ok: [localhost] TASK [apt] ********************************************************************* task path: /tmp/ansible/apache.yml:4 Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/packaging/os/apt.py <localhost> ESTABLISH LOCAL CONNECTION FOR USER: root <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1479828045.04-80599335973148 `" && echo ansible-tmp-1479828045.04-80599335973148="` echo $HOME/.ansible/tmp/ansible-tmp-1479828045.04-80599335973148 `" ) && sleep 0' <localhost> PUT /tmp/tmpBgPUnm TO /root/.ansible/tmp/ansible-tmp-1479828045.04-80599335973148/apt.py <localhost> EXEC /bin/sh -c 'chmod u+x 
/root/.ansible/tmp/ansible-tmp-1479828045.04-80599335973148/ /root/.ansible/tmp/ansible-tmp-1479828045.04-80599335973148/apt.py && sleep 0' <localhost> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1479828045.04-80599335973148/apt.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1479828045.04-80599335973148/" > /dev/null 2>&1 && sleep 0' changed: [localhost] => { "cache_update_time": 1479825681, "cache_updated": false, "changed": true, "diff": {}, "invocation": { "module_args": { "allow_unauthenticated": false, "autoremove": false, "cache_valid_time": 0, "deb": null, "default_release": null, "dpkg_options": "force-confdef,force-confold", "force": false, "install_recommends": null, "name": "apache2", "only_upgrade": false, "package": [ "apache2" ], "purge": false, "state": "present", "update_cache": false, "upgrade": null }, "module_name": "apt" }, "stderr": "", "stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\nThe following extra packages will be installed:\n apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common libapr1\n libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libcap2 libpcre3\n libprocps0 procps psmisc ssl-cert\nSuggested packages:\n www-browser apache2-doc apache2-suexec apache2-suexec-custom\n openssl-blacklist\nThe following NEW packages will be installed:\n apache2 apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common\n libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libcap2\n libpcre3 libprocps0 procps psmisc ssl-cert\n0 upgraded, 15 newly installed, 0 to remove and 0 not upgraded.\nInst libprocps0 (1:3.3.3-3 Debian:7.11/oldstable [amd64])\nInst libcap2 (1:2.22-1.2 Debian:7.11/oldstable [amd64])\nInst libpcre3 (1:8.30-5 Debian:7.11/oldstable [amd64])\nInst procps (1:3.3.3-3 Debian:7.11/oldstable [amd64])\nInst libapr1 (1.4.6-3+deb7u1 Debian:7.11/oldstable [amd64])\nInst libaprutil1 (1.4.1-3 Debian:7.11/oldstable [amd64])\nInst libaprutil1-dbd-sqlite3 
(1.4.1-3 Debian:7.11/oldstable [amd64])\nInst libaprutil1-ldap (1.4.1-3 Debian:7.11/oldstable [amd64])\nInst apache2.2-bin (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nInst apache2-utils (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nInst apache2.2-common (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nInst apache2-mpm-worker (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nInst apache2 (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nInst psmisc (22.19-1+deb7u1 Debian:7.11/oldstable [amd64])\nInst ssl-cert (1.0.32+deb7u1 Debian:7.11/oldstable [all])\nConf libprocps0 (1:3.3.3-3 Debian:7.11/oldstable [amd64])\nConf libcap2 (1:2.22-1.2 Debian:7.11/oldstable [amd64])\nConf libpcre3 (1:8.30-5 Debian:7.11/oldstable [amd64])\nConf procps (1:3.3.3-3 Debian:7.11/oldstable [amd64])\nConf libapr1 (1.4.6-3+deb7u1 Debian:7.11/oldstable [amd64])\nConf libaprutil1 (1.4.1-3 Debian:7.11/oldstable [amd64])\nConf libaprutil1-dbd-sqlite3 (1.4.1-3 Debian:7.11/oldstable [amd64])\nConf libaprutil1-ldap (1.4.1-3 Debian:7.11/oldstable [amd64])\nConf apache2.2-bin (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nConf apache2-utils (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nConf apache2.2-common (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nConf apache2-mpm-worker (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nConf apache2 (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nConf psmisc (22.19-1+deb7u1 Debian:7.11/oldstable [amd64])\nConf ssl-cert (1.0.32+deb7u1 Debian:7.11/oldstable [all])\n", "stdout_lines": [ "Reading package lists...", "Building dependency tree...", "Reading state information...", "The following extra packages will be installed:", " apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common libapr1", " libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libcap2 libpcre3", " libprocps0 procps psmisc ssl-cert", "Suggested packages:", " www-browser apache2-doc 
apache2-suexec apache2-suexec-custom", " openssl-blacklist", "The following NEW packages will be installed:", " apache2 apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common", " libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libcap2", " libpcre3 libprocps0 procps psmisc ssl-cert", "0 upgraded, 15 newly installed, 0 to remove and 0 not upgraded.", "Inst libprocps0 (1:3.3.3-3 Debian:7.11/oldstable [amd64])", "Inst libcap2 (1:2.22-1.2 Debian:7.11/oldstable [amd64])", "Inst libpcre3 (1:8.30-5 Debian:7.11/oldstable [amd64])", "Inst procps (1:3.3.3-3 Debian:7.11/oldstable [amd64])", "Inst libapr1 (1.4.6-3+deb7u1 Debian:7.11/oldstable [amd64])", "Inst libaprutil1 (1.4.1-3 Debian:7.11/oldstable [amd64])", "Inst libaprutil1-dbd-sqlite3 (1.4.1-3 Debian:7.11/oldstable [amd64])", "Inst libaprutil1-ldap (1.4.1-3 Debian:7.11/oldstable [amd64])", "Inst apache2.2-bin (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])", "Inst apache2-utils (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])", "Inst apache2.2-common (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])", "Inst apache2-mpm-worker (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])", "Inst apache2 (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])", "Inst psmisc (22.19-1+deb7u1 Debian:7.11/oldstable [amd64])", "Inst ssl-cert (1.0.32+deb7u1 Debian:7.11/oldstable [all])", "Conf libprocps0 (1:3.3.3-3 Debian:7.11/oldstable [amd64])", "Conf libcap2 (1:2.22-1.2 Debian:7.11/oldstable [amd64])", "Conf libpcre3 (1:8.30-5 Debian:7.11/oldstable [amd64])", "Conf procps (1:3.3.3-3 Debian:7.11/oldstable [amd64])", "Conf libapr1 (1.4.6-3+deb7u1 Debian:7.11/oldstable [amd64])", "Conf libaprutil1 (1.4.1-3 Debian:7.11/oldstable [amd64])", "Conf libaprutil1-dbd-sqlite3 (1.4.1-3 Debian:7.11/oldstable [amd64])", "Conf libaprutil1-ldap (1.4.1-3 Debian:7.11/oldstable [amd64])", "Conf apache2.2-bin (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])", "Conf apache2-utils (2.2.22-13+deb7u7 
Debian-Security:7.0/oldstable [amd64])", "Conf apache2.2-common (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])", "Conf apache2-mpm-worker (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])", "Conf apache2 (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])", "Conf psmisc (22.19-1+deb7u1 Debian:7.11/oldstable [amd64])", "Conf ssl-cert (1.0.32+deb7u1 Debian:7.11/oldstable [all])" ] } TASK [no need but I would like to restart apache] ****************************** task path: /tmp/ansible/apache.yml:5 Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/system/service.py <localhost> ESTABLISH LOCAL CONNECTION FOR USER: root <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1479828046.91-253442262595803 `" && echo ansible-tmp-1479828046.91-253442262595803="` echo $HOME/.ansible/tmp/ansible-tmp-1479828046.91-253442262595803 `" ) && sleep 0' <localhost> PUT /tmp/tmpu0znF1 TO /root/.ansible/tmp/ansible-tmp-1479828046.91-253442262595803/service.py <localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1479828046.91-253442262595803/ /root/.ansible/tmp/ansible-tmp-1479828046.91-253442262595803/service.py && sleep 0' <localhost> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1479828046.91-253442262595803/service.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1479828046.91-253442262595803/" > /dev/null 2>&1 && sleep 0' fatal: [localhost]: FAILED! 
=> { "changed": false, "failed": true, "invocation": { "module_args": { "arguments": "", "enabled": null, "name": "apache2", "pattern": null, "runlevel": "default", "sleep": null, "state": "restarted" } }, "msg": "no service or tool found for: apache2" } to retry, use: --limit @/tmp/ansible/apache.retry PLAY RECAP ********************************************************************* localhost : ok=2 changed=1 unreachable=0 failed=1 The command '/bin/sh -c ansible-playbook -vvv -i localhost, -c local /tmp/ansible/apache.yml --check' returned a non-zero code: 2 ```
True
The module service will fail in check mode if the target service is not yet installed on server. - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible module service ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION no settings in ansible.cfg ##### OS / ENVIRONMENT Debian/wheezy ##### SUMMARY The module service will fail in check mode if the target service is not yet installed on the server. For instance, my playbook installs apache2 and later restarts apache2. Running ansible-playbook --check on a server without apache2 will fail. This may be a duplicate but I did not find any issue on this topic. ##### STEPS TO REPRODUCE This Dockerfile will reproduce this issue: https://github.com/pgrange/ansible_service_check_mode_issue Running this playbook in check mode on a brand new server will reproduce this issue: ``` - hosts: all tasks: - apt: name=apache2 - name: no need but I would like to restart apache service: name=apache2 state=restarted ``` ##### EXPECTED RESULTS ansible-playbook --check should not fail if a service to restart is not already installed on the server. Why not raise a warning that we are trying to restart an unknown service and leave it at that.
##### ACTUAL RESULTS What actually happens is that running ansible-playbook in check mode fails: ``` PLAYBOOK: apache.yml *********************************************************** 1 plays in /tmp/ansible/apache.yml PLAY [all] ********************************************************************* TASK [setup] ******************************************************************* Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/system/setup.py <localhost> ESTABLISH LOCAL CONNECTION FOR USER: root <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1479828044.78-237878029370849 `" && echo ansible-tmp-1479828044.78-237878029370849="` echo $HOME/.ansible/tmp/ansible-tmp-1479828044.78-237878029370849 `" ) && sleep 0' <localhost> PUT /tmp/tmpbW6GVG TO /root/.ansible/tmp/ansible-tmp-1479828044.78-237878029370849/setup.py <localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1479828044.78-237878029370849/ /root/.ansible/tmp/ansible-tmp-1479828044.78-237878029370849/setup.py && sleep 0' <localhost> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1479828044.78-237878029370849/setup.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1479828044.78-237878029370849/" > /dev/null 2>&1 && sleep 0' ok: [localhost] TASK [apt] ********************************************************************* task path: /tmp/ansible/apache.yml:4 Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/packaging/os/apt.py <localhost> ESTABLISH LOCAL CONNECTION FOR USER: root <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1479828045.04-80599335973148 `" && echo ansible-tmp-1479828045.04-80599335973148="` echo $HOME/.ansible/tmp/ansible-tmp-1479828045.04-80599335973148 `" ) && sleep 0' <localhost> PUT /tmp/tmpBgPUnm TO /root/.ansible/tmp/ansible-tmp-1479828045.04-80599335973148/apt.py <localhost> EXEC /bin/sh -c 'chmod u+x 
/root/.ansible/tmp/ansible-tmp-1479828045.04-80599335973148/ /root/.ansible/tmp/ansible-tmp-1479828045.04-80599335973148/apt.py && sleep 0' <localhost> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1479828045.04-80599335973148/apt.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1479828045.04-80599335973148/" > /dev/null 2>&1 && sleep 0' changed: [localhost] => { "cache_update_time": 1479825681, "cache_updated": false, "changed": true, "diff": {}, "invocation": { "module_args": { "allow_unauthenticated": false, "autoremove": false, "cache_valid_time": 0, "deb": null, "default_release": null, "dpkg_options": "force-confdef,force-confold", "force": false, "install_recommends": null, "name": "apache2", "only_upgrade": false, "package": [ "apache2" ], "purge": false, "state": "present", "update_cache": false, "upgrade": null }, "module_name": "apt" }, "stderr": "", "stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\nThe following extra packages will be installed:\n apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common libapr1\n libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libcap2 libpcre3\n libprocps0 procps psmisc ssl-cert\nSuggested packages:\n www-browser apache2-doc apache2-suexec apache2-suexec-custom\n openssl-blacklist\nThe following NEW packages will be installed:\n apache2 apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common\n libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libcap2\n libpcre3 libprocps0 procps psmisc ssl-cert\n0 upgraded, 15 newly installed, 0 to remove and 0 not upgraded.\nInst libprocps0 (1:3.3.3-3 Debian:7.11/oldstable [amd64])\nInst libcap2 (1:2.22-1.2 Debian:7.11/oldstable [amd64])\nInst libpcre3 (1:8.30-5 Debian:7.11/oldstable [amd64])\nInst procps (1:3.3.3-3 Debian:7.11/oldstable [amd64])\nInst libapr1 (1.4.6-3+deb7u1 Debian:7.11/oldstable [amd64])\nInst libaprutil1 (1.4.1-3 Debian:7.11/oldstable [amd64])\nInst libaprutil1-dbd-sqlite3 
(1.4.1-3 Debian:7.11/oldstable [amd64])\nInst libaprutil1-ldap (1.4.1-3 Debian:7.11/oldstable [amd64])\nInst apache2.2-bin (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nInst apache2-utils (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nInst apache2.2-common (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nInst apache2-mpm-worker (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nInst apache2 (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nInst psmisc (22.19-1+deb7u1 Debian:7.11/oldstable [amd64])\nInst ssl-cert (1.0.32+deb7u1 Debian:7.11/oldstable [all])\nConf libprocps0 (1:3.3.3-3 Debian:7.11/oldstable [amd64])\nConf libcap2 (1:2.22-1.2 Debian:7.11/oldstable [amd64])\nConf libpcre3 (1:8.30-5 Debian:7.11/oldstable [amd64])\nConf procps (1:3.3.3-3 Debian:7.11/oldstable [amd64])\nConf libapr1 (1.4.6-3+deb7u1 Debian:7.11/oldstable [amd64])\nConf libaprutil1 (1.4.1-3 Debian:7.11/oldstable [amd64])\nConf libaprutil1-dbd-sqlite3 (1.4.1-3 Debian:7.11/oldstable [amd64])\nConf libaprutil1-ldap (1.4.1-3 Debian:7.11/oldstable [amd64])\nConf apache2.2-bin (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nConf apache2-utils (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nConf apache2.2-common (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nConf apache2-mpm-worker (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nConf apache2 (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])\nConf psmisc (22.19-1+deb7u1 Debian:7.11/oldstable [amd64])\nConf ssl-cert (1.0.32+deb7u1 Debian:7.11/oldstable [all])\n", "stdout_lines": [ "Reading package lists...", "Building dependency tree...", "Reading state information...", "The following extra packages will be installed:", " apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common libapr1", " libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libcap2 libpcre3", " libprocps0 procps psmisc ssl-cert", "Suggested packages:", " www-browser apache2-doc 
apache2-suexec apache2-suexec-custom", " openssl-blacklist", "The following NEW packages will be installed:", " apache2 apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common", " libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libcap2", " libpcre3 libprocps0 procps psmisc ssl-cert", "0 upgraded, 15 newly installed, 0 to remove and 0 not upgraded.", "Inst libprocps0 (1:3.3.3-3 Debian:7.11/oldstable [amd64])", "Inst libcap2 (1:2.22-1.2 Debian:7.11/oldstable [amd64])", "Inst libpcre3 (1:8.30-5 Debian:7.11/oldstable [amd64])", "Inst procps (1:3.3.3-3 Debian:7.11/oldstable [amd64])", "Inst libapr1 (1.4.6-3+deb7u1 Debian:7.11/oldstable [amd64])", "Inst libaprutil1 (1.4.1-3 Debian:7.11/oldstable [amd64])", "Inst libaprutil1-dbd-sqlite3 (1.4.1-3 Debian:7.11/oldstable [amd64])", "Inst libaprutil1-ldap (1.4.1-3 Debian:7.11/oldstable [amd64])", "Inst apache2.2-bin (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])", "Inst apache2-utils (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])", "Inst apache2.2-common (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])", "Inst apache2-mpm-worker (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])", "Inst apache2 (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])", "Inst psmisc (22.19-1+deb7u1 Debian:7.11/oldstable [amd64])", "Inst ssl-cert (1.0.32+deb7u1 Debian:7.11/oldstable [all])", "Conf libprocps0 (1:3.3.3-3 Debian:7.11/oldstable [amd64])", "Conf libcap2 (1:2.22-1.2 Debian:7.11/oldstable [amd64])", "Conf libpcre3 (1:8.30-5 Debian:7.11/oldstable [amd64])", "Conf procps (1:3.3.3-3 Debian:7.11/oldstable [amd64])", "Conf libapr1 (1.4.6-3+deb7u1 Debian:7.11/oldstable [amd64])", "Conf libaprutil1 (1.4.1-3 Debian:7.11/oldstable [amd64])", "Conf libaprutil1-dbd-sqlite3 (1.4.1-3 Debian:7.11/oldstable [amd64])", "Conf libaprutil1-ldap (1.4.1-3 Debian:7.11/oldstable [amd64])", "Conf apache2.2-bin (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])", "Conf apache2-utils (2.2.22-13+deb7u7 
Debian-Security:7.0/oldstable [amd64])", "Conf apache2.2-common (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])", "Conf apache2-mpm-worker (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])", "Conf apache2 (2.2.22-13+deb7u7 Debian-Security:7.0/oldstable [amd64])", "Conf psmisc (22.19-1+deb7u1 Debian:7.11/oldstable [amd64])", "Conf ssl-cert (1.0.32+deb7u1 Debian:7.11/oldstable [all])" ] } TASK [no need but I would like to restart apache] ****************************** task path: /tmp/ansible/apache.yml:5 Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/core/system/service.py <localhost> ESTABLISH LOCAL CONNECTION FOR USER: root <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1479828046.91-253442262595803 `" && echo ansible-tmp-1479828046.91-253442262595803="` echo $HOME/.ansible/tmp/ansible-tmp-1479828046.91-253442262595803 `" ) && sleep 0' <localhost> PUT /tmp/tmpu0znF1 TO /root/.ansible/tmp/ansible-tmp-1479828046.91-253442262595803/service.py <localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1479828046.91-253442262595803/ /root/.ansible/tmp/ansible-tmp-1479828046.91-253442262595803/service.py && sleep 0' <localhost> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1479828046.91-253442262595803/service.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1479828046.91-253442262595803/" > /dev/null 2>&1 && sleep 0' fatal: [localhost]: FAILED! 
=> { "changed": false, "failed": true, "invocation": { "module_args": { "arguments": "", "enabled": null, "name": "apache2", "pattern": null, "runlevel": "default", "sleep": null, "state": "restarted" } }, "msg": "no service or tool found for: apache2" } to retry, use: --limit @/tmp/ansible/apache.retry PLAY RECAP ********************************************************************* localhost : ok=2 changed=1 unreachable=0 failed=1 The command '/bin/sh -c ansible-playbook -vvv -i localhost, -c local /tmp/ansible/apache.yml --check' returned a non-zero code: 2 ```
main
the module service will fail in check mode if the target service is not yet installed on server issue type bug report component name ansible module service ansible version ansible config file configured module search path default w o overrides configuration no settings in ansible cfg os environment debian wheezy summary the module service will fail in check mode if the target service is not yet installed on server for instance my playbook installs and later restart running ansible playbook check on a server without will fail this may be a duplicate but i did not find any issue on this topic steps to reproduce this dockerfile will reproduce this issue running this playbook in check mode on a brand new server will reproduce this issue hosts all tasks apt name name no need but i would like to restart apache service name state restarted expected results ansible playbook check should not fail if a service to restart is not already installed on server why not raise a warning that we are trying to restart an unknown service but that s it actual results what actually happens is that running ansible playbook in check mode fails playbook apache yml plays in tmp ansible apache yml play task using module file usr local lib dist packages ansible modules core system setup py establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to root ansible tmp ansible tmp setup py exec bin sh c chmod u x root ansible tmp ansible tmp root ansible tmp ansible tmp setup py sleep exec bin sh c usr bin python root ansible tmp ansible tmp setup py rm rf root ansible tmp ansible tmp dev null sleep ok task task path tmp ansible apache yml using module file usr local lib dist packages ansible modules core packaging os apt py establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put 
tmp tmpbgpunm to root ansible tmp ansible tmp apt py exec bin sh c chmod u x root ansible tmp ansible tmp root ansible tmp ansible tmp apt py sleep exec bin sh c usr bin python root ansible tmp ansible tmp apt py rm rf root ansible tmp ansible tmp dev null sleep changed cache update time cache updated false changed true diff invocation module args allow unauthenticated false autoremove false cache valid time deb null default release null dpkg options force confdef force confold force false install recommends null name only upgrade false package purge false state present update cache false upgrade null module name apt stderr stdout reading package lists nbuilding dependency tree nreading state information nthe following extra packages will be installed n mpm worker utils bin common n dbd ldap n procps psmisc ssl cert nsuggested packages n www browser doc suexec suexec custom n openssl blacklist nthe following new packages will be installed n mpm worker utils bin common n dbd ldap n procps psmisc ssl cert upgraded newly installed to remove and not upgraded ninst debian oldstable ninst debian oldstable ninst debian oldstable ninst procps debian oldstable ninst debian oldstable ninst debian oldstable ninst dbd debian oldstable ninst ldap debian oldstable ninst bin debian security oldstable ninst utils debian security oldstable ninst common debian security oldstable ninst mpm worker debian security oldstable ninst debian security oldstable ninst psmisc debian oldstable ninst ssl cert debian oldstable nconf debian oldstable nconf debian oldstable nconf debian oldstable nconf procps debian oldstable nconf debian oldstable nconf debian oldstable nconf dbd debian oldstable nconf ldap debian oldstable nconf bin debian security oldstable nconf utils debian security oldstable nconf common debian security oldstable nconf mpm worker debian security oldstable nconf debian security oldstable nconf psmisc debian oldstable nconf ssl cert debian oldstable n stdout lines reading 
package lists building dependency tree reading state information the following extra packages will be installed mpm worker utils bin common dbd ldap procps psmisc ssl cert suggested packages www browser doc suexec suexec custom openssl blacklist the following new packages will be installed mpm worker utils bin common dbd ldap procps psmisc ssl cert upgraded newly installed to remove and not upgraded inst debian oldstable inst debian oldstable inst debian oldstable inst procps debian oldstable inst debian oldstable inst debian oldstable inst dbd debian oldstable inst ldap debian oldstable inst bin debian security oldstable inst utils debian security oldstable inst common debian security oldstable inst mpm worker debian security oldstable inst debian security oldstable inst psmisc debian oldstable inst ssl cert debian oldstable conf debian oldstable conf debian oldstable conf debian oldstable conf procps debian oldstable conf debian oldstable conf debian oldstable conf dbd debian oldstable conf ldap debian oldstable conf bin debian security oldstable conf utils debian security oldstable conf common debian security oldstable conf mpm worker debian security oldstable conf debian security oldstable conf psmisc debian oldstable conf ssl cert debian oldstable task task path tmp ansible apache yml using module file usr local lib dist packages ansible modules core system service py establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to root ansible tmp ansible tmp service py exec bin sh c chmod u x root ansible tmp ansible tmp root ansible tmp ansible tmp service py sleep exec bin sh c usr bin python root ansible tmp ansible tmp service py rm rf root ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module args arguments enabled null name pattern null runlevel default sleep null state restarted msg no service or tool 
found for to retry use limit tmp ansible apache retry play recap localhost ok changed unreachable failed the command bin sh c ansible playbook vvv i localhost c local tmp ansible apache yml check returned a non zero code
1
23,338
4,931,554,625
IssuesEvent
2016-11-28 10:34:34
frappe/erpnext
https://api.github.com/repos/frappe/erpnext
closed
Opportunity
documentation
- [ ] Explain the use of an opportunity with an example - [ ] How to make supplier quotation from an opportunity and use case of the feature.
1.0
Opportunity - - [ ] Explain the use of an opportunity with an example - [ ] How to make supplier quotation from an opportunity and use case of the feature.
non_main
opportunity explain the use of an opportunity with an example how to make supplier quotation from an opportunity and use case of the feature
0
3,206
12,236,610,576
IssuesEvent
2020-05-04 16:37:40
RockefellerArchiveCenter/aurora
https://api.github.com/repos/RockefellerArchiveCenter/aurora
closed
Add health check endpoint
maintainability
## Is your feature request related to a problem? Please describe. In order to check the status of Aurora, we need an endpoint we can hit to get current status of databases and code. ## Describe the solution you'd like Implement a health check endpoint using patterns similar to the microservice applications.
True
Add health check endpoint - ## Is your feature request related to a problem? Please describe. In order to check the status of Aurora, we need an endpoint we can hit to get current status of databases and code. ## Describe the solution you'd like Implement a health check endpoint using patterns similar to the microservice applications.
main
add health check endpoint is your feature request related to a problem please describe in order to check the status of aurora we need an endpoint we can hit to get current status of databases and code describe the solution you d like implement a health check endpoint using patterns similar to the microservice applications
1
269,809
20,425,966,481
IssuesEvent
2022-02-24 03:56:30
saraoros/jest-another-RPG
https://api.github.com/repos/saraoros/jest-another-RPG
opened
Create a Game object, which will be responsible for the game logic. It will be used to keep track of whose turn it is, prompt the user for input, and check to see if the game has been won.
documentation
- All game logic is encompassed by a `Game()` constructor function. - A `Game` object has the following properties: - `roundNumber` - `isPlayerTurn` - `enemies` - `currentEnemy` - `player` - **A `Game` object has the following methods:** - `initializeGame()` - `battle()` - `checkEndOfBattle()` - `startNewBattle()`
1.0
Create a Game object, which will be responsible for the game logic. It will be used to keep track of whose turn it is, prompt the user for input, and check to see if the game has been won. - - All game logic is encompassed by a `Game()` constructor function. - A `Game` object has the following properties: - `roundNumber` - `isPlayerTurn` - `enemies` - `currentEnemy` - `player` - **A `Game` object has the following methods:** - `initializeGame()` - `battle()` - `checkEndOfBattle()` - `startNewBattle()`
non_main
create a game object which will be responsible for the game logic it will be used to keep track of whose turn it is prompt the user for input and check to see if the game has been won all game logic is encompassed by a game constructor function a game object has the following properties roundnumber isplayerturn enemies currentenemy player a game object has the following methods initializegame battle checkendofbattle startnewbattle
0
79,075
22,608,816,303
IssuesEvent
2022-06-29 15:20:50
hashicorp/packer-plugin-vsphere
https://api.github.com/repos/hashicorp/packer-plugin-vsphere
closed
PACKER_HTTP_ADDR variable not being set correctly
bug builder/vsphere track-internal
_This issue was originally opened by @DarrenF-G as hashicorp/packer#9973. It was migrated here as a result of the [Packer plugin split](https://github.com/hashicorp/packer/issues/8610#issuecomment-770034737). The original body of the issue is below._ <hr> Issue; While trying to use the http_directory directive, packer binds to the correct local interface/address and can be accessed by both Host and guest as expected, however the $env:packer_http_address variable does not get set to the same bind address. It will either get set to an apipa address or pick up an address from a random interface. Packer: 1.6.2(tried 1.60 with same result) Builder: vsphere-iso Host: Wndow 2004 (19041.508) vCenter: 6.7.0.10000 esxi: 6.7.0, 15018017 Interfaces present - https://gist.github.com/DarrenF-G/8b5c6d010b066bbd79d8460410adb071 Packer log for windows 10 build - https://gist.github.com/DarrenF-G/959bfd8fb7dcb0192d5092d609a4dbba Packer log for Ubuntu build - https://gist.github.com/DarrenF-G/0b931613860185e43662db2dcaec8f5c Example json - https://gist.github.com/DarrenF-G/17753bd304235539877c1c423d9cbaad if I step though the windows post-deployment and and run cat C:\Windows\Temp\packer-ps-env-vars-5f6341e7-d0d7-b303-d15a-531e95ab11a3.ps1 I get; $env:PACKER_BUILDER_TYPE="vsphere-iso"; $env:PACKER_BUILD_NAME="vsphere-iso"; $env:PACKER_HTTP_ADDR="172.22.32.1:8067"; $env:PACKER_HTTP_IP="172.22.32.1"; $env:PACKER_HTTP_PORT="8067"; as no http server is binding on 172.22.32.1 i am unable to pull down files using this variable. Tried to replicate it on another windows host with only one interface and get the same result, it sets the address as 169.* instead of my local IP. I also tried an ubuntu image too as it uses http.ip and http.port but it also sets the variables to a 169.* address when it should set it to a 10.* address(see the above logs) Let me know if you need any further information
1.0
PACKER_HTTP_ADDR variable not being set correctly - _This issue was originally opened by @DarrenF-G as hashicorp/packer#9973. It was migrated here as a result of the [Packer plugin split](https://github.com/hashicorp/packer/issues/8610#issuecomment-770034737). The original body of the issue is below._ <hr> Issue; While trying to use the http_directory directive, packer binds to the correct local interface/address and can be accessed by both Host and guest as expected, however the $env:packer_http_address variable does not get set to the same bind address. It will either get set to an apipa address or pick up an address from a random interface. Packer: 1.6.2(tried 1.60 with same result) Builder: vsphere-iso Host: Wndow 2004 (19041.508) vCenter: 6.7.0.10000 esxi: 6.7.0, 15018017 Interfaces present - https://gist.github.com/DarrenF-G/8b5c6d010b066bbd79d8460410adb071 Packer log for windows 10 build - https://gist.github.com/DarrenF-G/959bfd8fb7dcb0192d5092d609a4dbba Packer log for Ubuntu build - https://gist.github.com/DarrenF-G/0b931613860185e43662db2dcaec8f5c Example json - https://gist.github.com/DarrenF-G/17753bd304235539877c1c423d9cbaad if I step though the windows post-deployment and and run cat C:\Windows\Temp\packer-ps-env-vars-5f6341e7-d0d7-b303-d15a-531e95ab11a3.ps1 I get; $env:PACKER_BUILDER_TYPE="vsphere-iso"; $env:PACKER_BUILD_NAME="vsphere-iso"; $env:PACKER_HTTP_ADDR="172.22.32.1:8067"; $env:PACKER_HTTP_IP="172.22.32.1"; $env:PACKER_HTTP_PORT="8067"; as no http server is binding on 172.22.32.1 i am unable to pull down files using this variable. Tried to replicate it on another windows host with only one interface and get the same result, it sets the address as 169.* instead of my local IP. I also tried an ubuntu image too as it uses http.ip and http.port but it also sets the variables to a 169.* address when it should set it to a 10.* address(see the above logs) Let me know if you need any further information
non_main
packer http addr variable not being set correctly this issue was originally opened by darrenf g as hashicorp packer it was migrated here as a result of the the original body of the issue is below issue while trying to use the http directory directive packer binds to the correct local interface address and can be accessed by both host and guest as expected however the env packer http address variable does not get set to the same bind address it will either get set to an apipa address or pick up an address from a random interface packer tried with same result builder vsphere iso host wndow vcenter esxi interfaces present packer log for windows build packer log for ubuntu build example json if i step though the windows post deployment and and run cat c windows temp packer ps env vars i get env packer builder type vsphere iso env packer build name vsphere iso env packer http addr env packer http ip env packer http port as no http server is binding on i am unable to pull down files using this variable tried to replicate it on another windows host with only one interface and get the same result it sets the address as instead of my local ip i also tried an ubuntu image too as it uses http ip and http port but it also sets the variables to a address when it should set it to a address see the above logs let me know if you need any further information
0
1,338
2,946,622,415
IssuesEvent
2015-07-04 04:25:47
rust-js/rjs
https://api.github.com/repos/rust-js/rjs
opened
Validate performance of local management in the parser
A-performance
The parser currently uses an array to manage locals. For a small number of locals, this works far better than using a map. This choice needs to be validated though.
True
Validate performance of local management in the parser - The parser currently uses an array to manage locals. For a small number of locals, this works far better than using a map. This choice needs to be validated though.
non_main
validate performance of local management in the parser the parser currently uses an array to manage locals for a small number of locals this works far better than using a map this choice needs to be validated though
0
33
2,566,190,283
IssuesEvent
2015-02-08 07:48:58
retailcoder/Rubberduck
https://api.github.com/repos/retailcoder/Rubberduck
closed
Separate multiple declarations into multiple instructions
CondeInspectionType - Maintainability/Readability Issues
[Suggestion] This: Dim foo As String, bar As Integer, baz As Long Can be written as: Dim foo As String Dim bar As Integer Dim baz As Long
True
Separate multiple declarations into multiple instructions - [Suggestion] This: Dim foo As String, bar As Integer, baz As Long Can be written as: Dim foo As String Dim bar As Integer Dim baz As Long
main
separate multiple declarations into multiple instructions this dim foo as string bar as integer baz as long can be written as dim foo as string dim bar as integer dim baz as long
1
3,714
15,283,053,832
IssuesEvent
2021-02-23 10:21:34
diofant/diofant
https://api.github.com/repos/diofant/diofant
closed
Is it better to add typing hint?
enhancement help wanted maintainability
I notice that there aren't much typing hints, and since `diofant` requires 3.7 or higher, so add typing maybe better?
True
Is it better to add typing hint? - I notice that there aren't much typing hints, and since `diofant` requires 3.7 or higher, so add typing maybe better?
main
is it better to add typing hint i notice that there aren t much typing hints and since diofant requires or higher so add typing maybe better
1
1,316
5,639,943,133
IssuesEvent
2017-04-06 15:21:45
github/hubot-scripts
https://api.github.com/repos/github/hubot-scripts
closed
Conversation seems not working
needs-maintainer
Hey, I've installed the conversation script and i think something is broken. I don't really know if this script is making hubot more chatty but i don't know how to test it.
True
Conversation seems not working - Hey, I've installed the conversation script and i think something is broken. I don't really know if this script is making hubot more chatty but i don't know how to test it.
main
conversation seems not working hey i ve installed the conversation script and i think something is broken i don t really know if this script is making hubot more chatty but i don t know how to test it
1
2,766
8,306,973,822
IssuesEvent
2018-09-23 01:57:56
MovingBlocks/Terasology
https://api.github.com/repos/MovingBlocks/Terasology
opened
Recover and finish parked serialization / type handling overhaul
Api Architecture Epic
See #3489 for background information. @eviltak is behind the parked code related to this and may be interested in continuing with it, but is low on time as of this writing. If anybody else is hugely interested in serialization and type handling feel free to look at this but be warned that it is a large piece of work and a complex topic :-) Goal for this issue is to work with branch https://github.com/MovingBlocks/Terasology/tree/newSerialization to finish stabilizing it so it can be re-merged As part of that work several modules need syntax changes to compile again, as was the case when originally merging #3449. The same modules and likely more beyond that will need changes after #3456 is considered as well, both for compile fixes (maybe just the same set of modules as the first PR) and runtime issues encountered in testing with the second PR (unsure if all those would go away purely with additional engine changes) Affected modules by round one and their initial syntax fix commits (reverted as part of preparing this issue): * https://github.com/Terasology/MasterOfOreon/commit/17a008952995a354e93f3cc2c71ed14ae8b7adec * https://github.com/Terasology/DynamicCities/commit/bce035463cf156fc7208257f7e4adcb8bbabbf5e * https://github.com/Terasology/Dialogs/commit/3b5804ebf557060c24a53a79760a3d3e80cf1064 * https://github.com/Terasology/Tasks/commit/b1576fb1011f8fd7886ff7a96007ce85d2895e52 * https://github.com/Terasology/TutorialDynamicCities/commit/e5766b8374f53e0d0b9a600f53aa7176f94876d1 * https://github.com/Terasology/LightAndShadow/commit/fa33f332569990f1c6cab306fe52635b402b06a5 To be clear those commits just let those modules compile after the *first* round of serialization changes. It may be pointless to re-apply those changes exactly, as the underlying changes from round two may require entirely different fixes. See the PRs linked in #3489 for more research and discovered issues. 
As a potential follow-up to this the new Record & Replay system is awaiting some of these changes to better improve its own usage of serialized events and such. @iaronaraujo would be the primary contributor involved in that effort.
1.0
Recover and finish parked serialization / type handling overhaul - See #3489 for background information. @eviltak is behind the parked code related to this and may be interested in continuing with it, but is low on time as of this writing. If anybody else is hugely interested in serialization and type handling feel free to look at this but be warned that it is a large piece of work and a complex topic :-) Goal for this issue is to work with branch https://github.com/MovingBlocks/Terasology/tree/newSerialization to finish stabilizing it so it can be re-merged As part of that work several modules need syntax changes to compile again, as was the case when originally merging #3449. The same modules and likely more beyond that will need changes after #3456 is considered as well, both for compile fixes (maybe just the same set of modules as the first PR) and runtime issues encountered in testing with the second PR (unsure if all those would go away purely with additional engine changes) Affected modules by round one and their initial syntax fix commits (reverted as part of preparing this issue): * https://github.com/Terasology/MasterOfOreon/commit/17a008952995a354e93f3cc2c71ed14ae8b7adec * https://github.com/Terasology/DynamicCities/commit/bce035463cf156fc7208257f7e4adcb8bbabbf5e * https://github.com/Terasology/Dialogs/commit/3b5804ebf557060c24a53a79760a3d3e80cf1064 * https://github.com/Terasology/Tasks/commit/b1576fb1011f8fd7886ff7a96007ce85d2895e52 * https://github.com/Terasology/TutorialDynamicCities/commit/e5766b8374f53e0d0b9a600f53aa7176f94876d1 * https://github.com/Terasology/LightAndShadow/commit/fa33f332569990f1c6cab306fe52635b402b06a5 To be clear those commits just let those modules compile after the *first* round of serialization changes. It may be pointless to re-apply those changes exactly, as the underlying changes from round two may require entirely different fixes. See the PRs linked in #3489 for more research and discovered issues. 
As a potential follow-up to this the new Record & Replay system is awaiting some of these changes to better improve its own usage of serialized events and such. @iaronaraujo would be the primary contributor involved in that effort.
non_main
recover and finish parked serialization type handling overhaul see for background information eviltak is behind the parked code related to this and may be interested in continuing with it but is low on time as of this writing if anybody else is hugely interested in serialization and type handling feel free to look at this but be warned that it is a large piece of work and a complex topic goal for this issue is to work with branch to finish stabilizing it so it can be re merged as part of that work several modules need syntax changes to compile again as was the case when originally merging the same modules and likely more beyond that will need changes after is considered as well both for compile fixes maybe just the same set of modules as the first pr and runtime issues encountered in testing with the second pr unsure if all those would go away purely with additional engine changes affected modules by round one and their initial syntax fix commits reverted as part of preparing this issue to be clear those commits just let those modules compile after the first round of serialization changes it may be pointless to re apply those changes exactly as the underlying changes from round two may require entirely different fixes see the prs linked in for more research and discovered issues as a potential follow up to this the new record replay system is awaiting some of these changes to better improve its own usage of serialized events and such iaronaraujo would be the primary contributor involved in that effort
0
1,350
5,795,603,970
IssuesEvent
2017-05-02 17:29:14
ansible/ansible-modules-extras
https://api.github.com/repos/ansible/ansible-modules-extras
closed
maven_artifact should support version=release.
affects_2.3 bug_report waiting_on_maintainer
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME maven_artifact ##### ANSIBLE VERSION ansible-modules-extras commit cd03f10b9ccb9f972a4cf84bc3e756870257da59 (HEAD -> devel, origin/stable-21, origin/devel, origin/HEAD) ##### SUMMARY A few problems with the maven_artifact module. It supports a version=latest parameter but not a version=release parameter to download the latest release (non snapshot) artifact. The current way the module determines the latest is by retrieving the last version tag in a potentially unordered list. ``` v = xml.xpath("/metadata/versioning/versions/version[last()]/text()") ``` when in fact it should be using the `/metadata/versioning/latest` field. Likewise for release `/metadata/versioning/release`. Further there is no way to know what version was actually downloaded. As by default the destination file (when dest is specified as a directory) uses the string `latest` for the version in the filename. Our current workaround is to use the get_url module with our repos' [Sonatype REST API](https://repository.sonatype.org/nexus-restlet1x-plugin/default/docs/path__artifact_maven_redirect.html). E.g. ``` - name: Download the latest release of c6-api get_url: url: "{{ mirror }}/service/local/artifact/maven/redirect?r=releases&g={{ group }}&a={{ artifact }}&v=RELEASE&c={{ classifier }}" url_username: "{{ maven_user }}" url_password: "{{ maven_password }}" dest: "/path/to/dir" register: result ``` And parse `result.dest` to get version.
True
maven_artifact should support version=release. - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME maven_artifact ##### ANSIBLE VERSION ansible-modules-extras commit cd03f10b9ccb9f972a4cf84bc3e756870257da59 (HEAD -> devel, origin/stable-21, origin/devel, origin/HEAD) ##### SUMMARY A few problems with the maven_artifact module. It supports a version=latest parameter but not a version=release parameter to download the latest release (non snapshot) artifact. The current way the module determines the latest is by retrieving the last version tag in a potentially unordered list. ``` v = xml.xpath("/metadata/versioning/versions/version[last()]/text()") ``` when in fact it should be using the `/metadata/versioning/latest` field. Likewise for release `/metadata/versioning/release`. Further there is no way to know what version was actually downloaded. As by default the destination file (when dest is specified as a directory) uses the string `latest` for the version in the filename. Our current workaround is to use the get_url module with our repos' [Sonatype REST API](https://repository.sonatype.org/nexus-restlet1x-plugin/default/docs/path__artifact_maven_redirect.html). E.g. ``` - name: Download the latest release of c6-api get_url: url: "{{ mirror }}/service/local/artifact/maven/redirect?r=releases&g={{ group }}&a={{ artifact }}&v=RELEASE&c={{ classifier }}" url_username: "{{ maven_user }}" url_password: "{{ maven_password }}" dest: "/path/to/dir" register: result ``` And parse `result.dest` to get version.
main
maven artifact should support version release issue type bug report component name maven artifact ansible version ansible modules extras commit head devel origin stable origin devel origin head summary a few problems with the maven artifact module it supports a version latest parameter but not a version release parameter to download the latest release non snapshot artifact the current way the module determines the latest is by retrieving the last version tag in a potentially unordered list v xml xpath metadata versioning versions version text when in fact it should be using the metadata versioning latest field likewise for release metadata versioning release further there is no way to know what version was actually downloaded as by default the destination file when dest is specified as a directory uses the string latest for the version in the filename our current workaround is to use the get url module with our repos e g name download the latest release of api get url url mirror service local artifact maven redirect r releases g group a artifact v release c classifier url username maven user url password maven password dest path to dir register result and parse result dest to get version
1
171,141
6,479,689,281
IssuesEvent
2017-08-18 11:21:39
kubernetes/dashboard
https://api.github.com/repos/kubernetes/dashboard
closed
Dashboard should not use CreatedByAnnotation
priority/P0
Based on this [announcement](https://groups.google.com/forum/#!msg/kubernetes-dev/juMOsdaCml0/FwVNJA9uBAAJ), `CreatedByAnnotation` will be deprecated in 1.8 in favor of `ControllerRef`. However, the annotation is still used in this repo: [here](https://github.com/kubernetes/dashboard/blob/master/src/app/backend/resource/pod/detail.go#L131) and [here](https://github.com/kubernetes/dashboard/blob/master/src/app/backend/resource/pod/detail.go#L204) There is a necessary change to use `ControllerRef`. Does anyone have bandwidth to work on this?
1.0
Dashboard should not use CreatedByAnnotation - Based on this [announcement](https://groups.google.com/forum/#!msg/kubernetes-dev/juMOsdaCml0/FwVNJA9uBAAJ), `CreatedByAnnotation` will be deprecated in 1.8 in favor of `ControllerRef`. However, the annotation is still used in this repo: [here](https://github.com/kubernetes/dashboard/blob/master/src/app/backend/resource/pod/detail.go#L131) and [here](https://github.com/kubernetes/dashboard/blob/master/src/app/backend/resource/pod/detail.go#L204) There is a necessary change to use `ControllerRef`. Does anyone have bandwidth to work on this?
non_main
dashboard should not use createdbyannotation based on this createdbyannotation will be deprecated in in favor of controllerref however the annotation is still used in this repo and there is a necessary change to use controllerref does anyone have bandwidth to work on this
0
1,502
6,502,351,634
IssuesEvent
2017-08-23 13:22:49
ocaml/opam-repository
https://api.github.com/repos/ocaml/opam-repository
closed
Unable to compile `brotli`.
conf depext needs maintainer action
I can't compile `brotli`. It tries to install some files in `/usr/local/lib` while "building" the package (not during the package installation). /cc @fxfactorial ``` =-=- Processing actions -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= [ERROR] The compilation of brotli failed at "ocaml setup.ml -build". Processing 1/2: [brotli: ocamlfind remove] #=== ERROR while compiling brotli.1.1 =========================================# # opam-version 1.3.0~dev3 (64831e63aec4fb712198f57d53d668bc6d43b0c3) # os linux # command ocaml setup.ml -build # path /home/henry/.opam/4.02.3/build/brotli.1.1 # exit-code 1 # env-file /home/henry/.opam/4.02.3/build/brotli.1.1/brotli-20672-e7966f.env # stdout-file /home/henry/.opam/4.02.3/build/brotli.1.1/brotli-20672-e7966f.out # stderr-file /home/henry/.opam/4.02.3/build/brotli.1.1/brotli-20672-e7966f.err ### stdout ### # make[1]: Entering directory '/home/henry/.opam/4.02.3/build/brotli.1.1/libbrotli' # [...] # CXX brotli/enc/libbrotlienc_la-compress_fragment.lo # CXX brotli/enc/libbrotlienc_la-compress_fragment_two_pass.lo # CXXLD libbrotlienc.la # make[1]: Leaving directory '/home/henry/.opam/4.02.3/build/brotli.1.1/libbrotli' # make[1]: Entering directory '/home/henry/.opam/4.02.3/build/brotli.1.1/libbrotli' # /bin/mkdir -p '/usr/local/lib' # /bin/bash ./libtool --mode=install /usr/bin/install -c libbrotlidec.la libbrotlienc.la '/usr/local/lib' # libtool: install: /usr/bin/install -c .libs/libbrotlidec.so.1.0.0 /usr/local/lib/libbrotlidec.so.1.0.0 # Makefile:552: recipe for target 'install-libLTLIBRARIES' failed # make[1]: Leaving directory '/home/henry/.opam/4.02.3/build/brotli.1.1/libbrotli' # Makefile:1183: recipe for target 'install-am' failed ### stderr ### # [...] 
# configure.ac:7: installing './compile' # configure.ac:7: installing './config.guess' # configure.ac:7: installing './config.sub' # configure.ac:8: installing './install-sh' # configure.ac:8: installing './missing' # Makefile.am: installing './depcomp' # ar: `u' modifier ignored since `D' is the default (see `U') # ar: `u' modifier ignored since `D' is the default (see `U') # /usr/bin/install: cannot create regular file '/usr/local/lib/libbrotlidec.so.1.0.0': Permission denied # make[1]: *** [install-libLTLIBRARIES] Error 1 # make: *** [install-am] Error 2 # E: Failure("Command 'sh prepare.sh' terminated with error code 2") ```
True
Unable to compile `brotli`. - I can't compile `brotli`. It tries to install some files in `/usr/local/lib` while "building" the package (not during the package installation). /cc @fxfactorial ``` =-=- Processing actions -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= [ERROR] The compilation of brotli failed at "ocaml setup.ml -build". Processing 1/2: [brotli: ocamlfind remove] #=== ERROR while compiling brotli.1.1 =========================================# # opam-version 1.3.0~dev3 (64831e63aec4fb712198f57d53d668bc6d43b0c3) # os linux # command ocaml setup.ml -build # path /home/henry/.opam/4.02.3/build/brotli.1.1 # exit-code 1 # env-file /home/henry/.opam/4.02.3/build/brotli.1.1/brotli-20672-e7966f.env # stdout-file /home/henry/.opam/4.02.3/build/brotli.1.1/brotli-20672-e7966f.out # stderr-file /home/henry/.opam/4.02.3/build/brotli.1.1/brotli-20672-e7966f.err ### stdout ### # make[1]: Entering directory '/home/henry/.opam/4.02.3/build/brotli.1.1/libbrotli' # [...] # CXX brotli/enc/libbrotlienc_la-compress_fragment.lo # CXX brotli/enc/libbrotlienc_la-compress_fragment_two_pass.lo # CXXLD libbrotlienc.la # make[1]: Leaving directory '/home/henry/.opam/4.02.3/build/brotli.1.1/libbrotli' # make[1]: Entering directory '/home/henry/.opam/4.02.3/build/brotli.1.1/libbrotli' # /bin/mkdir -p '/usr/local/lib' # /bin/bash ./libtool --mode=install /usr/bin/install -c libbrotlidec.la libbrotlienc.la '/usr/local/lib' # libtool: install: /usr/bin/install -c .libs/libbrotlidec.so.1.0.0 /usr/local/lib/libbrotlidec.so.1.0.0 # Makefile:552: recipe for target 'install-libLTLIBRARIES' failed # make[1]: Leaving directory '/home/henry/.opam/4.02.3/build/brotli.1.1/libbrotli' # Makefile:1183: recipe for target 'install-am' failed ### stderr ### # [...] 
# configure.ac:7: installing './compile' # configure.ac:7: installing './config.guess' # configure.ac:7: installing './config.sub' # configure.ac:8: installing './install-sh' # configure.ac:8: installing './missing' # Makefile.am: installing './depcomp' # ar: `u' modifier ignored since `D' is the default (see `U') # ar: `u' modifier ignored since `D' is the default (see `U') # /usr/bin/install: cannot create regular file '/usr/local/lib/libbrotlidec.so.1.0.0': Permission denied # make[1]: *** [install-libLTLIBRARIES] Error 1 # make: *** [install-am] Error 2 # E: Failure("Command 'sh prepare.sh' terminated with error code 2") ```
main
unable to compile brotli i can t compile brotli it tries to install some files in usr local lib while building the package not during the package installation cc fxfactorial processing actions the compilation of brotli failed at ocaml setup ml build processing error while compiling brotli opam version os linux command ocaml setup ml build path home henry opam build brotli exit code env file home henry opam build brotli brotli env stdout file home henry opam build brotli brotli out stderr file home henry opam build brotli brotli err stdout make entering directory home henry opam build brotli libbrotli cxx brotli enc libbrotlienc la compress fragment lo cxx brotli enc libbrotlienc la compress fragment two pass lo cxxld libbrotlienc la make leaving directory home henry opam build brotli libbrotli make entering directory home henry opam build brotli libbrotli bin mkdir p usr local lib bin bash libtool mode install usr bin install c libbrotlidec la libbrotlienc la usr local lib libtool install usr bin install c libs libbrotlidec so usr local lib libbrotlidec so makefile recipe for target install libltlibraries failed make leaving directory home henry opam build brotli libbrotli makefile recipe for target install am failed stderr configure ac installing compile configure ac installing config guess configure ac installing config sub configure ac installing install sh configure ac installing missing makefile am installing depcomp ar u modifier ignored since d is the default see u ar u modifier ignored since d is the default see u usr bin install cannot create regular file usr local lib libbrotlidec so permission denied make error make error e failure command sh prepare sh terminated with error code
1
235,722
19,423,960,506
IssuesEvent
2021-12-21 01:18:06
trinodb/trino
https://api.github.com/repos/trinodb/trino
opened
Flaky TestHiveFailureRecoveryMinIO.testExplainAnalyze
test
* https://github.com/trinodb/trino/runs/4586900892 * https://github.com/trinodb/trino/runs/4588858574 ``` 2021-12-21T00:15:56.7628764Z [ERROR] Tests run: 44, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2,429.722 s <<< FAILURE! - in TestSuite 2021-12-21T00:15:56.7631052Z [ERROR] io.trino.plugin.hive.TestHiveFailureRecoveryMinIO.testExplainAnalyze Time elapsed: 67.267 s <<< FAILURE! 2021-12-21T00:15:56.7633083Z java.lang.AssertionError: 2021-12-21T00:15:56.7633520Z 2021-12-21T00:15:56.7634096Z Expecting throwable message: 2021-12-21T00:15:56.7635740Z <"Error closing remote buffer (http://127.0.0.1:41155/v1/task/20211220_234551_00323_w4uu6.1.0.0/results/0 - 3 failures, failure duration 10.05s, total failed request time 15.06s)"> 2021-12-21T00:15:56.7636829Z to contain: 2021-12-21T00:15:56.7637445Z <"Encountered too many errors talking to a worker node"> 2021-12-21T00:15:56.7638022Z but did not. 2021-12-21T00:15:56.7638292Z 2021-12-21T00:15:56.7638739Z Throwable that failed the check: 2021-12-21T00:15:56.7639098Z 2021-12-21T00:15:56.7640705Z java.lang.RuntimeException: Error closing remote buffer (http://127.0.0.1:41155/v1/task/20211220_234551_00323_w4uu6.1.0.0/results/0 - 3 failures, failure duration 10.05s, total failed request time 15.06s) 2021-12-21T00:15:56.7642871Z at io.trino.testing.AbstractTestingTrinoClient.execute(AbstractTestingTrinoClient.java:122) 2021-12-21T00:15:56.7645706Z at io.trino.testing.DistributedQueryRunner.executeWithQueryId(DistributedQueryRunner.java:520) 2021-12-21T00:15:56.7648492Z at io.trino.testing.AbstractTestFailureRecovery$FailureRecoveryAssert.execute(AbstractTestFailureRecovery.java:542) 2021-12-21T00:15:56.7651423Z at io.trino.testing.AbstractTestFailureRecovery$FailureRecoveryAssert.executeActual(AbstractTestFailureRecovery.java:528) 2021-12-21T00:15:56.7654779Z at io.trino.testing.AbstractTestFailureRecovery$FailureRecoveryAssert.executeActualNoRetries(AbstractTestFailureRecovery.java:513) 2021-12-21T00:15:56.7658082Z at 
io.trino.testing.AbstractTestFailureRecovery$FailureRecoveryAssert.lambda$failsWithoutRetries$8(AbstractTestFailureRecovery.java:644) 2021-12-21T00:15:56.7664482Z at org.assertj.core.api.ThrowableAssert.catchThrowable(ThrowableAssert.java:62) 2021-12-21T00:15:56.7666723Z at org.assertj.core.api.AssertionsForClassTypes.catchThrowable(AssertionsForClassTypes.java:877) 2021-12-21T00:15:56.7668770Z at org.assertj.core.api.Assertions.catchThrowable(Assertions.java:1306) 2021-12-21T00:15:56.7671010Z at org.assertj.core.api.Assertions.assertThatThrownBy(Assertions.java:1178) 2021-12-21T00:15:56.7674115Z at io.trino.testing.AbstractTestFailureRecovery$FailureRecoveryAssert.failsWithoutRetries(AbstractTestFailureRecovery.java:644) 2021-12-21T00:15:56.7676918Z at io.trino.testing.AbstractTestFailureRecovery.testSelect(AbstractTestFailureRecovery.java:209) 2021-12-21T00:15:56.7679502Z at io.trino.testing.AbstractTestFailureRecovery.testSelect(AbstractTestFailureRecovery.java:166) 2021-12-21T00:15:56.7683201Z at io.trino.testing.AbstractTestFailureRecovery.testSelect(AbstractTestFailureRecovery.java:161) 2021-12-21T00:15:56.7686391Z at io.trino.testing.AbstractTestFailureRecovery.testExplainAnalyze(AbstractTestFailureRecovery.java:302) 2021-12-21T00:15:56.7688925Z at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 2021-12-21T00:15:56.7691529Z at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 2021-12-21T00:15:56.7694630Z at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 2021-12-21T00:15:56.7696623Z at java.base/java.lang.reflect.Method.invoke(Method.java:566) 2021-12-21T00:15:56.7698625Z at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:104) 2021-12-21T00:15:56.7707833Z at org.testng.internal.Invoker.invokeMethod(Invoker.java:645) 2021-12-21T00:15:56.7712486Z at 
org.testng.internal.Invoker.invokeTestMethod(Invoker.java:851) 2021-12-21T00:15:56.7714328Z at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1177) 2021-12-21T00:15:56.7716934Z at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:129) 2021-12-21T00:15:56.7719521Z at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:112) 2021-12-21T00:15:56.7721746Z at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 2021-12-21T00:15:56.7723653Z at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 2021-12-21T00:15:56.7746022Z at java.base/java.lang.Thread.run(Thread.java:829) 2021-12-21T00:15:56.7748265Z Caused by: io.trino.spi.TrinoTransportException: Error closing remote buffer (http://127.0.0.1:41155/v1/task/20211220_234551_00323_w4uu6.1.0.0/results/0 - 3 failures, failure duration 10.05s, total failed request time 15.06s) 2021-12-21T00:15:56.7750285Z at io.trino.operator.HttpPageBufferClient$2.onFailure(HttpPageBufferClient.java:523) 2021-12-21T00:15:56.7785280Z at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1074) 2021-12-21T00:15:56.7787788Z at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 2021-12-21T00:15:56.7790765Z at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 2021-12-21T00:15:56.7792396Z at java.base/java.lang.Thread.run(Thread.java:829) 2021-12-21T00:15:56.7794392Z Caused by: java.lang.RuntimeException: java.util.concurrent.TimeoutException: Idle timeout 5000 ms 2021-12-21T00:15:56.7796426Z at io.airlift.http.client.ResponseHandlerUtils.propagate(ResponseHandlerUtils.java:25) 2021-12-21T00:15:56.7810408Z at io.airlift.http.client.StatusResponseHandler.handleException(StatusResponseHandler.java:45) 2021-12-21T00:15:56.7813228Z at io.airlift.http.client.StatusResponseHandler.handleException(StatusResponseHandler.java:28) 
2021-12-21T00:15:56.7815781Z at io.airlift.http.client.jetty.JettyResponseFuture.failed(JettyResponseFuture.java:120) 2021-12-21T00:15:56.7818782Z at io.airlift.http.client.jetty.BufferingResponseListener.onComplete(BufferingResponseListener.java:85) 2021-12-21T00:15:56.7829536Z at org.eclipse.jetty.client.ResponseNotifier.notifyComplete(ResponseNotifier.java:218) 2021-12-21T00:15:56.7832668Z at org.eclipse.jetty.client.ResponseNotifier.notifyComplete(ResponseNotifier.java:210) 2021-12-21T00:15:56.7835298Z at org.eclipse.jetty.client.HttpReceiver.terminateResponse(HttpReceiver.java:481) 2021-12-21T00:15:56.7838299Z at org.eclipse.jetty.client.HttpReceiver.terminateResponse(HttpReceiver.java:461) 2021-12-21T00:15:56.7840405Z at org.eclipse.jetty.client.HttpReceiver.abort(HttpReceiver.java:557) 2021-12-21T00:15:56.7842189Z at org.eclipse.jetty.client.HttpChannel.abortResponse(HttpChannel.java:152) 2021-12-21T00:15:56.7843936Z at org.eclipse.jetty.client.HttpChannel.abort(HttpChannel.java:145) 2021-12-21T00:15:56.7845511Z at org.eclipse.jetty.client.HttpExchange.abort(HttpExchange.java:264) 2021-12-21T00:15:56.7848089Z at org.eclipse.jetty.client.HttpConversation.abort(HttpConversation.java:164) 2021-12-21T00:15:56.7849844Z at org.eclipse.jetty.client.HttpRequest.abort(HttpRequest.java:819) 2021-12-21T00:15:56.7851970Z at org.eclipse.jetty.client.http.HttpConnectionOverHTTP.abort(HttpConnectionOverHTTP.java:214) 2021-12-21T00:15:56.7858719Z at org.eclipse.jetty.client.http.HttpConnectionOverHTTP.close(HttpConnectionOverHTTP.java:200) 2021-12-21T00:15:56.7864943Z at org.eclipse.jetty.client.http.HttpConnectionOverHTTP.onIdleExpired(HttpConnectionOverHTTP.java:160) 2021-12-21T00:15:56.7867869Z at org.eclipse.jetty.io.AbstractEndPoint.onIdleExpired(AbstractEndPoint.java:402) 2021-12-21T00:15:56.7869871Z at org.eclipse.jetty.io.IdleTimeout.checkIdleTimeout(IdleTimeout.java:171) 2021-12-21T00:15:56.7871585Z at 
org.eclipse.jetty.io.IdleTimeout.idleCheck(IdleTimeout.java:113) 2021-12-21T00:15:56.7873235Z at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) 2021-12-21T00:15:56.7874597Z at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) 2021-12-21T00:15:56.7876607Z at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) 2021-12-21T00:15:56.7878389Z ... 3 more 2021-12-21T00:15:56.7879247Z Caused by: java.util.concurrent.TimeoutException: Idle timeout 5000 ms 2021-12-21T00:15:56.7880058Z ... 10 more 2021-12-21T00:15:56.7880367Z 2021-12-21T00:15:56.7882542Z at io.trino.testing.AbstractTestFailureRecovery.lambda$testSelect$7(AbstractTestFailureRecovery.java:209) 2021-12-21T00:15:56.7885474Z at io.trino.testing.AbstractTestFailureRecovery$FailureRecoveryAssert.failsWithoutRetries(AbstractTestFailureRecovery.java:644) 2021-12-21T00:15:56.7888446Z at io.trino.testing.AbstractTestFailureRecovery.testSelect(AbstractTestFailureRecovery.java:209) 2021-12-21T00:15:56.7891199Z at io.trino.testing.AbstractTestFailureRecovery.testSelect(AbstractTestFailureRecovery.java:166) 2021-12-21T00:15:56.7894244Z at io.trino.testing.AbstractTestFailureRecovery.testSelect(AbstractTestFailureRecovery.java:161) 2021-12-21T00:15:56.7897450Z at io.trino.testing.AbstractTestFailureRecovery.testExplainAnalyze(AbstractTestFailureRecovery.java:302) 2021-12-21T00:15:56.7900130Z at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 2021-12-21T00:15:56.7902549Z at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 2021-12-21T00:15:56.7905538Z at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 2021-12-21T00:15:56.7907714Z at java.base/java.lang.reflect.Method.invoke(Method.java:566) 2021-12-21T00:15:56.7909486Z at 
org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:104) 2021-12-21T00:15:56.7911369Z at org.testng.internal.Invoker.invokeMethod(Invoker.java:645) 2021-12-21T00:15:56.7912789Z at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:851) 2021-12-21T00:15:56.7914333Z at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1177) 2021-12-21T00:15:56.7917005Z at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:129) 2021-12-21T00:15:56.7918973Z at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:112) 2021-12-21T00:15:56.7921680Z at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 2021-12-21T00:15:56.7923800Z at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 2021-12-21T00:15:56.7925124Z at java.base/java.lang.Thread.run(Thread.java:829) 2021-12-21T00:15:56.7925610Z 2021-12-21T00:15:57.4192352Z [INFO] 2021-12-21T00:15:57.4193783Z [INFO] Results: 2021-12-21T00:15:57.4194569Z [INFO] 2021-12-21T00:15:57.4195457Z [ERROR] Failures: 2021-12-21T00:15:57.4234872Z [ERROR] TestHiveFailureRecoveryMinIO>AbstractTestFailureRecovery.testExplainAnalyze:302->AbstractTestFailureRecovery.testSelect:161->AbstractTestFailureRecovery.testSelect:166->AbstractTestFailureRecovery.testSelect:209->AbstractTestFailureRecovery.lambda$testSelect$7:209 2021-12-21T00:15:57.4239890Z Expecting throwable message: 2021-12-21T00:15:57.4241809Z <"Error closing remote buffer (http://127.0.0.1:41155/v1/task/20211220_234551_00323_w4uu6.1.0.0/results/0 - 3 failures, failure duration 10.05s, total failed request time 15.06s)"> 2021-12-21T00:15:57.4242888Z to contain: 2021-12-21T00:15:57.4243637Z <"Encountered too many errors talking to a worker node"> 2021-12-21T00:15:57.4244573Z but did not. ```
1.0
Flaky TestHiveFailureRecoveryMinIO.testExplainAnalyze - * https://github.com/trinodb/trino/runs/4586900892 * https://github.com/trinodb/trino/runs/4588858574 ``` 2021-12-21T00:15:56.7628764Z [ERROR] Tests run: 44, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2,429.722 s <<< FAILURE! - in TestSuite 2021-12-21T00:15:56.7631052Z [ERROR] io.trino.plugin.hive.TestHiveFailureRecoveryMinIO.testExplainAnalyze Time elapsed: 67.267 s <<< FAILURE! 2021-12-21T00:15:56.7633083Z java.lang.AssertionError: 2021-12-21T00:15:56.7633520Z 2021-12-21T00:15:56.7634096Z Expecting throwable message: 2021-12-21T00:15:56.7635740Z <"Error closing remote buffer (http://127.0.0.1:41155/v1/task/20211220_234551_00323_w4uu6.1.0.0/results/0 - 3 failures, failure duration 10.05s, total failed request time 15.06s)"> 2021-12-21T00:15:56.7636829Z to contain: 2021-12-21T00:15:56.7637445Z <"Encountered too many errors talking to a worker node"> 2021-12-21T00:15:56.7638022Z but did not. 2021-12-21T00:15:56.7638292Z 2021-12-21T00:15:56.7638739Z Throwable that failed the check: 2021-12-21T00:15:56.7639098Z 2021-12-21T00:15:56.7640705Z java.lang.RuntimeException: Error closing remote buffer (http://127.0.0.1:41155/v1/task/20211220_234551_00323_w4uu6.1.0.0/results/0 - 3 failures, failure duration 10.05s, total failed request time 15.06s) 2021-12-21T00:15:56.7642871Z at io.trino.testing.AbstractTestingTrinoClient.execute(AbstractTestingTrinoClient.java:122) 2021-12-21T00:15:56.7645706Z at io.trino.testing.DistributedQueryRunner.executeWithQueryId(DistributedQueryRunner.java:520) 2021-12-21T00:15:56.7648492Z at io.trino.testing.AbstractTestFailureRecovery$FailureRecoveryAssert.execute(AbstractTestFailureRecovery.java:542) 2021-12-21T00:15:56.7651423Z at io.trino.testing.AbstractTestFailureRecovery$FailureRecoveryAssert.executeActual(AbstractTestFailureRecovery.java:528) 2021-12-21T00:15:56.7654779Z at 
io.trino.testing.AbstractTestFailureRecovery$FailureRecoveryAssert.executeActualNoRetries(AbstractTestFailureRecovery.java:513) 2021-12-21T00:15:56.7658082Z at io.trino.testing.AbstractTestFailureRecovery$FailureRecoveryAssert.lambda$failsWithoutRetries$8(AbstractTestFailureRecovery.java:644) 2021-12-21T00:15:56.7664482Z at org.assertj.core.api.ThrowableAssert.catchThrowable(ThrowableAssert.java:62) 2021-12-21T00:15:56.7666723Z at org.assertj.core.api.AssertionsForClassTypes.catchThrowable(AssertionsForClassTypes.java:877) 2021-12-21T00:15:56.7668770Z at org.assertj.core.api.Assertions.catchThrowable(Assertions.java:1306) 2021-12-21T00:15:56.7671010Z at org.assertj.core.api.Assertions.assertThatThrownBy(Assertions.java:1178) 2021-12-21T00:15:56.7674115Z at io.trino.testing.AbstractTestFailureRecovery$FailureRecoveryAssert.failsWithoutRetries(AbstractTestFailureRecovery.java:644) 2021-12-21T00:15:56.7676918Z at io.trino.testing.AbstractTestFailureRecovery.testSelect(AbstractTestFailureRecovery.java:209) 2021-12-21T00:15:56.7679502Z at io.trino.testing.AbstractTestFailureRecovery.testSelect(AbstractTestFailureRecovery.java:166) 2021-12-21T00:15:56.7683201Z at io.trino.testing.AbstractTestFailureRecovery.testSelect(AbstractTestFailureRecovery.java:161) 2021-12-21T00:15:56.7686391Z at io.trino.testing.AbstractTestFailureRecovery.testExplainAnalyze(AbstractTestFailureRecovery.java:302) 2021-12-21T00:15:56.7688925Z at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 2021-12-21T00:15:56.7691529Z at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 2021-12-21T00:15:56.7694630Z at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 2021-12-21T00:15:56.7696623Z at java.base/java.lang.reflect.Method.invoke(Method.java:566) 2021-12-21T00:15:56.7698625Z at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:104) 
2021-12-21T00:15:56.7707833Z at org.testng.internal.Invoker.invokeMethod(Invoker.java:645) 2021-12-21T00:15:56.7712486Z at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:851) 2021-12-21T00:15:56.7714328Z at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1177) 2021-12-21T00:15:56.7716934Z at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:129) 2021-12-21T00:15:56.7719521Z at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:112) 2021-12-21T00:15:56.7721746Z at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 2021-12-21T00:15:56.7723653Z at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 2021-12-21T00:15:56.7746022Z at java.base/java.lang.Thread.run(Thread.java:829) 2021-12-21T00:15:56.7748265Z Caused by: io.trino.spi.TrinoTransportException: Error closing remote buffer (http://127.0.0.1:41155/v1/task/20211220_234551_00323_w4uu6.1.0.0/results/0 - 3 failures, failure duration 10.05s, total failed request time 15.06s) 2021-12-21T00:15:56.7750285Z at io.trino.operator.HttpPageBufferClient$2.onFailure(HttpPageBufferClient.java:523) 2021-12-21T00:15:56.7785280Z at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1074) 2021-12-21T00:15:56.7787788Z at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 2021-12-21T00:15:56.7790765Z at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 2021-12-21T00:15:56.7792396Z at java.base/java.lang.Thread.run(Thread.java:829) 2021-12-21T00:15:56.7794392Z Caused by: java.lang.RuntimeException: java.util.concurrent.TimeoutException: Idle timeout 5000 ms 2021-12-21T00:15:56.7796426Z at io.airlift.http.client.ResponseHandlerUtils.propagate(ResponseHandlerUtils.java:25) 2021-12-21T00:15:56.7810408Z at io.airlift.http.client.StatusResponseHandler.handleException(StatusResponseHandler.java:45) 
2021-12-21T00:15:56.7813228Z at io.airlift.http.client.StatusResponseHandler.handleException(StatusResponseHandler.java:28) 2021-12-21T00:15:56.7815781Z at io.airlift.http.client.jetty.JettyResponseFuture.failed(JettyResponseFuture.java:120) 2021-12-21T00:15:56.7818782Z at io.airlift.http.client.jetty.BufferingResponseListener.onComplete(BufferingResponseListener.java:85) 2021-12-21T00:15:56.7829536Z at org.eclipse.jetty.client.ResponseNotifier.notifyComplete(ResponseNotifier.java:218) 2021-12-21T00:15:56.7832668Z at org.eclipse.jetty.client.ResponseNotifier.notifyComplete(ResponseNotifier.java:210) 2021-12-21T00:15:56.7835298Z at org.eclipse.jetty.client.HttpReceiver.terminateResponse(HttpReceiver.java:481) 2021-12-21T00:15:56.7838299Z at org.eclipse.jetty.client.HttpReceiver.terminateResponse(HttpReceiver.java:461) 2021-12-21T00:15:56.7840405Z at org.eclipse.jetty.client.HttpReceiver.abort(HttpReceiver.java:557) 2021-12-21T00:15:56.7842189Z at org.eclipse.jetty.client.HttpChannel.abortResponse(HttpChannel.java:152) 2021-12-21T00:15:56.7843936Z at org.eclipse.jetty.client.HttpChannel.abort(HttpChannel.java:145) 2021-12-21T00:15:56.7845511Z at org.eclipse.jetty.client.HttpExchange.abort(HttpExchange.java:264) 2021-12-21T00:15:56.7848089Z at org.eclipse.jetty.client.HttpConversation.abort(HttpConversation.java:164) 2021-12-21T00:15:56.7849844Z at org.eclipse.jetty.client.HttpRequest.abort(HttpRequest.java:819) 2021-12-21T00:15:56.7851970Z at org.eclipse.jetty.client.http.HttpConnectionOverHTTP.abort(HttpConnectionOverHTTP.java:214) 2021-12-21T00:15:56.7858719Z at org.eclipse.jetty.client.http.HttpConnectionOverHTTP.close(HttpConnectionOverHTTP.java:200) 2021-12-21T00:15:56.7864943Z at org.eclipse.jetty.client.http.HttpConnectionOverHTTP.onIdleExpired(HttpConnectionOverHTTP.java:160) 2021-12-21T00:15:56.7867869Z at org.eclipse.jetty.io.AbstractEndPoint.onIdleExpired(AbstractEndPoint.java:402) 2021-12-21T00:15:56.7869871Z at 
org.eclipse.jetty.io.IdleTimeout.checkIdleTimeout(IdleTimeout.java:171) 2021-12-21T00:15:56.7871585Z at org.eclipse.jetty.io.IdleTimeout.idleCheck(IdleTimeout.java:113) 2021-12-21T00:15:56.7873235Z at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) 2021-12-21T00:15:56.7874597Z at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) 2021-12-21T00:15:56.7876607Z at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) 2021-12-21T00:15:56.7878389Z ... 3 more 2021-12-21T00:15:56.7879247Z Caused by: java.util.concurrent.TimeoutException: Idle timeout 5000 ms 2021-12-21T00:15:56.7880058Z ... 10 more 2021-12-21T00:15:56.7880367Z 2021-12-21T00:15:56.7882542Z at io.trino.testing.AbstractTestFailureRecovery.lambda$testSelect$7(AbstractTestFailureRecovery.java:209) 2021-12-21T00:15:56.7885474Z at io.trino.testing.AbstractTestFailureRecovery$FailureRecoveryAssert.failsWithoutRetries(AbstractTestFailureRecovery.java:644) 2021-12-21T00:15:56.7888446Z at io.trino.testing.AbstractTestFailureRecovery.testSelect(AbstractTestFailureRecovery.java:209) 2021-12-21T00:15:56.7891199Z at io.trino.testing.AbstractTestFailureRecovery.testSelect(AbstractTestFailureRecovery.java:166) 2021-12-21T00:15:56.7894244Z at io.trino.testing.AbstractTestFailureRecovery.testSelect(AbstractTestFailureRecovery.java:161) 2021-12-21T00:15:56.7897450Z at io.trino.testing.AbstractTestFailureRecovery.testExplainAnalyze(AbstractTestFailureRecovery.java:302) 2021-12-21T00:15:56.7900130Z at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 2021-12-21T00:15:56.7902549Z at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 2021-12-21T00:15:56.7905538Z at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 2021-12-21T00:15:56.7907714Z at 
java.base/java.lang.reflect.Method.invoke(Method.java:566) 2021-12-21T00:15:56.7909486Z at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:104) 2021-12-21T00:15:56.7911369Z at org.testng.internal.Invoker.invokeMethod(Invoker.java:645) 2021-12-21T00:15:56.7912789Z at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:851) 2021-12-21T00:15:56.7914333Z at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1177) 2021-12-21T00:15:56.7917005Z at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:129) 2021-12-21T00:15:56.7918973Z at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:112) 2021-12-21T00:15:56.7921680Z at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 2021-12-21T00:15:56.7923800Z at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 2021-12-21T00:15:56.7925124Z at java.base/java.lang.Thread.run(Thread.java:829) 2021-12-21T00:15:56.7925610Z 2021-12-21T00:15:57.4192352Z [INFO] 2021-12-21T00:15:57.4193783Z [INFO] Results: 2021-12-21T00:15:57.4194569Z [INFO] 2021-12-21T00:15:57.4195457Z [ERROR] Failures: 2021-12-21T00:15:57.4234872Z [ERROR] TestHiveFailureRecoveryMinIO>AbstractTestFailureRecovery.testExplainAnalyze:302->AbstractTestFailureRecovery.testSelect:161->AbstractTestFailureRecovery.testSelect:166->AbstractTestFailureRecovery.testSelect:209->AbstractTestFailureRecovery.lambda$testSelect$7:209 2021-12-21T00:15:57.4239890Z Expecting throwable message: 2021-12-21T00:15:57.4241809Z <"Error closing remote buffer (http://127.0.0.1:41155/v1/task/20211220_234551_00323_w4uu6.1.0.0/results/0 - 3 failures, failure duration 10.05s, total failed request time 15.06s)"> 2021-12-21T00:15:57.4242888Z to contain: 2021-12-21T00:15:57.4243637Z <"Encountered too many errors talking to a worker node"> 2021-12-21T00:15:57.4244573Z but did not. ```
non_main
flaky testhivefailurerecoveryminio testexplainanalyze tests run failures errors skipped time elapsed s failure in testsuite io trino plugin hive testhivefailurerecoveryminio testexplainanalyze time elapsed s failure java lang assertionerror expecting throwable message to contain but did not throwable that failed the check java lang runtimeexception error closing remote buffer failures failure duration total failed request time at io trino testing abstracttestingtrinoclient execute abstracttestingtrinoclient java at io trino testing distributedqueryrunner executewithqueryid distributedqueryrunner java at io trino testing abstracttestfailurerecovery failurerecoveryassert execute abstracttestfailurerecovery java at io trino testing abstracttestfailurerecovery failurerecoveryassert executeactual abstracttestfailurerecovery java at io trino testing abstracttestfailurerecovery failurerecoveryassert executeactualnoretries abstracttestfailurerecovery java at io trino testing abstracttestfailurerecovery failurerecoveryassert lambda failswithoutretries abstracttestfailurerecovery java at org assertj core api throwableassert catchthrowable throwableassert java at org assertj core api assertionsforclasstypes catchthrowable assertionsforclasstypes java at org assertj core api assertions catchthrowable assertions java at org assertj core api assertions assertthatthrownby assertions java at io trino testing abstracttestfailurerecovery failurerecoveryassert failswithoutretries abstracttestfailurerecovery java at io trino testing abstracttestfailurerecovery testselect abstracttestfailurerecovery java at io trino testing abstracttestfailurerecovery testselect abstracttestfailurerecovery java at io trino testing abstracttestfailurerecovery testselect abstracttestfailurerecovery java at io trino testing abstracttestfailurerecovery testexplainanalyze abstracttestfailurerecovery java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal 
reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at org testng internal methodinvocationhelper invokemethod methodinvocationhelper java at org testng internal invoker invokemethod invoker java at org testng internal invoker invoketestmethod invoker java at org testng internal invoker invoketestmethods invoker java at org testng internal testmethodworker invoketestmethods testmethodworker java at org testng internal testmethodworker run testmethodworker java at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base java lang thread run thread java caused by io trino spi trinotransportexception error closing remote buffer failures failure duration total failed request time at io trino operator httppagebufferclient onfailure httppagebufferclient java at com google common util concurrent futures callbacklistener run futures java at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base java lang thread run thread java caused by java lang runtimeexception java util concurrent timeoutexception idle timeout ms at io airlift http client responsehandlerutils propagate responsehandlerutils java at io airlift http client statusresponsehandler handleexception statusresponsehandler java at io airlift http client statusresponsehandler handleexception statusresponsehandler java at io airlift http client jetty jettyresponsefuture failed jettyresponsefuture java at io airlift http client jetty bufferingresponselistener oncomplete bufferingresponselistener java at org eclipse jetty client responsenotifier notifycomplete responsenotifier java at org 
eclipse jetty client responsenotifier notifycomplete responsenotifier java at org eclipse jetty client httpreceiver terminateresponse httpreceiver java at org eclipse jetty client httpreceiver terminateresponse httpreceiver java at org eclipse jetty client httpreceiver abort httpreceiver java at org eclipse jetty client httpchannel abortresponse httpchannel java at org eclipse jetty client httpchannel abort httpchannel java at org eclipse jetty client httpexchange abort httpexchange java at org eclipse jetty client httpconversation abort httpconversation java at org eclipse jetty client httprequest abort httprequest java at org eclipse jetty client http httpconnectionoverhttp abort httpconnectionoverhttp java at org eclipse jetty client http httpconnectionoverhttp close httpconnectionoverhttp java at org eclipse jetty client http httpconnectionoverhttp onidleexpired httpconnectionoverhttp java at org eclipse jetty io abstractendpoint onidleexpired abstractendpoint java at org eclipse jetty io idletimeout checkidletimeout idletimeout java at org eclipse jetty io idletimeout idlecheck idletimeout java at java base java util concurrent executors runnableadapter call executors java at java base java util concurrent futuretask run futuretask java at java base java util concurrent scheduledthreadpoolexecutor scheduledfuturetask run scheduledthreadpoolexecutor java more caused by java util concurrent timeoutexception idle timeout ms more at io trino testing abstracttestfailurerecovery lambda testselect abstracttestfailurerecovery java at io trino testing abstracttestfailurerecovery failurerecoveryassert failswithoutretries abstracttestfailurerecovery java at io trino testing abstracttestfailurerecovery testselect abstracttestfailurerecovery java at io trino testing abstracttestfailurerecovery testselect abstracttestfailurerecovery java at io trino testing abstracttestfailurerecovery testselect abstracttestfailurerecovery java at io trino testing 
abstracttestfailurerecovery testexplainanalyze abstracttestfailurerecovery java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at org testng internal methodinvocationhelper invokemethod methodinvocationhelper java at org testng internal invoker invokemethod invoker java at org testng internal invoker invoketestmethod invoker java at org testng internal invoker invoketestmethods invoker java at org testng internal testmethodworker invoketestmethods testmethodworker java at org testng internal testmethodworker run testmethodworker java at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base java lang thread run thread java results failures testhivefailurerecoveryminio abstracttestfailurerecovery testexplainanalyze abstracttestfailurerecovery testselect abstracttestfailurerecovery testselect abstracttestfailurerecovery testselect abstracttestfailurerecovery lambda testselect expecting throwable message to contain but did not
0
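The flaky-test record above fails because an AssertJ-style `hasMessageContaining` check found a transport-timeout message where it expected a "too many errors" message. A minimal Python sketch of the same message-containment check, using the actual and expected strings quoted in the record:

```python
# Sketch of the assertion the flaky test performs: the thrown error's
# message must contain an expected fragment. The actual message below is
# the transport timeout seen in the log, so the containment check fails.
actual_message = (
    "Error closing remote buffer (http://127.0.0.1:41155/v1/task/"
    "20211220_234551_00323_w4uu6.1.0.0/results/0 - 3 failures, "
    "failure duration 10.05s, total failed request time 15.06s)"
)
expected_fragment = "Encountered too many errors talking to a worker node"

def message_contains(message: str, fragment: str) -> bool:
    # Equivalent of AssertJ's hasMessageContaining(fragment)
    return fragment in message

# The test expected True; the recorded run produced False, hence the failure.
result = message_contains(actual_message, expected_fragment)
```

The mismatch, not the timeout itself, is what surfaces as the assertion error in the log.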
644,516
20,979,666,618
IssuesEvent
2022-03-28 18:34:36
bcgov/foi-flow
https://api.github.com/repos/bcgov/foi-flow
closed
Divisional Tracking Triggering Save Change Prompt
bug high priority
**Describe the bug in current situation** When a Ministry coordinator first goes into a file, and sets themselves as assignee, and then attempts to add multiple divisions - each additional division added will generate a save change prompt. If the user clicks OK - the page refreshes and all changes are lost, if they hit 'Cancel' they are able to continue. **Link bug to the User Story** **Impact of this bug** Users will get confused as to why the prompt goes up, if they press OK they will lose all changes **Chance of Occurring (high/medium/low/very low)** High - many requests go to more than one division **Pre Conditions: which Env, any pre-requisites or assumptions to execute steps?** **Steps to Reproduce** Steps to reproduce the behavior: 1. Go to an unassigned request in Call for Records as a Ministry User 2. Click on Assigned to - and select self 3. Scroll down to Divisional tracking, and attempt to add a division 4. Save change prompt should appear for each subsequent division added. <img width="1154" alt="Screen Shot 2022-03-14 at 10 21 00 AM" src="https://user-images.githubusercontent.com/12040839/158226329-7d3d4a3f-b197-4a00-b115-afcb10e4bfe6.png"> **Actual/ observed behaviour/ results** **Expected behaviour** Save change prompt should not appear as the user is not navigating away from the page. **Screenshots/ Visual Reference/ Source** If applicable, add screenshots to help explain your problem. You can use screengrab.
1.0
Divisional Tracking Triggering Save Change Prompt - **Describe the bug in current situation** When a Ministry coordinator first goes into a file, and sets themselves as assignee, and then attempts to add multiple divisions - each additional division added will generate a save change prompt. If the user clicks OK - the page refreshes and all changes are lost, if they hit 'Cancel' they are able to continue. **Link bug to the User Story** **Impact of this bug** Users will get confused as to why the prompt goes up, if they press OK they will lose all changes **Chance of Occurring (high/medium/low/very low)** High - many requests go to more than one division **Pre Conditions: which Env, any pre-requisites or assumptions to execute steps?** **Steps to Reproduce** Steps to reproduce the behavior: 1. Go to an unassigned request in Call for Records as a Ministry User 2. Click on Assigned to - and select self 3. Scroll down to Divisional tracking, and attempt to add a division 4. Save change prompt should appear for each subsequent division added. <img width="1154" alt="Screen Shot 2022-03-14 at 10 21 00 AM" src="https://user-images.githubusercontent.com/12040839/158226329-7d3d4a3f-b197-4a00-b115-afcb10e4bfe6.png"> **Actual/ observed behaviour/ results** **Expected behaviour** Save change prompt should not appear as the user is not navigating away from the page. **Screenshots/ Visual Reference/ Source** If applicable, add screenshots to help explain your problem. You can use screengrab.
non_main
divisional tracking triggering save change prompt describe the bug in current situation when a ministry coordinator first goes into a file and sets themselves as assignee and then attempts to add multiple divisions each additional division added will generate a save change prompt if the user clicks ok the page refreshes and all changes are lost if they hit cancel they are able to continue link bug to the user story impact of this bug users will get confused as to why the prompt goes up if they press ok they will lose all changes chance of occurring high medium low very low high many requests go to more than one division pre conditions which env any pre requisites or assumptions to execute steps steps to reproduce steps to reproduce the behavior go to an unassigned request in call for records as a ministry user click on assigned to and select self scroll down to divisional tracking and attempt to add a division save change prompt should appear for each subsequent division added img width alt screen shot at am src actual observed behaviour results expected behaviour save change prompt should not appear as the user is not navigating away from the page screenshots visual reference source if applicable add screenshots to help explain your problem you can use screengrab
0
1,732
6,574,849,475
IssuesEvent
2017-09-11 14:16:53
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
apt_repository: doc generation confused by file mode - interpret it as an octal value
affects_2.1 bug_report waiting_on_maintainer
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME apt_repository ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ( Debian stretch , irrelevant here) ##### SUMMARY The ansible docs website tells to set '420' as default mode for apt sources.list.d files. Which is wrong per debian apt breaks on this. Luckily this is only in the generated documentation, https://docs.ansible.com/ansible/apt_repository_module.html#options the documentation from the sources is fine but misses quotes around '0644' mode value so the docs web page generation does not interpret this as an octal value (0644 octal equals 420 decimal). https://github.com/ansible/ansible-modules-core/blob/devel/packaging/os/apt_repository.py#L48 ##### STEPS TO REPRODUCE ``` Open https://docs.ansible.com/ansible/apt_repository_module.html#options ``` ##### EXPECTED RESULTS ``` Open https://docs.ansible.com/ansible/apt_repository_module.html#options and read 0644 as default mode value for apt sources. ``` ##### ACTUAL RESULTS ``` Open https://docs.ansible.com/ansible/apt_repository_module.html#options and read 420 as default mode value for apt sources. ```
True
apt_repository: doc generation confused by file mode - interpret it as an octal value - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME apt_repository ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT ( Debian stretch , irrelevant here) ##### SUMMARY The ansible docs website tells to set '420' as default mode for apt sources.list.d files. Which is wrong per debian apt breaks on this. Luckily this is only in the generated documentation, https://docs.ansible.com/ansible/apt_repository_module.html#options the documentation from the sources is fine but misses quotes around '0644' mode value so the docs web page generation does not interpret this as an octal value (0644 octal equals 420 decimal). https://github.com/ansible/ansible-modules-core/blob/devel/packaging/os/apt_repository.py#L48 ##### STEPS TO REPRODUCE ``` Open https://docs.ansible.com/ansible/apt_repository_module.html#options ``` ##### EXPECTED RESULTS ``` Open https://docs.ansible.com/ansible/apt_repository_module.html#options and read 0644 as default mode value for apt sources. ``` ##### ACTUAL RESULTS ``` Open https://docs.ansible.com/ansible/apt_repository_module.html#options and read 420 as default mode value for apt sources. ```
main
apt repository doc generation confused by file mode interpret it as an octal value issue type bug report component name apt repository ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration os environment debian stretch irrelevant here summary the ansible docs website tells to set as default mode for apt sources list d files which is wrong per debian apt breaks on this luckily this is only in the generated documentation the documentation from the sources is fine but misses quotes around mode value so the docs web page generation does not interpret this as an octal value octal equals decimal steps to reproduce open expected results open and read as default mode value for apt sources actual results open and read as default mode value for apt sources
1
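The apt_repository record above hinges on doc-generation tooling reading the unquoted `0644` as an octal literal and rendering it as decimal 420. A small standard-library Python illustration of the same conversion, and of why quoting the value avoids it:

```python
# An unquoted 0644 is parsed as an octal number: 6*64 + 4*8 + 4 = 420.
# Quoting it ('0644') keeps it a string, which is what file-mode docs want.
unquoted_as_octal = int("0644", 8)   # how an octal-aware loader treats 0644
quoted_as_string = "0644"            # what the module documentation intends

assert unquoted_as_octal == 420      # the surprising value shown in the docs
assert quoted_as_string != "420"     # a quoted mode is never reinterpreted
```

This is why the fix described in the issue is simply to wrap the default mode value in quotes in the module's DOCUMENTATION block.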
99,906
21,056,505,282
IssuesEvent
2022-04-01 04:14:59
appsmithorg/appsmith
https://api.github.com/repos/appsmithorg/appsmith
reopened
[Bug]: BinData from MongoDB is not recognized
Bug Actions Pod Needs Triaging Mongo BE Coders Pod
### Is there an existing issue for this? - [X] I have searched the existing issues ### Description When I query MongoDB from Appsmith and try to use BinData the editor does not recognize the function. Photo provided by discord user Fouad#2407: ![image](https://user-images.githubusercontent.com/101155659/160883470-18bc3541-20aa-47e5-84a9-148d96cad011.png) I expect BinData to work as intended inside of MongoDB queries. ### Steps To Reproduce 1. Create a new Appsmith project. 2. Add a MongoDB data source. 3. Use `new BinData()` during the query. 4. Observe that the function is not recognized. ### Public Sample App _No response_ ### Version Cloud 1.6.17
1.0
[Bug]: BinData from MongoDB is not recognized - ### Is there an existing issue for this? - [X] I have searched the existing issues ### Description When I query MongoDB from Appsmith and try to use BinData the editor does not recognize the function. Photo provided by discord user Fouad#2407: ![image](https://user-images.githubusercontent.com/101155659/160883470-18bc3541-20aa-47e5-84a9-148d96cad011.png) I expect BinData to work as intended inside of MongoDB queries. ### Steps To Reproduce 1. Create a new Appsmith project. 2. Add a MongoDB data source. 3. Use `new BinData()` during the query. 4. Observe that the function is not recognized. ### Public Sample App _No response_ ### Version Cloud 1.6.17
non_main
bindata from mongodb is not recognized is there an existing issue for this i have searched the existing issues description when i query mongodb from appsmith and try to use bindata the editor does not recognize the function photo provided by discord user fouad i expect bindata to work as intended inside of mongodb queries steps to reproduce create a new appsmith project add a mongodb data source use new bindata during the query observe that the function is not recognized public sample app no response version cloud
0
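For context on the BinData record above: MongoDB's shell constructor `BinData(subtype, payload)` wraps base64-encoded bytes. A standard-library Python sketch of what such a call carries — the payload string here is a hypothetical example for illustration, not a value from the issue:

```python
import base64

# new BinData(0, "SGVsbG8=") in the mongo shell denotes binary data of
# subtype 0 whose bytes are the base64 decoding of the payload string.
subtype = 0
payload_b64 = "SGVsbG8="             # hypothetical payload for illustration
raw_bytes = base64.b64decode(payload_b64)

# Round-trip: re-encoding the bytes reproduces the payload string.
assert base64.b64encode(raw_bytes).decode() == payload_b64
```

The issue is that Appsmith's Mongo query editor does not recognize this constructor, not that the base64 encoding itself is at fault.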
35,711
2,792,924,685
IssuesEvent
2015-05-11 07:15:14
CheckiO/checkio-empire-battle
https://api.github.com/repos/CheckiO/checkio-empire-battle
closed
Units can shoot at building at edge, not only at center
complex:middle enhancement priority:high
Units try to shoot at center, but can shoot at any point of building
1.0
Units can shoot at building at edge, not only at center - Units try to shoot at center, but can shoot at any point of building
non_main
units can shoot at building at edge not only at center units try to shoot at center but can shoot at any point of building
0
22,354
31,030,430,964
IssuesEvent
2023-08-10 12:07:19
metabase/metabase
https://api.github.com/repos/metabase/metabase
closed
Error when using column reference as the second argument of the contains function
Type:Bug Priority:P2 .Backend .Team/QueryProcessor :hammer_and_wrench:
### Describe the bug It looks like Metabase doesn't accepts column reference as a source for the second argument of the `contains` function. When creating a custom column with an expression like `case(contains([Display name], [Given name]), "yes", "no")`, the question returns an error `Input to update-string-value does not match schema:  [(named [(named (not (= :value :field)) :value) nil nil] value) nil] `. ### To Reproduce Fom a sample `users` table 1. Create a question 2. Add a custom column with an expression such as `case(contains([Display name], [Given name]), "yes", "no")` 3. Show the preview 4. Returns an error `Input to update-string-value does not match schema:  [(named [(named (not (= :value :field)) :value) nil nil] value) nil] ` ### Expected behavior The expression should accept referenced columns for both contains parameters ### Logs {:database_id 2, :started_at #t "2023-06-02T10:42:44.943921Z[GMT]", :error_type :invalid-query, :json_query {:database 2, :query {:source-table 15, :expressions {:calc ["case" [[["contains" ["field" 1374 nil] ["field" 1372 nil]] "yes"]] {:default "no"}]}, :fields [["field" 1376 nil] ["field" 1374 nil] ["field" 1372 nil] ["expression" "calc" nil]], :limit 10}, :type "query", :parameters [], :middleware {:js-int-to-string? true, :add-default-userland-constraints? 
true}}, :native nil, :status :failed, :class clojure.lang.ExceptionInfo, :stacktrace ["--> driver.sql.query_processor$fn__63286$update_string_value__63293.invoke(query_processor.clj:1070)" "driver.sql.query_processor$fn__63317.invokeStatic(query_processor.clj:1080)" "driver.sql.query_processor$fn__63317.invoke(query_processor.clj:1078)" "driver.sql.query_processor$fn__63080.invokeStatic(query_processor.clj:825)" "driver.sql.query_processor$fn__63080.invoke(query_processor.clj:819)" "driver.sql.query_processor$fn__62767.invokeStatic(query_processor.clj:522)" "driver.sql.query_processor$fn__62767.invoke(query_processor.clj:519)" "driver.sql.query_processor$as.invokeStatic(query_processor.clj:984)" "driver.sql.query_processor$as.doInvoke(query_processor.clj:953)" "driver.sql.query_processor$fn__63260$iter__63262__63266$fn__63267$fn__63268.invoke(query_processor.clj:1052)" "driver.sql.query_processor$fn__63260$iter__63262__63266$fn__63267.invoke(query_processor.clj:1051)" "driver.sql.query_processor$fn__63260.invokeStatic(query_processor.clj:1051)" "driver.sql.query_processor$fn__63260.invoke(query_processor.clj:1049)" "driver.sql.query_processor$apply_top_level_clauses$fn__63538.invoke(query_processor.clj:1372)" "driver.sql.query_processor$apply_top_level_clauses.invokeStatic(query_processor.clj:1370)" "driver.sql.query_processor$apply_top_level_clauses.invoke(query_processor.clj:1366)" "driver.sql.query_processor$apply_clauses.invokeStatic(query_processor.clj:1410)" "driver.sql.query_processor$apply_clauses.invoke(query_processor.clj:1400)" "driver.sql.query_processor$apply_source_query.invokeStatic(query_processor.clj:1394)" "driver.sql.query_processor$apply_source_query.invoke(query_processor.clj:1379)" "driver.sql.query_processor$apply_clauses.invokeStatic(query_processor.clj:1408)" "driver.sql.query_processor$apply_clauses.invoke(query_processor.clj:1400)" "driver.sql.query_processor$mbql__GT_honeysql.invokeStatic(query_processor.clj:1433)" 
"driver.sql.query_processor$mbql__GT_honeysql.invoke(query_processor.clj:1424)" "driver.sql.query_processor$mbql__GT_native.invokeStatic(query_processor.clj:1442)" "driver.sql.query_processor$mbql__GT_native.invoke(query_processor.clj:1438)" "driver.sql$fn__101522.invokeStatic(sql.clj:42)" "driver.sql$fn__101522.invoke(sql.clj:40)" "query_processor.middleware.mbql_to_native$query__GT_native_form.invokeStatic(mbql_to_native.clj:14)" "query_processor.middleware.mbql_to_native$query__GT_native_form.invoke(mbql_to_native.clj:9)" "query_processor.middleware.mbql_to_native$mbql__GT_native$fn__68063.invoke(mbql_to_native.clj:21)" "query_processor$fn__70691$combined_post_process__70696$combined_post_process_STAR___70697.invoke(query_processor.clj:243)" "query_processor$fn__70691$combined_pre_process__70692$combined_pre_process_STAR___70693.invoke(query_processor.clj:240)" "query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__69083$fn__69088.invoke(resolve_database_and_driver.clj:36)" "driver$do_with_driver.invokeStatic(driver.clj:90)" "driver$do_with_driver.invoke(driver.clj:86)" "query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__69083.invoke(resolve_database_and_driver.clj:35)" "query_processor.middleware.fetch_source_query$resolve_card_id_source_tables$fn__64953.invoke(fetch_source_query.clj:310)" "query_processor.middleware.store$initialize_store$fn__65131$fn__65132.invoke(store.clj:12)" "query_processor.store$do_with_store.invokeStatic(store.clj:47)" "query_processor.store$do_with_store.invoke(store.clj:41)" "query_processor.middleware.store$initialize_store$fn__65131.invoke(store.clj:11)" "query_processor.middleware.normalize_query$normalize$fn__69372.invoke(normalize_query.clj:25)" "query_processor.middleware.constraints$add_default_userland_constraints$fn__66309.invoke(constraints.clj:54)" 
"query_processor.middleware.process_userland_query$process_userland_query$fn__69308.invoke(process_userland_query.clj:150)" "query_processor.middleware.catch_exceptions$catch_exceptions$fn__69685.invoke(catch_exceptions.clj:171)" "query_processor.reducible$async_qp$qp_STAR___59455$thunk__59457.invoke(reducible.clj:103)" "query_processor.reducible$async_qp$qp_STAR___59455.invoke(reducible.clj:109)" "query_processor.reducible$async_qp$qp_STAR___59455.invoke(reducible.clj:94)" "query_processor.reducible$sync_qp$qp_STAR___59467.doInvoke(reducible.clj:129)" "query_processor$process_userland_query.invokeStatic(query_processor.clj:362)" "query_processor$process_userland_query.doInvoke(query_processor.clj:358)" "query_processor$fn__70739$process_query_and_save_execution_BANG___70748$fn__70751.invoke(query_processor.clj:373)" "query_processor$fn__70739$process_query_and_save_execution_BANG___70748.invoke(query_processor.clj:366)" "query_processor$fn__70784$process_query_and_save_with_max_results_constraints_BANG___70793$fn__70796.invoke(query_processor.clj:385)" "query_processor$fn__70784$process_query_and_save_with_max_results_constraints_BANG___70793.invoke(query_processor.clj:378)" "api.dataset$run_query_async$fn__86545.invoke(dataset.clj:73)" "query_processor.streaming$streaming_response_STAR_$fn__54305$fn__54306.invoke(streaming.clj:166)" "query_processor.streaming$streaming_response_STAR_$fn__54305.invoke(streaming.clj:165)" "async.streaming_response$do_f_STAR_.invokeStatic(streaming_response.clj:69)" "async.streaming_response$do_f_STAR_.invoke(streaming_response.clj:67)" "async.streaming_response$do_f_async$task__36922.invoke(streaming_response.clj:88)"], :card_id nil, :context :ad-hoc, :error "Input to update-string-value does not match schema: \n\n\t [(named [(named (not (= :value :field)) :value) nil nil] value) nil] \n\n", :row_count 0, :running_time 0, :preprocessed {:database 2, :query {:source-table 15, :expressions {"calc" [:case [[[:contains [:field 1374 
nil] [:field 1372 nil]] "yes"]] {:default "no"}]}, :fields [[:field 1376 nil] [:field 1374 nil] [:field 1372 nil] [:expression "calc"]], :limit 10}, :type :query, :middleware {:js-int-to-string? true, :add-default-userland-constraints? true}, :info {:executed-by 2, :context :ad-hoc}}, :ex-data {:type :schema.core/error, :value [[:field 1372 {:metabase.query-processor.util.add-alias-info/source-table 15, :metabase.query-processor.util.add-alias-info/source-alias "given_name", :metabase.query-processor.util.add-alias-info/desired-alias "given_name", :metabase.query-processor.util.add-alias-info/position 2}] #object[metabase.driver.sql.query_processor$fn__63317$fn__63321 0x5d978eb "metabase.driver.sql.query_processor$fn__63317$fn__63321@5d978eb"]], :error [(named [(named (not (= :value :field)) :value) nil nil] value) nil]}, :data {:rows [], :cols []}} ### Information about your Metabase installation ```JSON { "browser-info": { "language": "en-US", "platform": "Win32", "userAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36 Edg/115.0.0.0", "vendor": "Google Inc." 
}, "system-info": { "file.encoding": "UTF-8", "java.runtime.name": "OpenJDK Runtime Environment", "java.runtime.version": "11.0.19+7", "java.vendor": "Eclipse Adoptium", "java.vendor.url": "https://adoptium.net/", "java.version": "11.0.19", "java.vm.name": "OpenJDK 64-Bit Server VM", "java.vm.version": "11.0.19+7", "os.name": "Linux", "os.version": "5.10.164.1-1.cm1", "user.language": "en", "user.timezone": "GMT" }, "metabase-info": { "databases": [ "postgres" ], "hosting-env": "unknown", "application-database": "postgres", "application-database-details": { "database": { "name": "PostgreSQL", "version": "11.18" }, "jdbc-driver": { "name": "PostgreSQL JDBC Driver", "version": "42.5.1" } }, "run-mode": "prod", "version": { "date": "2023-04-28", "tag": "v0.46.2", "branch": "release-x.46.x", "hash": "8967c94" }, "settings": { "report-timezone": null } } } ``` ### Severity blocking for non-technical users ### Additional context I found a very tricky workaround to make it work: - Change the expression to `case(contains([Display name], "MYVALUE"), "yes", "no")` - Convert the query to SQL - From this query, replace `LIKE '%MYVALUE%'` with `LIKE CONCAT('%', "public"."users"."given_name", '%')`. - Run it, it works! Basically the trick is to: - use SQL to be able to reference the `given_name` column properly with `"public"."users"."given_name"` - surround the `given_name` with `%` so that it performs a search that match if the string is anywhere in the source column
1.0
Error when using column reference as the second argument of the contains function - ### Describe the bug It looks like Metabase doesn't accept a column reference as a source for the second argument of the `contains` function. When creating a custom column with an expression like `case(contains([Display name], [Given name]), "yes", "no")`, the question returns an error `Input to update-string-value does not match schema:  [(named [(named (not (= :value :field)) :value) nil nil] value) nil] `. ### To Reproduce From a sample `users` table 1. Create a question 2. Add a custom column with an expression such as `case(contains([Display name], [Given name]), "yes", "no")` 3. Show the preview 4. Returns an error `Input to update-string-value does not match schema:  [(named [(named (not (= :value :field)) :value) nil nil] value) nil] ` ### Expected behavior The expression should accept referenced columns for both contains parameters ### Logs {:database_id 2, :started_at #t "2023-06-02T10:42:44.943921Z[GMT]", :error_type :invalid-query, :json_query {:database 2, :query {:source-table 15, :expressions {:calc ["case" [[["contains" ["field" 1374 nil] ["field" 1372 nil]] "yes"]] {:default "no"}]}, :fields [["field" 1376 nil] ["field" 1374 nil] ["field" 1372 nil] ["expression" "calc" nil]], :limit 10}, :type "query", :parameters [], :middleware {:js-int-to-string? true, :add-default-userland-constraints? 
true}}, :native nil, :status :failed, :class clojure.lang.ExceptionInfo, :stacktrace ["--> driver.sql.query_processor$fn__63286$update_string_value__63293.invoke(query_processor.clj:1070)" "driver.sql.query_processor$fn__63317.invokeStatic(query_processor.clj:1080)" "driver.sql.query_processor$fn__63317.invoke(query_processor.clj:1078)" "driver.sql.query_processor$fn__63080.invokeStatic(query_processor.clj:825)" "driver.sql.query_processor$fn__63080.invoke(query_processor.clj:819)" "driver.sql.query_processor$fn__62767.invokeStatic(query_processor.clj:522)" "driver.sql.query_processor$fn__62767.invoke(query_processor.clj:519)" "driver.sql.query_processor$as.invokeStatic(query_processor.clj:984)" "driver.sql.query_processor$as.doInvoke(query_processor.clj:953)" "driver.sql.query_processor$fn__63260$iter__63262__63266$fn__63267$fn__63268.invoke(query_processor.clj:1052)" "driver.sql.query_processor$fn__63260$iter__63262__63266$fn__63267.invoke(query_processor.clj:1051)" "driver.sql.query_processor$fn__63260.invokeStatic(query_processor.clj:1051)" "driver.sql.query_processor$fn__63260.invoke(query_processor.clj:1049)" "driver.sql.query_processor$apply_top_level_clauses$fn__63538.invoke(query_processor.clj:1372)" "driver.sql.query_processor$apply_top_level_clauses.invokeStatic(query_processor.clj:1370)" "driver.sql.query_processor$apply_top_level_clauses.invoke(query_processor.clj:1366)" "driver.sql.query_processor$apply_clauses.invokeStatic(query_processor.clj:1410)" "driver.sql.query_processor$apply_clauses.invoke(query_processor.clj:1400)" "driver.sql.query_processor$apply_source_query.invokeStatic(query_processor.clj:1394)" "driver.sql.query_processor$apply_source_query.invoke(query_processor.clj:1379)" "driver.sql.query_processor$apply_clauses.invokeStatic(query_processor.clj:1408)" "driver.sql.query_processor$apply_clauses.invoke(query_processor.clj:1400)" "driver.sql.query_processor$mbql__GT_honeysql.invokeStatic(query_processor.clj:1433)" 
"driver.sql.query_processor$mbql__GT_honeysql.invoke(query_processor.clj:1424)" "driver.sql.query_processor$mbql__GT_native.invokeStatic(query_processor.clj:1442)" "driver.sql.query_processor$mbql__GT_native.invoke(query_processor.clj:1438)" "driver.sql$fn__101522.invokeStatic(sql.clj:42)" "driver.sql$fn__101522.invoke(sql.clj:40)" "query_processor.middleware.mbql_to_native$query__GT_native_form.invokeStatic(mbql_to_native.clj:14)" "query_processor.middleware.mbql_to_native$query__GT_native_form.invoke(mbql_to_native.clj:9)" "query_processor.middleware.mbql_to_native$mbql__GT_native$fn__68063.invoke(mbql_to_native.clj:21)" "query_processor$fn__70691$combined_post_process__70696$combined_post_process_STAR___70697.invoke(query_processor.clj:243)" "query_processor$fn__70691$combined_pre_process__70692$combined_pre_process_STAR___70693.invoke(query_processor.clj:240)" "query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__69083$fn__69088.invoke(resolve_database_and_driver.clj:36)" "driver$do_with_driver.invokeStatic(driver.clj:90)" "driver$do_with_driver.invoke(driver.clj:86)" "query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__69083.invoke(resolve_database_and_driver.clj:35)" "query_processor.middleware.fetch_source_query$resolve_card_id_source_tables$fn__64953.invoke(fetch_source_query.clj:310)" "query_processor.middleware.store$initialize_store$fn__65131$fn__65132.invoke(store.clj:12)" "query_processor.store$do_with_store.invokeStatic(store.clj:47)" "query_processor.store$do_with_store.invoke(store.clj:41)" "query_processor.middleware.store$initialize_store$fn__65131.invoke(store.clj:11)" "query_processor.middleware.normalize_query$normalize$fn__69372.invoke(normalize_query.clj:25)" "query_processor.middleware.constraints$add_default_userland_constraints$fn__66309.invoke(constraints.clj:54)" 
"query_processor.middleware.process_userland_query$process_userland_query$fn__69308.invoke(process_userland_query.clj:150)" "query_processor.middleware.catch_exceptions$catch_exceptions$fn__69685.invoke(catch_exceptions.clj:171)" "query_processor.reducible$async_qp$qp_STAR___59455$thunk__59457.invoke(reducible.clj:103)" "query_processor.reducible$async_qp$qp_STAR___59455.invoke(reducible.clj:109)" "query_processor.reducible$async_qp$qp_STAR___59455.invoke(reducible.clj:94)" "query_processor.reducible$sync_qp$qp_STAR___59467.doInvoke(reducible.clj:129)" "query_processor$process_userland_query.invokeStatic(query_processor.clj:362)" "query_processor$process_userland_query.doInvoke(query_processor.clj:358)" "query_processor$fn__70739$process_query_and_save_execution_BANG___70748$fn__70751.invoke(query_processor.clj:373)" "query_processor$fn__70739$process_query_and_save_execution_BANG___70748.invoke(query_processor.clj:366)" "query_processor$fn__70784$process_query_and_save_with_max_results_constraints_BANG___70793$fn__70796.invoke(query_processor.clj:385)" "query_processor$fn__70784$process_query_and_save_with_max_results_constraints_BANG___70793.invoke(query_processor.clj:378)" "api.dataset$run_query_async$fn__86545.invoke(dataset.clj:73)" "query_processor.streaming$streaming_response_STAR_$fn__54305$fn__54306.invoke(streaming.clj:166)" "query_processor.streaming$streaming_response_STAR_$fn__54305.invoke(streaming.clj:165)" "async.streaming_response$do_f_STAR_.invokeStatic(streaming_response.clj:69)" "async.streaming_response$do_f_STAR_.invoke(streaming_response.clj:67)" "async.streaming_response$do_f_async$task__36922.invoke(streaming_response.clj:88)"], :card_id nil, :context :ad-hoc, :error "Input to update-string-value does not match schema: \n\n\t [(named [(named (not (= :value :field)) :value) nil nil] value) nil] \n\n", :row_count 0, :running_time 0, :preprocessed {:database 2, :query {:source-table 15, :expressions {"calc" [:case [[[:contains [:field 1374 
nil] [:field 1372 nil]] "yes"]] {:default "no"}]}, :fields [[:field 1376 nil] [:field 1374 nil] [:field 1372 nil] [:expression "calc"]], :limit 10}, :type :query, :middleware {:js-int-to-string? true, :add-default-userland-constraints? true}, :info {:executed-by 2, :context :ad-hoc}}, :ex-data {:type :schema.core/error, :value [[:field 1372 {:metabase.query-processor.util.add-alias-info/source-table 15, :metabase.query-processor.util.add-alias-info/source-alias "given_name", :metabase.query-processor.util.add-alias-info/desired-alias "given_name", :metabase.query-processor.util.add-alias-info/position 2}] #object[metabase.driver.sql.query_processor$fn__63317$fn__63321 0x5d978eb "metabase.driver.sql.query_processor$fn__63317$fn__63321@5d978eb"]], :error [(named [(named (not (= :value :field)) :value) nil nil] value) nil]}, :data {:rows [], :cols []}} ### Information about your Metabase installation ```JSON { "browser-info": { "language": "en-US", "platform": "Win32", "userAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36 Edg/115.0.0.0", "vendor": "Google Inc." 
}, "system-info": { "file.encoding": "UTF-8", "java.runtime.name": "OpenJDK Runtime Environment", "java.runtime.version": "11.0.19+7", "java.vendor": "Eclipse Adoptium", "java.vendor.url": "https://adoptium.net/", "java.version": "11.0.19", "java.vm.name": "OpenJDK 64-Bit Server VM", "java.vm.version": "11.0.19+7", "os.name": "Linux", "os.version": "5.10.164.1-1.cm1", "user.language": "en", "user.timezone": "GMT" }, "metabase-info": { "databases": [ "postgres" ], "hosting-env": "unknown", "application-database": "postgres", "application-database-details": { "database": { "name": "PostgreSQL", "version": "11.18" }, "jdbc-driver": { "name": "PostgreSQL JDBC Driver", "version": "42.5.1" } }, "run-mode": "prod", "version": { "date": "2023-04-28", "tag": "v0.46.2", "branch": "release-x.46.x", "hash": "8967c94" }, "settings": { "report-timezone": null } } } ``` ### Severity blocking for non-technical users ### Additional context I found a very tricky workaround to make it work: - Change the expression to `case(contains([Display name], "MYVALUE"), "yes", "no")` - Convert the query to SQL - From this query, replace `LIKE '%MYVALUE%'` with `LIKE CONCAT('%', "public"."users"."given_name", '%')`. - Run it, it works! Basically the trick is to: - use SQL to be able to reference the `given_name` column properly with `"public"."users"."given_name"` - surround the `given_name` with `%` so that it performs a search that match if the string is anywhere in the source column
non_main
error when using column reference as the second argument of the contains function describe the bug it looks like metabase doesn t accepts column reference as a source for the second argument of the contains function when creating a custom column with an expression like case contains yes no the question returns an error input to update string value does not match schema  value nil  to reproduce fom a sample users table create a question add a custom column with an expression such as case contains yes no show the preview returns an error input to update string value does not match schema  value nil  expected behavior the expression should accept referenced columns for both contains parameters logs database id started at t error type invalid query json query database query source table expressions calc yes default no fields limit type query parameters middleware js int to string true add default userland constraints true native nil status failed class clojure lang exceptioninfo stacktrace driver sql query processor fn update string value invoke query processor clj driver sql query processor fn invokestatic query processor clj driver sql query processor fn invoke query processor clj driver sql query processor fn invokestatic query processor clj driver sql query processor fn invoke query processor clj driver sql query processor fn invokestatic query processor clj driver sql query processor fn invoke query processor clj driver sql query processor as invokestatic query processor clj driver sql query processor as doinvoke query processor clj driver sql query processor fn iter fn fn invoke query processor clj driver sql query processor fn iter fn invoke query processor clj driver sql query processor fn invokestatic query processor clj driver sql query processor fn invoke query processor clj driver sql query processor apply top level clauses fn invoke query processor clj driver sql query processor apply top level clauses invokestatic query processor clj driver sql query 
processor apply top level clauses invoke query processor clj driver sql query processor apply clauses invokestatic query processor clj driver sql query processor apply clauses invoke query processor clj driver sql query processor apply source query invokestatic query processor clj driver sql query processor apply source query invoke query processor clj driver sql query processor apply clauses invokestatic query processor clj driver sql query processor apply clauses invoke query processor clj driver sql query processor mbql gt honeysql invokestatic query processor clj driver sql query processor mbql gt honeysql invoke query processor clj driver sql query processor mbql gt native invokestatic query processor clj driver sql query processor mbql gt native invoke query processor clj driver sql fn invokestatic sql clj driver sql fn invoke sql clj query processor middleware mbql to native query gt native form invokestatic mbql to native clj query processor middleware mbql to native query gt native form invoke mbql to native clj query processor middleware mbql to native mbql gt native fn invoke mbql to native clj query processor fn combined post process combined post process star invoke query processor clj query processor fn combined pre process combined pre process star invoke query processor clj query processor middleware resolve database and driver resolve database and driver fn fn invoke resolve database and driver clj driver do with driver invokestatic driver clj driver do with driver invoke driver clj query processor middleware resolve database and driver resolve database and driver fn invoke resolve database and driver clj query processor middleware fetch source query resolve card id source tables fn invoke fetch source query clj query processor middleware store initialize store fn fn invoke store clj query processor store do with store invokestatic store clj query processor store do with store invoke store clj query processor middleware store initialize store fn 
invoke store clj query processor middleware normalize query normalize fn invoke normalize query clj query processor middleware constraints add default userland constraints fn invoke constraints clj query processor middleware process userland query process userland query fn invoke process userland query clj query processor middleware catch exceptions catch exceptions fn invoke catch exceptions clj query processor reducible async qp qp star thunk invoke reducible clj query processor reducible async qp qp star invoke reducible clj query processor reducible async qp qp star invoke reducible clj query processor reducible sync qp qp star doinvoke reducible clj query processor process userland query invokestatic query processor clj query processor process userland query doinvoke query processor clj query processor fn process query and save execution bang fn invoke query processor clj query processor fn process query and save execution bang invoke query processor clj query processor fn process query and save with max results constraints bang fn invoke query processor clj query processor fn process query and save with max results constraints bang invoke query processor clj api dataset run query async fn invoke dataset clj query processor streaming streaming response star fn fn invoke streaming clj query processor streaming streaming response star fn invoke streaming clj async streaming response do f star invokestatic streaming response clj async streaming response do f star invoke streaming response clj async streaming response do f async task invoke streaming response clj card id nil context ad hoc error input to update string value does not match schema n n t value nil n n row count running time preprocessed database query source table expressions calc yes default no fields limit type query middleware js int to string true add default userland constraints true info executed by context ad hoc ex data type schema core error value field metabase query processor util add 
alias info source table metabase query processor util add alias info source alias given name metabase query processor util add alias info desired alias given name metabase query processor util add alias info position object error value nil data rows cols information about your metabase installation json browser info language en us platform useragent mozilla windows nt applewebkit khtml like gecko chrome safari edg vendor google inc system info file encoding utf java runtime name openjdk runtime environment java runtime version java vendor eclipse adoptium java vendor url java version java vm name openjdk bit server vm java vm version os name linux os version user language en user timezone gmt metabase info databases postgres hosting env unknown application database postgres application database details database name postgresql version jdbc driver name postgresql jdbc driver version run mode prod version date tag branch release x x hash settings report timezone null severity blocking for non technical users additional context i found a very tricky workaround to make it work change the expression to case contains myvalue yes no convert the query to sql from this query replace like myvalue with like concat public users given name run it it works basically the trick is to use sql to be able to reference the given name column properly with public users given name surround the given name with so that it performs a search that match if the string is anywhere in the source column
0
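The column-to-column "contains" workaround described in the record above can be sketched against a toy `users` table. The sketch below uses Python's stdlib `sqlite3` (where string concatenation is `||` rather than Postgres's `CONCAT`); the table contents and the `calc` alias are illustrative assumptions, not taken from the report's actual data:

```python
import sqlite3

# Toy stand-in for the report's "public"."users" table (contents invented).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (display_name TEXT, given_name TEXT)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [("Ada Lovelace", "Ada"), ("Grace Hopper", "Mary")],
)

# The trick from the workaround: surround given_name with '%' so the
# LIKE matches wherever the substring occurs inside display_name.
rows = conn.execute(
    """
    SELECT display_name,
           CASE WHEN display_name LIKE '%' || given_name || '%'
                THEN 'yes' ELSE 'no' END AS calc
    FROM users
    ORDER BY display_name
    """
).fetchall()
print(rows)  # [('Ada Lovelace', 'yes'), ('Grace Hopper', 'no')]
```

On Postgres the concatenation would be written `CONCAT('%', "public"."users"."given_name", '%')`, as in the workaround itself.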
628
4,146,924,618
IssuesEvent
2016-06-15 03:18:18
Microsoft/DirectXTex
https://api.github.com/repos/Microsoft/DirectXTex
closed
Remove VS 2012 adapter code
maintainence
As part of dropping VS 2012 projects, can clean up the following code: * Remove C4005 disable for ``stdint.h`` (workaround for bug with VS 2010 + Windows 7 SDK) * Remove C4481 disable for "override is an extension" (workaround for VS 2010 bug) * Remove ``DIRECTX_STD_CALLCONV`` std::function workaround for VS 2012 * Remove ``DIRECTX_CTOR_DEFAULT`` / ``DIRECTX_CTOR_DELETE`` macros and just use =default, =delete directly (VS 2013 or later supports this) * Remove DirectXMath 3.03 adapters for 3.06 constructs (workaround for Windows 8.0 SDK) * Make use of ``std::make_unique<>`` (C++14 draft feature supported in VS 2013) * Remove some guarded code patterns for Windows XP (i.e. functions that were added to Windows Vista) * Make consistent use of ``= {}`` to initialize memory to zero (C++11 brace init behavior fixed in VS 2013) * Remove legacy ``WCHAR`` Win32 type and use ``wchar_t``
True
Remove VS 2012 adapter code - As part of dropping VS 2012 projects, can clean up the following code: * Remove C4005 disable for ``stdint.h`` (workaround for bug with VS 2010 + Windows 7 SDK) * Remove C4481 disable for "override is an extension" (workaround for VS 2010 bug) * Remove ``DIRECTX_STD_CALLCONV`` std::function workaround for VS 2012 * Remove ``DIRECTX_CTOR_DEFAULT`` / ``DIRECTX_CTOR_DELETE`` macros and just use =default, =delete directly (VS 2013 or later supports this) * Remove DirectXMath 3.03 adapters for 3.06 constructs (workaround for Windows 8.0 SDK) * Make use of ``std::make_unique<>`` (C++14 draft feature supported in VS 2013) * Remove some guarded code patterns for Windows XP (i.e. functions that were added to Windows Vista) * Make consistent use of ``= {}`` to initialize memory to zero (C++11 brace init behavior fixed in VS 2013) * Remove legacy ``WCHAR`` Win32 type and use ``wchar_t``
main
remove vs adapter code as part of dropping vs projects can clean up the following code remove disable for stdint h workaround for bug with vs windows sdk remove disable for override is an extension workaround for vs bug remove directx std callconv std function workaround for vs remove directx ctor default directx ctor delete macros and just use default delete directly vs or later supports this remove directxmath adapters for constructs workaround for windows sdk make use of std make unique c draft feature supported in vs remove some guarded code patterns for windows xp i e functions that were added to windows vista make consistent use of to initialize memory to zero c brace init behavior fixed in vs remove legacy wchar type and use wchar t
1
183,632
6,690,084,511
IssuesEvent
2017-10-09 07:31:45
nonnymoose/xsr
https://api.github.com/repos/nonnymoose/xsr
opened
Log captured events
enhancement low priority wishful thinking
Print an informational message when something is captured. It should be at verbosity level 3
1.0
Log captured events - Print an informational message when something is captured. It should be at verbosity level 3
non_main
log captured events print an informational message when something is captured it should be at verbosity level
0
2,824
10,131,342,064
IssuesEvent
2019-08-01 19:17:02
ICPI/OCM
https://api.github.com/repos/ICPI/OCM
opened
ICPI Membership Agreement
Audience: Field Audience: HQ Type: Admin Type: Maintain Type: SOP/Guidance/Plan
Survey sent to all ICPI cluster/team members ______, also circulated on ICPI Inbrief for those interested in becoming involved. Responses by cluster can be found on [SharePoint](https://www.pepfar.net/OGAC-HQ/icpi/Shared%20Documents/Forms/AllItems.aspx?RootFolder=%2FOGAC%2DHQ%2Ficpi%2FShared%20Documents%2FClusters%2FOCM%20Team%2FResources&FolderCTID=0x01200080CC6F83D1766F4D9E3497D4529EC74A&View=%7B94C838B2%2DE166%2D4122%2DB8B4%2D7BEB9E1BC12B%7D) and are also included in the Cluster Membership Tab of the [ICPI Master Contact List](https://www.pepfar.net/OGAC-HQ/icpi/Shared%20Documents/Forms/AllItems.aspx?RootFolder=%2FOGAC%2DHQ%2Ficpi%2FShared%20Documents%2FCommunications%2FMaster%20ICPI%20Contact%20List&FolderCTID=0x012000C815322C717A7E4B8164EA374FA254EC002682B939F9BED347BD49E43D77D3C691&View=%7B94C838B2%2DE166%2D4122%2DB8B4%2D7BEB9E1BC12B%7D). Plans to implement LoE in the next iteration of the Membership agreement (most likely to be done annually).
True
ICPI Membership Agreement - Survey sent to all ICPI cluster/team members ______, also circulated on ICPI Inbrief for those interested in becoming involved. Responses by cluster can be found on [SharePoint](https://www.pepfar.net/OGAC-HQ/icpi/Shared%20Documents/Forms/AllItems.aspx?RootFolder=%2FOGAC%2DHQ%2Ficpi%2FShared%20Documents%2FClusters%2FOCM%20Team%2FResources&FolderCTID=0x01200080CC6F83D1766F4D9E3497D4529EC74A&View=%7B94C838B2%2DE166%2D4122%2DB8B4%2D7BEB9E1BC12B%7D) and are also included in the Cluster Membership Tab of the [ICPI Master Contact List](https://www.pepfar.net/OGAC-HQ/icpi/Shared%20Documents/Forms/AllItems.aspx?RootFolder=%2FOGAC%2DHQ%2Ficpi%2FShared%20Documents%2FCommunications%2FMaster%20ICPI%20Contact%20List&FolderCTID=0x012000C815322C717A7E4B8164EA374FA254EC002682B939F9BED347BD49E43D77D3C691&View=%7B94C838B2%2DE166%2D4122%2DB8B4%2D7BEB9E1BC12B%7D). Plans to implement LoE in the next iteration of the Membership agreement (most likely to be done annually).
main
icpi membership agreement survey sent to all icpi cluster team members also circulated on icpi inbrief for those interested in becoming involved responses by cluster can be found on and are also included in the cluster membership tab of the plans to implement loe in next iteration of membership agreement most likely to be done annualy
1
1,712
6,574,459,614
IssuesEvent
2017-09-11 12:58:38
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
Service Module 'enable' does not work as expected with SysV scripts on systemd Systems
affects_2.2 bug_report waiting_on_maintainer
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME Service Module ##### ANSIBLE VERSION ``` ansible --version ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = ['/usr/share/ansible'] ``` ##### CONFIGURATION ``` [defaults] gathering = smart host_key_checking = False inventory = /etc/ansible/hosts library = /usr/share/ansible log_path = /var/log/ansible/ansible.log retry_files_enabled = False stdout_callback = skippy [ssh_connection] ssh_args = -o ControlMaster=auto -o ControlPersist=1800s transport = ssh ``` ##### OS / ENVIRONMENT Ansible Controller (AMZN Linux): ``` Linux ip-10-27-0-198 4.4.23-31.54.amzn1.x86_64 #1 SMP Tue Oct 18 22:02:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux ``` Remote Target (RHEL7): ``` Linux ip-10-27-5-86.a730491757039.amazonaws.com 3.10.0-327.el7.x86_64 #1 SMP Thu Oct 29 17:29:29 EDT 2015 x86_64 x86_64 x86_64 GNU/Linux ``` ##### SUMMARY When using the Service module against a RHEL7 instance, it appears the service module is not working as intended when using SysV scripts. This was working in 2.1.2.0. It produces a strange behaviour, in that if the service exists in `/etc/init.d/$SERVICE_NAME` but not in the `chkconfig --list` output, it will fail to enable the service, as it cannot find it by name. See actual results below for a detailed analysis. To note, this works fine on Amazon Linux (which uses SysV by default). ##### STEPS TO REPRODUCE - Place a SysV init script in /etc/init.d - Run the service module with the service name, and `enabled=yes` ##### EXPECTED RESULTS It is expected the SysV script should be enabled. ##### ACTUAL RESULTS # On the Target instance ``` [root@ip-10-27-5-86 ~]# ls -la /etc/init.d/ total 60 drwxr-xr-x. 2 root root 4096 Nov 3 18:21 . drwxr-xr-x. 10 root root 4096 Nov 3 18:14 .. -rwxr-xr-x. 1 root root 318 Aug 21 2015 choose_repo -rw-r--r--. 1 root root 15131 Sep 12 06:47 functions -rwxrwxr-x. 1 root root 5056 Nov 3 18:21 logstash -rwxr-xr-x. 
1 root root 2989 Sep 12 06:47 netconsole -rwxr-xr-x. 1 root root 6643 Sep 12 06:47 network -rw-r--r--. 1 root root 1160 Oct 7 09:56 README -rwxr-xr-x. 1 root root 1868 Aug 21 2015 rh-cloud-firstboot -rwxr-xr-x. 1 root root 2437 Jun 26 2015 rhns [root@ip-10-27-5-86 ~]# chkconfig --list Note: This output shows SysV services only and does not include native systemd services. SysV configuration data might be overridden by native systemd configuration. If you want to list systemd services use 'systemctl list-unit-files'. To see services enabled on particular target use 'systemctl list-dependencies [target]'. choose_repo 0:off 1:off 2:on 3:on 4:on 5:on 6:off netconsole 0:off 1:off 2:off 3:off 4:off 5:off 6:off network 0:off 1:off 2:on 3:on 4:on 5:on 6:off rh-cloud-firstboot 0:off 1:off 2:off 3:off 4:off 5:off 6:off rhnsd 0:off 1:off 2:on 3:on 4:on 5:on 6:off ``` # From the Controller Try to enable the service ``` [A730491757039\joeskyyy@ip-10-27-0-198 mc_logs]$ ansible -i ~/hosts all -m service -a "name=logstash enabled=yes" -vvvv --become Using /etc/ansible/ansible.cfg as config file Loading callback plugin minimal of type stdout, v2.0 from /usr/local/lib/python2.7/site-packages/ansible/plugins/callback/__init__.pyc Using module file /usr/local/lib/python2.7/site-packages/ansible/modules/core/system/setup.py <10.27.5.77> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.77> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/joeskyyy/.ssh/joeskyyy.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r 10.27.5.77 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478215841.03-177671857836162 `" && echo ansible-tmp-1478215841.03-177671857836162="` echo 
$HOME/.ansible/tmp/ansible-tmp-1478215841.03-177671857836162 `" ) && sleep 0'"'"'' <10.27.5.77> PUT /tmp/tmpjS20Hc TO /home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.03-177671857836162/setup.py <10.27.5.77> SSH: EXEC sftp -b - -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/joeskyyy/.ssh/joeskyyy.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r '[10.27.5.77]' <10.27.5.77> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.77> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/joeskyyy/.ssh/joeskyyy.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r 10.27.5.77 '/bin/sh -c '"'"'chmod u+x /home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.03-177671857836162/ /home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.03-177671857836162/setup.py && sleep 0'"'"'' <10.27.5.77> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.77> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/joeskyyy/.ssh/joeskyyy.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r -tt 10.27.5.77 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-uarnsjvhnpksvmndiulirdbysmztqcrt; /usr/bin/python /home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.03-177671857836162/setup.py; rm -rf 
"/home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.03-177671857836162/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"'' Running systemd Using module file /usr/local/lib/python2.7/site-packages/ansible/modules/core/system/systemd.py <10.27.5.77> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.77> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/joeskyyy/.ssh/joeskyyy.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r 10.27.5.77 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478215841.49-3431261890578 `" && echo ansible-tmp-1478215841.49-3431261890578="` echo $HOME/.ansible/tmp/ansible-tmp-1478215841.49-3431261890578 `" ) && sleep 0'"'"'' <10.27.5.77> PUT /tmp/tmpmO92Mk TO /home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.49-3431261890578/systemd.py <10.27.5.77> SSH: EXEC sftp -b - -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/joeskyyy/.ssh/joeskyyy.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r '[10.27.5.77]' <10.27.5.77> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.77> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/joeskyyy/.ssh/joeskyyy.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r 10.27.5.77 '/bin/sh -c '"'"'chmod u+x 
/home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.49-3431261890578/ /home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.49-3431261890578/systemd.py && sleep 0'"'"'' <10.27.5.77> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.77> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/joeskyyy/.ssh/joeskyyy.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r -tt 10.27.5.77 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-ltdddlwqyhmziebpurcpqjdscqyerinz; /usr/bin/python /home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.49-3431261890578/systemd.py; rm -rf "/home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.49-3431261890578/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"'' 10.27.5.77 | FAILED! => { "changed": false, "failed": true, "invocation": { "module_args": { "daemon_reload": false, "enabled": true, "masked": null, "name": "logstash", "state": null, "user": false } }, "msg": "Could not find the requested service \"'logstash'\": " } ``` # On the Target Instance, run chkconfig logstash on, disable it, so we can re-run the service module ``` [root@ip-10-27-5-86 ~]# chkconfig logstash on [root@ip-10-27-5-86 ~]# chkconfig --list Note: This output shows SysV services only and does not include native systemd services. SysV configuration data might be overridden by native systemd configuration. If you want to list systemd services use 'systemctl list-unit-files'. To see services enabled on particular target use 'systemctl list-dependencies [target]'. 
choose_repo 0:off 1:off 2:on 3:on 4:on 5:on 6:off logstash 0:off 1:off 2:on 3:on 4:on 5:on 6:off netconsole 0:off 1:off 2:off 3:off 4:off 5:off 6:off network 0:off 1:off 2:on 3:on 4:on 5:on 6:off rh-cloud-firstboot 0:off 1:off 2:off 3:off 4:off 5:off 6:off rhnsd 0:off 1:off 2:on 3:on 4:on 5:on 6:off [root@ip-10-27-5-86 ~]# chkconfig logstash off [root@ip-10-27-5-86 ~]# chkconfig --list Note: This output shows SysV services only and does not include native systemd services. SysV configuration data might be overridden by native systemd configuration. If you want to list systemd services use 'systemctl list-unit-files'. To see services enabled on particular target use 'systemctl list-dependencies [target]'. choose_repo 0:off 1:off 2:on 3:on 4:on 5:on 6:off logstash 0:off 1:off 2:off 3:off 4:off 5:off 6:off netconsole 0:off 1:off 2:off 3:off 4:off 5:off 6:off network 0:off 1:off 2:on 3:on 4:on 5:on 6:off rh-cloud-firstboot 0:off 1:off 2:off 3:off 4:off 5:off 6:off rhnsd 0:off 1:off 2:on 3:on 4:on 5:on 6:off ``` # On the Controller, run the service module ``` [joeskyyy@ip-10-27-0-198 mc_logs]$ ansible -i ~/hosts all -m service -a "name=logstash enabled=yes" -vvvv --become Using /etc/ansible/ansible.cfg as config file Loading callback plugin minimal of type stdout, v2.0 from /usr/local/lib/python2.7/site-packages/ansible/plugins/callback/__init__.pyc Using module file /usr/local/lib/python2.7/site-packages/ansible/modules/core/system/setup.py <10.27.5.86> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.86> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/joeskyyy/.ssh/joeskyyy.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r 10.27.5.86 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo 
$HOME/.ansible/tmp/ansible-tmp-1478214073.21-133810435447209 `" && echo ansible-tmp-1478214073.21-133810435447209="` echo $HOME/.ansible/tmp/ansible-tmp-1478214073.21-133810435447209 `" ) && sleep 0'"'"'' <10.27.5.86> PUT /tmp/tmpPM0eIY TO /home/ec2-user/.ansible/tmp/ansible-tmp-1478214073.21-133810435447209/setup.py <10.27.5.86> SSH: EXEC sftp -b - -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/joeskyyy/.ssh/joeskyyy.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r '[10.27.5.86]' <10.27.5.86> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.86> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/joeskyyy/.ssh/joeskyyy.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r 10.27.5.86 '/bin/sh -c '"'"'chmod u+x /home/ec2-user/.ansible/tmp/ansible-tmp-1478214073.21-133810435447209/ /home/ec2-user/.ansible/tmp/ansible-tmp-1478214073.21-133810435447209/setup.py && sleep 0'"'"'' <10.27.5.86> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.86> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/joeskyyy/.ssh/joeskyyy.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r -tt 10.27.5.86 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-dpsldcejhnvsmslcmfjivastwsjrebmu; 
/usr/bin/python /home/ec2-user/.ansible/tmp/ansible-tmp-1478214073.21-133810435447209/setup.py; rm -rf "/home/ec2-user/.ansible/tmp/ansible-tmp-1478214073.21-133810435447209/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"'' Running systemd Using module file /usr/local/lib/python2.7/site-packages/ansible/modules/core/system/systemd.py <10.27.5.86> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.86> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/joeskyyy/.ssh/joeskyyy.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r 10.27.5.86 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478214082.63-24802563064003 `" && echo ansible-tmp-1478214082.63-24802563064003="` echo $HOME/.ansible/tmp/ansible-tmp-1478214082.63-24802563064003 `" ) && sleep 0'"'"'' <10.27.5.86> PUT /tmp/tmpxeCT1Q TO /home/ec2-user/.ansible/tmp/ansible-tmp-1478214082.63-24802563064003/systemd.py <10.27.5.86> SSH: EXEC sftp -b - -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/joeskyyy/.ssh/joeskyyy.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r '[10.27.5.86]' <10.27.5.86> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.86> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/joeskyyy/.ssh/joeskyyy.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o 
ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r 10.27.5.86 '/bin/sh -c '"'"'chmod u+x /home/ec2-user/.ansible/tmp/ansible-tmp-1478214082.63-24802563064003/ /home/ec2-user/.ansible/tmp/ansible-tmp-1478214082.63-24802563064003/systemd.py && sleep 0'"'"'' <10.27.5.86> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.86> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/joeskyyy/.ssh/joeskyyy.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r -tt 10.27.5.86 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-hvoafitpgnzzmujuqatnpatrpfpvuhim; /usr/bin/python /home/ec2-user/.ansible/tmp/ansible-tmp-1478214082.63-24802563064003/systemd.py; rm -rf "/home/ec2-user/.ansible/tmp/ansible-tmp-1478214082.63-24802563064003/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"'' 10.27.5.86 | SUCCESS => { "changed": true, "enabled": true, "invocation": { "module_args": { "daemon_reload": false, "enabled": true, "masked": null, "name": "logstash", "state": null, "user": false } }, "name": "logstash", "status": { "ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "systemd-journald.socket remote-fs.target system.slice basic.target", "AllowIsolate": "no", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", 
"ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "LSB: Starts Logstash as a daemon.", "DevicePolicy": "auto", "Documentation": "man:systemd-sysv-generator(8)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecStart": "{ path=/etc/rc.d/init.d/logstash ; argv[]=/etc/rc.d/init.d/logstash start ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStop": "{ path=/etc/rc.d/init.d/logstash ; argv[]=/etc/rc.d/init.d/logstash stop ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/run/systemd/generator.late/logstash.service", "GuessMainPID": "no", "IOScheduling": "0", "Id": "logstash.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "no", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "31146", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "31146", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "logstash.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": 
"no", "NotifyAccess": "none", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "yes", "Requires": "basic.target", "Restart": "no", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "SourcePath": "/etc/rc.d/init.d/logstash", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TimeoutStartUSec": "5min", "TimeoutStopUSec": "5min", "TimerSlackNSec": "50000", "Transient": "no", "Type": "forking", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "bad", "Wants": "system.slice", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0" } } ```
True
Service Module 'enable' does not work as expected with SysV scripts on systemd Systems

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
Service Module

##### ANSIBLE VERSION
```
ansible --version
ansible 2.2.0.0
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/usr/share/ansible']
```

##### CONFIGURATION
```
[defaults]
gathering = smart
host_key_checking = False
inventory = /etc/ansible/hosts
library = /usr/share/ansible
log_path = /var/log/ansible/ansible.log
retry_files_enabled = False
stdout_callback = skippy

[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=1800s
transport = ssh
```

##### OS / ENVIRONMENT
Ansible Controller (AMZN Linux):
```
Linux ip-10-27-0-198 4.4.23-31.54.amzn1.x86_64 #1 SMP Tue Oct 18 22:02:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
```
Remote Target (RHEL7):
```
Linux ip-10-27-5-86.a730491757039.amazonaws.com 3.10.0-327.el7.x86_64 #1 SMP Thu Oct 29 17:29:29 EDT 2015 x86_64 x86_64 x86_64 GNU/Linux
```

##### SUMMARY
When using the service module against a RHEL7 instance, the module does not appear to work as intended with SysV scripts. This was working in 2.1.2.0. It produces strange behaviour: if the service exists as `/etc/init.d/$SERVICE_NAME` but does not appear in `chkconfig --list` output, the module fails to enable the service because it cannot find it by name. See the actual results below for a detailed analysis. Note that this works fine on Amazon Linux (which uses SysV by default).

##### STEPS TO REPRODUCE
- Place a SysV init script in /etc/init.d
- Run the service module with the service name and `enabled=yes`

##### EXPECTED RESULTS
It is expected the SysV script should be enabled.

##### ACTUAL RESULTS
# On the Target instance
```
[root@ip-10-27-5-86 ~]# ls -la /etc/init.d/
total 60
drwxr-xr-x.  2 root root  4096 Nov  3 18:21 .
drwxr-xr-x. 10 root root  4096 Nov  3 18:14 ..
-rwxr-xr-x.  1 root root   318 Aug 21  2015 choose_repo
-rw-r--r--.  1 root root 15131 Sep 12 06:47 functions
-rwxrwxr-x.  1 root root  5056 Nov  3 18:21 logstash
-rwxr-xr-x.  1 root root  2989 Sep 12 06:47 netconsole
-rwxr-xr-x.  1 root root  6643 Sep 12 06:47 network
-rw-r--r--.  1 root root  1160 Oct  7 09:56 README
-rwxr-xr-x.  1 root root  1868 Aug 21  2015 rh-cloud-firstboot
-rwxr-xr-x.  1 root root  2437 Jun 26  2015 rhns
[root@ip-10-27-5-86 ~]# chkconfig --list
Note: This output shows SysV services only and does not include native
      systemd services. SysV configuration data might be overridden by native
      systemd configuration. If you want to list systemd services use
      'systemctl list-unit-files'. To see services enabled on particular
      target use 'systemctl list-dependencies [target]'.
choose_repo        0:off 1:off 2:on  3:on  4:on  5:on  6:off
netconsole         0:off 1:off 2:off 3:off 4:off 5:off 6:off
network            0:off 1:off 2:on  3:on  4:on  5:on  6:off
rh-cloud-firstboot 0:off 1:off 2:off 3:off 4:off 5:off 6:off
rhnsd              0:off 1:off 2:on  3:on  4:on  5:on  6:off
```

# From the Controller
Try to enable the service
```
[A730491757039\joeskyyy@ip-10-27-0-198 mc_logs]$ ansible -i ~/hosts all -m service -a "name=logstash enabled=yes" -vvvv --become
Using /etc/ansible/ansible.cfg as config file
Loading callback plugin minimal of type stdout, v2.0 from /usr/local/lib/python2.7/site-packages/ansible/plugins/callback/__init__.pyc
Using module file /usr/local/lib/python2.7/site-packages/ansible/modules/core/system/setup.py
<10.27.5.77> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<10.27.5.77> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/joeskyyy/.ssh/joeskyyy.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r 10.27.5.77 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo
$HOME/.ansible/tmp/ansible-tmp-1478215841.03-177671857836162 `" && echo ansible-tmp-1478215841.03-177671857836162="` echo $HOME/.ansible/tmp/ansible-tmp-1478215841.03-177671857836162 `" ) && sleep 0'"'"'' <10.27.5.77> PUT /tmp/tmpjS20Hc TO /home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.03-177671857836162/setup.py <10.27.5.77> SSH: EXEC sftp -b - -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/joeskyyy/.ssh/joeskyyy.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r '[10.27.5.77]' <10.27.5.77> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.77> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/joeskyyy/.ssh/joeskyyy.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r 10.27.5.77 '/bin/sh -c '"'"'chmod u+x /home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.03-177671857836162/ /home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.03-177671857836162/setup.py && sleep 0'"'"'' <10.27.5.77> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.77> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/joeskyyy/.ssh/joeskyyy.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r -tt 10.27.5.77 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-uarnsjvhnpksvmndiulirdbysmztqcrt; 
/usr/bin/python /home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.03-177671857836162/setup.py; rm -rf "/home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.03-177671857836162/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"'' Running systemd Using module file /usr/local/lib/python2.7/site-packages/ansible/modules/core/system/systemd.py <10.27.5.77> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.77> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/joeskyyy/.ssh/joeskyyy.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r 10.27.5.77 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478215841.49-3431261890578 `" && echo ansible-tmp-1478215841.49-3431261890578="` echo $HOME/.ansible/tmp/ansible-tmp-1478215841.49-3431261890578 `" ) && sleep 0'"'"'' <10.27.5.77> PUT /tmp/tmpmO92Mk TO /home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.49-3431261890578/systemd.py <10.27.5.77> SSH: EXEC sftp -b - -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/joeskyyy/.ssh/joeskyyy.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r '[10.27.5.77]' <10.27.5.77> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.77> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/joeskyyy/.ssh/joeskyyy.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o 
ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r 10.27.5.77 '/bin/sh -c '"'"'chmod u+x /home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.49-3431261890578/ /home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.49-3431261890578/systemd.py && sleep 0'"'"''
<10.27.5.77> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<10.27.5.77> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/joeskyyy/.ssh/joeskyyy.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r -tt 10.27.5.77 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-ltdddlwqyhmziebpurcpqjdscqyerinz; /usr/bin/python /home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.49-3431261890578/systemd.py; rm -rf "/home/ec2-user/.ansible/tmp/ansible-tmp-1478215841.49-3431261890578/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"''
10.27.5.77 | FAILED! => {
    "changed": false,
    "failed": true,
    "invocation": {
        "module_args": {
            "daemon_reload": false,
            "enabled": true,
            "masked": null,
            "name": "logstash",
            "state": null,
            "user": false
        }
    },
    "msg": "Could not find the requested service \"'logstash'\": "
}
```

# On the Target Instance, run chkconfig logstash on, disable it, so we can re-run the service module
```
[root@ip-10-27-5-86 ~]# chkconfig logstash on
[root@ip-10-27-5-86 ~]# chkconfig --list
Note: This output shows SysV services only and does not include native
      systemd services. SysV configuration data might be overridden by native
      systemd configuration. If you want to list systemd services use
      'systemctl list-unit-files'. To see services enabled on particular
      target use 'systemctl list-dependencies [target]'.
choose_repo        0:off 1:off 2:on  3:on  4:on  5:on  6:off
logstash           0:off 1:off 2:on  3:on  4:on  5:on  6:off
netconsole         0:off 1:off 2:off 3:off 4:off 5:off 6:off
network            0:off 1:off 2:on  3:on  4:on  5:on  6:off
rh-cloud-firstboot 0:off 1:off 2:off 3:off 4:off 5:off 6:off
rhnsd              0:off 1:off 2:on  3:on  4:on  5:on  6:off
[root@ip-10-27-5-86 ~]# chkconfig logstash off
[root@ip-10-27-5-86 ~]# chkconfig --list
Note: This output shows SysV services only and does not include native
      systemd services. SysV configuration data might be overridden by native
      systemd configuration. If you want to list systemd services use
      'systemctl list-unit-files'. To see services enabled on particular
      target use 'systemctl list-dependencies [target]'.
choose_repo        0:off 1:off 2:on  3:on  4:on  5:on  6:off
logstash           0:off 1:off 2:off 3:off 4:off 5:off 6:off
netconsole         0:off 1:off 2:off 3:off 4:off 5:off 6:off
network            0:off 1:off 2:on  3:on  4:on  5:on  6:off
rh-cloud-firstboot 0:off 1:off 2:off 3:off 4:off 5:off 6:off
rhnsd              0:off 1:off 2:on  3:on  4:on  5:on  6:off
```

# On the Controller, run the service module
```
[joeskyyy@ip-10-27-0-198 mc_logs]$ ansible -i ~/hosts all -m service -a "name=logstash enabled=yes" -vvvv --become
Using /etc/ansible/ansible.cfg as config file
Loading callback plugin minimal of type stdout, v2.0 from /usr/local/lib/python2.7/site-packages/ansible/plugins/callback/__init__.pyc
Using module file /usr/local/lib/python2.7/site-packages/ansible/modules/core/system/setup.py
<10.27.5.86> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<10.27.5.86> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/joeskyyy/.ssh/joeskyyy.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r 10.27.5.86 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo
$HOME/.ansible/tmp/ansible-tmp-1478214073.21-133810435447209 `" && echo ansible-tmp-1478214073.21-133810435447209="` echo $HOME/.ansible/tmp/ansible-tmp-1478214073.21-133810435447209 `" ) && sleep 0'"'"'' <10.27.5.86> PUT /tmp/tmpPM0eIY TO /home/ec2-user/.ansible/tmp/ansible-tmp-1478214073.21-133810435447209/setup.py <10.27.5.86> SSH: EXEC sftp -b - -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/joeskyyy/.ssh/joeskyyy.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r '[10.27.5.86]' <10.27.5.86> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.86> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/joeskyyy/.ssh/joeskyyy.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r 10.27.5.86 '/bin/sh -c '"'"'chmod u+x /home/ec2-user/.ansible/tmp/ansible-tmp-1478214073.21-133810435447209/ /home/ec2-user/.ansible/tmp/ansible-tmp-1478214073.21-133810435447209/setup.py && sleep 0'"'"'' <10.27.5.86> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.86> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/joeskyyy/.ssh/joeskyyy.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r -tt 10.27.5.86 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-dpsldcejhnvsmslcmfjivastwsjrebmu; 
/usr/bin/python /home/ec2-user/.ansible/tmp/ansible-tmp-1478214073.21-133810435447209/setup.py; rm -rf "/home/ec2-user/.ansible/tmp/ansible-tmp-1478214073.21-133810435447209/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"'' Running systemd Using module file /usr/local/lib/python2.7/site-packages/ansible/modules/core/system/systemd.py <10.27.5.86> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.86> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/joeskyyy/.ssh/joeskyyy.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r 10.27.5.86 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478214082.63-24802563064003 `" && echo ansible-tmp-1478214082.63-24802563064003="` echo $HOME/.ansible/tmp/ansible-tmp-1478214082.63-24802563064003 `" ) && sleep 0'"'"'' <10.27.5.86> PUT /tmp/tmpxeCT1Q TO /home/ec2-user/.ansible/tmp/ansible-tmp-1478214082.63-24802563064003/systemd.py <10.27.5.86> SSH: EXEC sftp -b - -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/joeskyyy/.ssh/joeskyyy.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r '[10.27.5.86]' <10.27.5.86> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.86> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/joeskyyy/.ssh/joeskyyy.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o 
ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r 10.27.5.86 '/bin/sh -c '"'"'chmod u+x /home/ec2-user/.ansible/tmp/ansible-tmp-1478214082.63-24802563064003/ /home/ec2-user/.ansible/tmp/ansible-tmp-1478214082.63-24802563064003/systemd.py && sleep 0'"'"'' <10.27.5.86> ESTABLISH SSH CONNECTION FOR USER: ec2-user <10.27.5.86> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=1800s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/joeskyyy/.ssh/joeskyyy.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/home/joeskyyy/.ansible/cp/ansible-ssh-%h-%p-%r -tt 10.27.5.86 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-hvoafitpgnzzmujuqatnpatrpfpvuhim; /usr/bin/python /home/ec2-user/.ansible/tmp/ansible-tmp-1478214082.63-24802563064003/systemd.py; rm -rf "/home/ec2-user/.ansible/tmp/ansible-tmp-1478214082.63-24802563064003/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"'' 10.27.5.86 | SUCCESS => { "changed": true, "enabled": true, "invocation": { "module_args": { "daemon_reload": false, "enabled": true, "masked": null, "name": "logstash", "state": null, "user": false } }, "name": "logstash", "status": { "ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "systemd-journald.socket remote-fs.target system.slice basic.target", "AllowIsolate": "no", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", 
"ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "LSB: Starts Logstash as a daemon.", "DevicePolicy": "auto", "Documentation": "man:systemd-sysv-generator(8)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecStart": "{ path=/etc/rc.d/init.d/logstash ; argv[]=/etc/rc.d/init.d/logstash start ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStop": "{ path=/etc/rc.d/init.d/logstash ; argv[]=/etc/rc.d/init.d/logstash stop ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/run/systemd/generator.late/logstash.service", "GuessMainPID": "no", "IOScheduling": "0", "Id": "logstash.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "no", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "31146", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "31146", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "logstash.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": 
"no", "NotifyAccess": "none", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "yes", "Requires": "basic.target", "Restart": "no", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "SourcePath": "/etc/rc.d/init.d/logstash", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TimeoutStartUSec": "5min", "TimeoutStopUSec": "5min", "TimerSlackNSec": "50000", "Transient": "no", "Type": "forking", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "bad", "Wants": "system.slice", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0" } } ```
main
service module enable does not work as expected with sysv scripts on systemd systems issue type bug report component name service module ansible version ansible version ansible config file etc ansible ansible cfg configured module search path configuration gathering smart host key checking false inventory etc ansible hosts library usr share ansible log path var log ansible ansible log retry files enabled false stdout callback skippy ssh args o controlmaster auto o controlpersist transport ssh os environment ansible controller amzn linux linux ip smp tue oct utc gnu linux remote target linux ip amazonaws com smp thu oct edt gnu linux summary when using the service module against a instance it appears the service module is not working as intended when using sysv scripts this was working in it produces a strange behaviour in that if the service exists in etc init d service name but not in a chkconfig list output it will fail to enable the service as it cannot find it by the name see actual results below for details analysis to note this works fine on amazon linux which uses sysv by default steps to reproduce place a sysv init script in etc init d run the service module with the service name and enabled yes expected results it is expected the sysv script should be enabled actual results on the target instance ls la etc init d total drwxr xr x root root nov drwxr xr x root root nov rwxr xr x root root aug choose repo rw r r root root sep functions rwxrwxr x root root nov logstash rwxr xr x root root sep netconsole rwxr xr x root root sep network rw r r root root oct readme rwxr xr x root root aug rh cloud firstboot rwxr xr x root root jun rhns chkconfig list note this output shows sysv services only and does not include native systemd services sysv configuration data might be overridden by native systemd configuration if you want to list systemd services use systemctl list unit files to see services enabled on particular target use systemctl list dependencies choose 
repo off off on on on on off netconsole off off off off off off off network off off on on on on off rh cloud firstboot off off off off off off off rhnsd off off on on on on off from the controller try to enable the service ansible i hosts all m service a name logstash enabled yes vvvv become using etc ansible ansible cfg as config file loading callback plugin minimal of type stdout from usr local lib site packages ansible plugins callback init pyc using module file usr local lib site packages ansible modules core system setup py establish ssh connection for user user ssh exec ssh vvv o controlmaster auto o controlpersist o stricthostkeychecking no o identityfile home joeskyyy ssh joeskyyy pem o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user user o connecttimeout o controlpath home joeskyyy ansible cp ansible ssh h p r bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home user ansible tmp ansible tmp setup py ssh exec sftp b vvv o controlmaster auto o controlpersist o stricthostkeychecking no o identityfile home joeskyyy ssh joeskyyy pem o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user user o connecttimeout o controlpath home joeskyyy ansible cp ansible ssh h p r establish ssh connection for user user ssh exec ssh vvv o controlmaster auto o controlpersist o stricthostkeychecking no o identityfile home joeskyyy ssh joeskyyy pem o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user user o connecttimeout o controlpath home joeskyyy ansible cp ansible ssh h p r bin sh c chmod u x home user ansible tmp ansible tmp home user ansible tmp ansible tmp setup py sleep establish ssh connection for user user ssh exec ssh vvv o 
controlmaster auto o controlpersist o stricthostkeychecking no o identityfile home joeskyyy ssh joeskyyy pem o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user user o connecttimeout o controlpath home joeskyyy ansible cp ansible ssh h p r tt bin sh c sudo h s n u root bin sh c echo become success uarnsjvhnpksvmndiulirdbysmztqcrt usr bin python home user ansible tmp ansible tmp setup py rm rf home user ansible tmp ansible tmp dev null sleep running systemd using module file usr local lib site packages ansible modules core system systemd py establish ssh connection for user user ssh exec ssh vvv o controlmaster auto o controlpersist o stricthostkeychecking no o identityfile home joeskyyy ssh joeskyyy pem o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user user o connecttimeout o controlpath home joeskyyy ansible cp ansible ssh h p r bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home user ansible tmp ansible tmp systemd py ssh exec sftp b vvv o controlmaster auto o controlpersist o stricthostkeychecking no o identityfile home joeskyyy ssh joeskyyy pem o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user user o connecttimeout o controlpath home joeskyyy ansible cp ansible ssh h p r establish ssh connection for user user ssh exec ssh vvv o controlmaster auto o controlpersist o stricthostkeychecking no o identityfile home joeskyyy ssh joeskyyy pem o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user user o connecttimeout o controlpath home joeskyyy ansible cp ansible ssh h p r bin sh c chmod u x home user ansible tmp ansible tmp 
home user ansible tmp ansible tmp systemd py sleep establish ssh connection for user user ssh exec ssh vvv o controlmaster auto o controlpersist o stricthostkeychecking no o identityfile home joeskyyy ssh joeskyyy pem o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user user o connecttimeout o controlpath home joeskyyy ansible cp ansible ssh h p r tt bin sh c sudo h s n u root bin sh c echo become success ltdddlwqyhmziebpurcpqjdscqyerinz usr bin python home user ansible tmp ansible tmp systemd py rm rf home user ansible tmp ansible tmp dev null sleep failed changed false failed true invocation module args daemon reload false enabled true masked null name logstash state null user false msg could not find the requested service logstash on the target instance run chkconfig logstash on disable it so we can re run the service module chkconfig logstash on chkconfig list note this output shows sysv services only and does not include native systemd services sysv configuration data might be overridden by native systemd configuration if you want to list systemd services use systemctl list unit files to see services enabled on particular target use systemctl list dependencies choose repo off off on on on on off logstash off off on on on on off netconsole off off off off off off off network off off on on on on off rh cloud firstboot off off off off off off off rhnsd off off on on on on off chkconfig logstash off chkconfig list note this output shows sysv services only and does not include native systemd services sysv configuration data might be overridden by native systemd configuration if you want to list systemd services use systemctl list unit files to see services enabled on particular target use systemctl list dependencies choose repo off off on on on on off logstash off off off off off off off netconsole off off off off off off off network off off on on on on off rh cloud 
firstboot off off off off off off off rhnsd off off on on on on off on the controller run the service module ansible i hosts all m service a name logstash enabled yes vvvv become using etc ansible ansible cfg as config file loading callback plugin minimal of type stdout from usr local lib site packages ansible plugins callback init pyc using module file usr local lib site packages ansible modules core system setup py establish ssh connection for user user ssh exec ssh vvv o controlmaster auto o controlpersist o stricthostkeychecking no o identityfile home joeskyyy ssh joeskyyy pem o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user user o connecttimeout o controlpath home joeskyyy ansible cp ansible ssh h p r bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home user ansible tmp ansible tmp setup py ssh exec sftp b vvv o controlmaster auto o controlpersist o stricthostkeychecking no o identityfile home joeskyyy ssh joeskyyy pem o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user user o connecttimeout o controlpath home joeskyyy ansible cp ansible ssh h p r establish ssh connection for user user ssh exec ssh vvv o controlmaster auto o controlpersist o stricthostkeychecking no o identityfile home joeskyyy ssh joeskyyy pem o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user user o connecttimeout o controlpath home joeskyyy ansible cp ansible ssh h p r bin sh c chmod u x home user ansible tmp ansible tmp home user ansible tmp ansible tmp setup py sleep establish ssh connection for user user ssh exec ssh vvv o controlmaster auto o controlpersist o stricthostkeychecking no o identityfile home joeskyyy ssh joeskyyy pem o 
kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user user o connecttimeout o controlpath home joeskyyy ansible cp ansible ssh h p r tt bin sh c sudo h s n u root bin sh c echo become success dpsldcejhnvsmslcmfjivastwsjrebmu usr bin python home user ansible tmp ansible tmp setup py rm rf home user ansible tmp ansible tmp dev null sleep running systemd using module file usr local lib site packages ansible modules core system systemd py establish ssh connection for user user ssh exec ssh vvv o controlmaster auto o controlpersist o stricthostkeychecking no o identityfile home joeskyyy ssh joeskyyy pem o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user user o connecttimeout o controlpath home joeskyyy ansible cp ansible ssh h p r bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home user ansible tmp ansible tmp systemd py ssh exec sftp b vvv o controlmaster auto o controlpersist o stricthostkeychecking no o identityfile home joeskyyy ssh joeskyyy pem o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user user o connecttimeout o controlpath home joeskyyy ansible cp ansible ssh h p r establish ssh connection for user user ssh exec ssh vvv o controlmaster auto o controlpersist o stricthostkeychecking no o identityfile home joeskyyy ssh joeskyyy pem o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user user o connecttimeout o controlpath home joeskyyy ansible cp ansible ssh h p r bin sh c chmod u x home user ansible tmp ansible tmp home user ansible tmp ansible tmp systemd py sleep establish ssh connection for user user ssh exec ssh vvv o 
controlmaster auto o controlpersist o stricthostkeychecking no o identityfile home joeskyyy ssh joeskyyy pem o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user user o connecttimeout o controlpath home joeskyyy ansible cp ansible ssh h p r tt bin sh c sudo h s n u root bin sh c echo become success hvoafitpgnzzmujuqatnpatrpfpvuhim usr bin python home user ansible tmp ansible tmp systemd py rm rf home user ansible tmp ansible tmp dev null sleep success changed true enabled true invocation module args daemon reload false enabled true masked null name logstash state null user false name logstash status activeentertimestampmonotonic activeexittimestampmonotonic activestate inactive after systemd journald socket remote fs target system slice basic target allowisolate no assertresult no asserttimestampmonotonic before shutdown target blockioaccounting no blockioweight cpuaccounting no cpuquotapersecusec infinity cpuschedulingpolicy cpuschedulingpriority cpuschedulingresetonfork no cpushares canisolate no canreload no canstart yes canstop yes capabilityboundingset conditionresult no conditiontimestampmonotonic conflicts shutdown target controlpid defaultdependencies yes delegate no description lsb starts logstash as a daemon devicepolicy auto documentation man systemd sysv generator execmaincode execmainexittimestampmonotonic execmainpid execmainstarttimestampmonotonic execmainstatus execstart path etc rc d init d logstash argv etc rc d init d logstash start ignore errors no start time stop time pid code null status execstop path etc rc d init d logstash argv etc rc d init d logstash stop ignore errors no start time stop time pid code null status failureaction none filedescriptorstoremax fragmentpath run systemd generator late logstash service guessmainpid no ioscheduling id logstash service ignoreonisolate no ignoreonsnapshot no ignoresigpipe no inactiveentertimestampmonotonic 
inactiveexittimestampmonotonic jobtimeoutaction none jobtimeoutusec killmode process killsignal limitas limitcore limitcpu limitdata limitfsize limitlocks limitmemlock limitmsgqueue limitnice limitnofile limitnproc limitrss limitrtprio limitrttime limitsigpending limitstack loadstate loaded mainpid memoryaccounting no memorycurrent memorylimit mountflags names logstash service needdaemonreload no nice nonewprivileges no nonblocking no notifyaccess none oomscoreadjust onfailurejobmode replace permissionsstartonly no privatedevices no privatenetwork no privatetmp no protecthome no protectsystem no refusemanualstart no refusemanualstop no remainafterexit yes requires basic target restart no restartusec result success rootdirectorystartonly no runtimedirectorymode sameprocessgroup no securebits sendsighup no sendsigkill yes slice system slice sourcepath etc rc d init d logstash standarderror inherit standardinput null standardoutput journal startlimitaction none startlimitburst startlimitinterval startupblockioweight startupcpushares statuserrno stopwhenunneeded no substate dead sysloglevelprefix yes syslogpriority systemcallerrornumber ttyreset no ttyvhangup no ttyvtdisallocate no timeoutstartusec timeoutstopusec timerslacknsec transient no type forking umask unitfilepreset disabled unitfilestate bad wants system slice watchdogtimestampmonotonic watchdogusec
1
3,244
12,368,707,008
IssuesEvent
2020-05-18 14:13:32
Kashdeya/Tiny-Progressions
https://api.github.com/repos/Kashdeya/Tiny-Progressions
closed
[1.12.2] Server crash with Wub Hammer
Version not Maintainted
Hello! One of our players crashed the server by using the Wub Hammer. Here is the crash report: https://pastebin.com/DZAMjw23
True
main
1
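The service-module record further above (and the Wub Hammer record just here) close out this block; for the service-module report, the failure mode is that a SysV script present in /etc/init.d but unknown to chkconfig cannot be enabled on a systemd host. A commonly used workaround — a sketch under that assumption, not a fix taken from the report itself — is to register the script with chkconfig first so systemd's sysv generator can see it, then enable it. The service name `logstash` is the one from the report:

```yaml
# Workaround sketch (assumption, not from the report): on a RHEL7-style
# systemd host, register the SysV script with chkconfig before asking the
# service module to enable it.
- name: register the SysV init script with chkconfig
  command: chkconfig --add logstash
  become: yes
  changed_when: false   # chkconfig --add is safe to repeat

- name: enable the service now that chkconfig knows about it
  service:
    name: logstash
    enabled: yes
  become: yes
```

This mirrors the manual `chkconfig logstash on` step that the reporter shows working on the target instance.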
964
4,707,810,914
IssuesEvent
2016-10-13 21:15:39
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
include_role does not work with with_items
affects_2.3 bug_report waiting_on_maintainer
##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
include_role

##### ANSIBLE VERSION
```
2.3.0
```

##### SUMMARY
The new "include_role" task does not work with with_items

##### STEPS TO REPRODUCE
In the example playbook below, the role "myrole" is first run stand-alone, and then as a with_items loop.

```
---
- name: Do stuff to stuff
  hosts: localhost
  tasks:
    - name: set fact
      set_fact:
        thing: otherstuff

    - name: myrole
      include_role:
        name: myrole
      vars:
        thing: "asdf"

    - name: myrole
      with_items:
        - "aone"
        - "atwo"
      include_role:
        name: myrole
      vars:
        thing: "{{ item }}"
```

##### EXPECTED RESULTS
Success

##### ACTUAL RESULTS
Error part of the output

```
TASK [myrole] ******************************************************************
task path: /mnt/c/Users/trond/Documents/projects/ansibledev/rolestesting/main.yml:13
failed: [localhost] (item=aone) => {
    "failed": true,
    "item": "aone",
    "msg": "No role was specified to include"
}
failed: [localhost] (item=atwo) => {
    "failed": true,
    "item": "atwo",
    "msg": "No role was specified to include"
}
ERROR! Unexpected Exception: 'results'
```
True
main
1
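For context on the include_role record above: looping over `include_role` was made to work in releases after the 2.3 build shown in the report. A minimal sketch of the intended play on such a later release — an assumption about the reader's Ansible version (the `loop` keyword dates from 2.5), not something the report itself uses:

```yaml
# Sketch: the same intent as the report's play, written for an Ansible
# release where looping over include_role works.
---
- name: Do stuff to stuff
  hosts: localhost
  tasks:
    - name: myrole, once per item
      include_role:
        name: myrole
      vars:
        thing: "{{ item }}"
      loop:
        - aone
        - atwo
```

On such a release each iteration receives its own `thing` value, which is the behaviour the reporter expected from the `with_items` form.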
1,902
6,577,555,850
IssuesEvent
2017-09-12 01:44:10
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
GCE module missing local SSD option
affects_2.0 bug_report cloud feature_idea gce waiting_on_maintainer
##### Issue Type:
Bug Report

##### Plugin Name:
gce

##### Ansible Version:
```
ansible 2.0.1.0
  config file = /Users/vwoo/.ansible.cfg
  configured module search path = Default w/o overrides
```

##### Ansible Configuration:
```
[defaults]
host_key_checking = False
```

##### Environment:
N/A

##### Summary:
Google Compute Engine has supported [local SSD scratch disks](https://cloud.google.com/compute/docs/disks/local-ssd) for a while. These are very useful, high-performance ephemeral disks you can attach to instances _only at create time_. Libcloud [supports creating instances with local SSDs](https://github.com/apache/libcloud/blob/trunk/demos/gce_demo.py#L331) already (courtesy @erjohnso). However, the official [ansible gce module](http://docs.ansible.com/ansible/gce_module.html) does not provide a way to attach these local disks.

##### Steps To Reproduce:
Ideally, we would like to be able to say something like:

```yml
gce:
  instance_names: example
  local_ssd:
    - interface: nvme
    - interface: nvme
```

which would create an instance with two local SSDs using the NVMe interface.
True
main
1
5,237
26,552,541,242
IssuesEvent
2023-01-20 09:12:05
bazelbuild/intellij
https://api.github.com/repos/bazelbuild/intellij
closed
External dependencies are not resolved in CLion
type: bug type: feature request topic: bazel topic: external dependencies awaiting-maintainer
Attempting to use the plugin with CLion, it appears that external dependencies are not being resolved. The code at https://github.com/guyincognito24601/externalrepositorytest reproduces this issue. If you open mainprogram in CLion, you will see that the application builds correctly, however, the IDE fails to find the path `external/bar/hello.h`. ![image](https://user-images.githubusercontent.com/32616257/35867310-75966f86-0b27-11e8-84a7-3f58733b5de4.png)
True
main
1
179,634
13,892,235,463
IssuesEvent
2020-10-19 11:54:38
CSOIreland/PxStat
https://api.github.com/repos/CSOIreland/PxStat
closed
[BUG] Invalid daily time range displayed in pill when Search or listing page used
bug fixed released tested
**Describe the bug**
The incorrect daily time range is displayed for a table when the listing page or search option is used

**To Reproduce**
Searched for table CBM03 or selected from BERD Region listing

**Expected behavior**
Correct date should appear in the pill

**Screenshots**
Correct details taken from Last Updated page
![image](https://user-images.githubusercontent.com/44975474/90775698-0cebd800-e2f1-11ea-8f4b-3cc1ec3c7eb3.png)
Incorrect details taken from search or listing page option
![image](https://user-images.githubusercontent.com/44975474/90775825-3b69b300-e2f1-11ea-9138-801bdb53c77a.png)
1.0
non_main
0
1,714
6,574,460,855
IssuesEvent
2017-09-11 12:58:52
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
Nxos_reboot ends in timeout error when it's successful
affects_2.2 bug_report networking waiting_on_maintainer
##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
nxos_reboot

##### ANSIBLE VERSION
```
ansible 2.2.0.0
  config file = /home/emarq/Solutions.Network.Automation/MAS/Ansible/cisco/nexus/ansible.cfg
  configured module search path = Default w/o overrides
```

##### CONFIGURATION
```
[defaults]
hostfile=localstage
#hostfile=mas-b43
ansible_ssh_user=admin
ansible_ssh_private_key_file=/home/emarq/.ssh/masd-rsa
host_key_checking=False
```

##### OS / ENVIRONMENT
Linux rr1masdansible 4.4.0-45-generic #66-Ubuntu SMP Wed Oct 19 14:12:37 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

##### SUMMARY
Switch is rebooted but Ansible errors out.

##### STEPS TO REPRODUCE
```
---
- name: copy configs
  hosts:
    - n35-bmc
    - basebmctemplate
    - n35-tor
    - basetortemplate
    - basetor40gtemplate
    - n35-agg
    - baseaggtemplate
  remote_user: admin
  gather_facts: no
  connection: local
  vars:
    cli:
      host: "{{ ansible_host }}"
      transport: cli
      username: admin
      ssh_keyfile: /srv/tftpboot/my-rsa.pub
  roles:
    - copyfirmware

roles/copyfirmware/tasks/main.yml
---
- nxos_reboot:
    provider: "{{ cli }}"
    confirm: true
    host: "{{ ansible_host }}"
    username: admin
    ssh_keyfile: /srv/tftpboot/my-rsa.pub
```

##### EXPECTED RESULTS
reload switch end with a success value.

##### ACTUAL RESULTS
```
TASK [copyfirmware : nxos_reboot] **********************************************
task path: /home/emarq/Solutions.Network.Automation/MAS/Ansible/cisco/nexus/roles/copyfirmware/tasks/main.yml:23
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/network/nxos/nxos_reboot.py
<10.10.228.60> ESTABLISH LOCAL CONNECTION FOR USER: emarq
<10.10.228.60> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478208427.24-135809306234074 `" && echo ansible-tmp-1478208427.24-135809306234074="` echo $HOME/.ansible/tmp/ansible-tmp-1478208427.24-135809306234074 `" ) && sleep 0'
<10.10.228.60> PUT /tmp/tmp31WWF5 TO /home/emarq/.ansible/tmp/ansible-tmp-1478208427.24-135809306234074/nxos_reboot.py
<10.10.228.60> EXEC /bin/sh -c 'chmod u+x /home/emarq/.ansible/tmp/ansible-tmp-1478208427.24-135809306234074/ /home/emarq/.ansible/tmp/ansible-tmp-1478208427.24-135809306234074/nxos_reboot.py && sleep 0'
<10.10.228.60> EXEC /bin/sh -c '/usr/bin/python /home/emarq/.ansible/tmp/ansible-tmp-1478208427.24-135809306234074/nxos_reboot.py; rm -rf "/home/emarq/.ansible/tmp/ansible-tmp-1478208427.24-135809306234074/" > /dev/null 2>&1 && sleep 0'
fatal: [rr1-n22-r09-3132hl-3-1d]: FAILED! => {
    "changed": false,
    "error": "timeout trying to send command: reload\r",
    "failed": true,
    "invocation": {
        "module_args": {
            "auth_pass": null,
            "authorize": false,
            "config": null,
            "confirm": true,
            "host": "10.10.228.60",
            "include_defaults": "False",
            "password": null,
            "port": null,
            "provider": {
                "host": "10.10.228.60",
                "ssh_keyfile": "/srv/tftpboot/my-rsa.pub",
                "transport": "cli",
                "username": "admin"
            },
            "save": false,
            "ssh_keyfile": "/srv/tftpboot/my-rsa.pub",
            "timeout": 10,
            "transport": "cli",
            "use_ssl": false,
            "username": "admin",
            "validate_certs": true
        },
        "module_name": "nxos_reboot"
    },
    "msg": "Error sending ['reload']"
}
to retry, use: --limit @/home/emarq/Solutions.Network.Automation/MAS/Ansible/cisco/nexus/nexusbaseconfig.retry
```
True
main
nxos reboot ends in timeout error when it s successfull issue type bug report component name nxos reboot ansible version ansible config file home emarq solutions network automation mas ansible cisco nexus ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables hostfile localstage hostfile mas ansible ssh user admin ansible ssh private key file home emarq ssh masd rsa host key checking false os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific linux generic ubuntu smp wed oct utc gnu linux summary switch is rebooted but ansible errors out steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name copy configs hosts bmc basebmctemplate tor basetortemplate agg baseaggtemplate remote user admin gather facts no connection local vars cli host ansible host transport cli username admin ssh keyfile srv tftpboot my rsa pub roles copyfirmware roles copyfirmware tasks main yml nxos reboot provider cli confirm true host ansible host username admin ssh keyfile srv tftpboot my rsa pub expected results reload switch end with a success value actual results task task path home emarq solutions network automation mas ansible cisco nexus roles copyfirmware tasks main yml using module file usr lib dist packages ansible modules core network nxos nxos reboot py establish local connection for user emarq exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home emarq ansible tmp ansible tmp nxos reboot py exec bin sh c chmod u x home emarq ansible tmp ansible tmp home emarq ansible tmp ansible tmp nxos reboot py sleep exec bin sh c usr bin python home emarq ansible tmp ansible tmp nxos reboot py rm rf home emarq ansible tmp 
ansible tmp dev null sleep fatal failed changed false error timeout trying to send command reload r failed true invocation module args auth pass null authorize false config null confirm true host include defaults false password null port null provider host ssh keyfile srv tftpboot my rsa pub transport cli username admin save false ssh keyfile srv tftpboot my rsa pub timeout transport cli use ssl false username admin validate certs true module name nxos reboot msg error sending to retry use limit home emarq solutions network automation mas ansible cisco nexus nexusbaseconfig retry
1
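The nxos_reboot record above reports a timeout on the `reload` command even though the switch does reboot — the SSH session drops as the device goes down, so the transport never sees a reply. As an illustrative sketch only (not the module's actual code, and `CommandTimeout`/`send_reload` are hypothetical names), a common pattern is to treat a timeout on the reload command as the expected outcome:

```python
# Illustrative sketch: when a device reboots, the transport often times out
# because the connection drops mid-command; callers commonly treat that
# specific timeout as an expected outcome rather than a failure.

class CommandTimeout(Exception):
    """Raised by the (hypothetical) transport when the session drops."""
    pass

def send_reload(send_command):
    """send_command: callable that sends one CLI command and may raise
    CommandTimeout if the session dies while the device goes down."""
    try:
        send_command("reload")
        return "reload acknowledged"
    except CommandTimeout:
        # Expected path: the switch dropped the session while rebooting.
        return "reload assumed successful (connection dropped)"

def _dropper(cmd):
    # Simulates the session dying as the device reboots.
    raise CommandTimeout()

result = send_reload(_dropper)
```

This mirrors why the record's "error: timeout trying to send command: reload" can coincide with a successful reboot.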
209,300
23,708,308,252
IssuesEvent
2022-08-30 04:58:24
sureng-ws-ibm/go-revel-examples
https://api.github.com/repos/sureng-ws-ibm/go-revel-examples
closed
github.com/revel/revel-v0.21.0: 1 vulnerabilities (highest severity is: 6.5) - autoclosed
security vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/revel/revel-v0.21.0</b></p></summary> <p>A high productivity, full-stack web framework for the Go language.</p> <p> <p>Found in HEAD commit: <a href="https://github.com/sureng-ws-ibm/go-revel-examples/commit/b0438166a002979fc4778716b4927b41b7ec5ed5">b0438166a002979fc4778716b4927b41b7ec5ed5</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | --- | --- | | [WS-2021-0192](https://github.com/revel/revel/commit/d160ecb72207824005b19778594cbdc272e8a605) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.5 | github.com/revel/revel-v0.21.0 | Direct | v1.0.0 | &#10060; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> WS-2021-0192</summary> ### Vulnerable Library - <b>github.com/revel/revel-v0.21.0</b></p> <p>A high productivity, full-stack web framework for the Go language.</p> <p> Dependency Hierarchy: - :x: **github.com/revel/revel-v0.21.0** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/sureng-ws-ibm/go-revel-examples/commit/b0438166a002979fc4778716b4927b41b7ec5ed5">b0438166a002979fc4778716b4927b41b7ec5ed5</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> Unsanitized input in the query parser in github.com/revel/revel before v1.0.0 allows remote attackers to cause resource exhaustion via memory allocation. 
<p>Publish Date: 2021-04-14 <p>URL: <a href=https://github.com/revel/revel/commit/d160ecb72207824005b19778594cbdc272e8a605>WS-2021-0192</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>6.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://osv.dev/vulnerability/GO-2020-0003">https://osv.dev/vulnerability/GO-2020-0003</a></p> <p>Release Date: 2021-04-14</p> <p>Fix Resolution: v1.0.0</p> </p> <p></p> </details>
True
github.com/revel/revel-v0.21.0: 1 vulnerabilities (highest severity is: 6.5) - autoclosed - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/revel/revel-v0.21.0</b></p></summary> <p>A high productivity, full-stack web framework for the Go language.</p> <p> <p>Found in HEAD commit: <a href="https://github.com/sureng-ws-ibm/go-revel-examples/commit/b0438166a002979fc4778716b4927b41b7ec5ed5">b0438166a002979fc4778716b4927b41b7ec5ed5</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | --- | --- | | [WS-2021-0192](https://github.com/revel/revel/commit/d160ecb72207824005b19778594cbdc272e8a605) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.5 | github.com/revel/revel-v0.21.0 | Direct | v1.0.0 | &#10060; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> WS-2021-0192</summary> ### Vulnerable Library - <b>github.com/revel/revel-v0.21.0</b></p> <p>A high productivity, full-stack web framework for the Go language.</p> <p> Dependency Hierarchy: - :x: **github.com/revel/revel-v0.21.0** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/sureng-ws-ibm/go-revel-examples/commit/b0438166a002979fc4778716b4927b41b7ec5ed5">b0438166a002979fc4778716b4927b41b7ec5ed5</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> Unsanitized input in the query parser in github.com/revel/revel before v1.0.0 allows remote attackers to cause resource exhaustion via memory allocation. 
<p>Publish Date: 2021-04-14 <p>URL: <a href=https://github.com/revel/revel/commit/d160ecb72207824005b19778594cbdc272e8a605>WS-2021-0192</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>6.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://osv.dev/vulnerability/GO-2020-0003">https://osv.dev/vulnerability/GO-2020-0003</a></p> <p>Release Date: 2021-04-14</p> <p>Fix Resolution: v1.0.0</p> </p> <p></p> </details>
non_main
github com revel revel vulnerabilities highest severity is autoclosed vulnerable library github com revel revel a high productivity full stack web framework for the go language found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available medium github com revel revel direct details ws vulnerable library github com revel revel a high productivity full stack web framework for the go language dependency hierarchy x github com revel revel vulnerable library found in head commit a href found in base branch master vulnerability details unsanitized input in the query parser in github com revel revel before allows remote attackers to cause resource exhaustion via memory allocation publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution
0
29,965
14,356,163,236
IssuesEvent
2020-11-30 11:08:00
mozilla/addons-server
https://api.github.com/repos/mozilla/addons-server
opened
Investigate taar and taar-lite powered API endpoints performance
component: api component: performance
This is probably out of our control, but taar and taar-lite powered API endpoints are considerably slower than they used to be: ![Screenshot_2020-11-30 AMO Prod frontend APIs usage performance - Grafana(2)](https://user-images.githubusercontent.com/187006/100602685-8342a200-3304-11eb-86f9-5c3713a2fa8f.png) ![Screenshot_2020-11-30 AMO Prod frontend APIs usage performance - Grafana(1)](https://user-images.githubusercontent.com/187006/100602686-83db3880-3304-11eb-95e6-34f0e8f1349f.png) This is probably a combination of their migration to GCP and/or changes on their side, but we should investigate to find out if there is something we did that caused this, and what we could do to improve performance regardless of the cause.
True
Investigate taar and taar-lite powered API endpoints performance - This is probably out of our control, but taar and taar-lite powered API endpoints are considerably slower than they used to be: ![Screenshot_2020-11-30 AMO Prod frontend APIs usage performance - Grafana(2)](https://user-images.githubusercontent.com/187006/100602685-8342a200-3304-11eb-86f9-5c3713a2fa8f.png) ![Screenshot_2020-11-30 AMO Prod frontend APIs usage performance - Grafana(1)](https://user-images.githubusercontent.com/187006/100602686-83db3880-3304-11eb-95e6-34f0e8f1349f.png) This is probably a combination of their migration to GCP and/or changes on their side, but we should investigate to find out if there is something we did that caused this, and what we could do to improve performance regardless of the cause.
non_main
investigate taar and taar lite powered api endpoints performance this is probably out of our control but taar and taar lite powered api endpoints are considerably slower than they used to be this is probably a combination of their migration to gcp and or changes on their side but we should investigate to find out if there is something we did that caused this and what we could do to improve performance regardless of the cause
0
4,771
24,584,192,362
IssuesEvent
2022-10-13 18:09:41
carbon-design-system/carbon
https://api.github.com/repos/carbon-design-system/carbon
closed
[Feature Request]: add event handler for SideNavMenu expanding
type: enhancement 💡 proposal: needs more research 🕵️‍♀️ status: waiting for maintainer response 💬
### Summary It would be desirable to be able to call a function when a SideNavMenu opens or closes. ### Justification I would like this to be able to show some extra information depending on what SideNavMenu was last "touched". This is specifically useful if you're grouping items and want to give an overview of the groups content. Its possible to do the following as a workaround, however it goes against the `string` prop type for `title` and therefore can cause errors. ``` <SideNavMenu title={( <span onClick={() => {e.stopPropagation(); myHandler();} > My group </span> )} > ... </SideNavMenu> ``` ### Desired UX and success metrics Be able to call a function when the the SideNavMenu is opened or closed ### Required functionality It could be an `onClick` (or similar) prop given to the component. It may be good to know if the click resulted in the menu being expanded or vice versa to use in the function. `onClick={(expanded) => {...}}` or `onChange` `onChange={(expanded) => {...}}` I think this would be a better DX than allowing objects/components in the `title` prop like the workaround mentioned earlier. ### Specific timeline issues / requests _No response_ ### Available extra resources _No response_ ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
True
[Feature Request]: add event handler for SideNavMenu expanding - ### Summary It would be desirable to be able to call a function when a SideNavMenu opens or closes. ### Justification I would like this to be able to show some extra information depending on what SideNavMenu was last "touched". This is specifically useful if you're grouping items and want to give an overview of the groups content. Its possible to do the following as a workaround, however it goes against the `string` prop type for `title` and therefore can cause errors. ``` <SideNavMenu title={( <span onClick={() => {e.stopPropagation(); myHandler();} > My group </span> )} > ... </SideNavMenu> ``` ### Desired UX and success metrics Be able to call a function when the the SideNavMenu is opened or closed ### Required functionality It could be an `onClick` (or similar) prop given to the component. It may be good to know if the click resulted in the menu being expanded or vice versa to use in the function. `onClick={(expanded) => {...}}` or `onChange` `onChange={(expanded) => {...}}` I think this would be a better DX than allowing objects/components in the `title` prop like the workaround mentioned earlier. ### Specific timeline issues / requests _No response_ ### Available extra resources _No response_ ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
main
add event handler for sidenavmenu expanding summary it would be desirable to be able to call a function when a sidenavmenu opens or closes justification i would like this to be able to show some extra information depending on what sidenavmenu was last touched this is specifically useful if you re grouping items and want to give an overview of the groups content its possible to do the following as a workaround however it goes against the string prop type for title and therefore can cause errors sidenavmenu title span onclick e stoppropagation myhandler my group desired ux and success metrics be able to call a function when the the sidenavmenu is opened or closed required functionality it could be an onclick or similar prop given to the component it may be good to know if the click resulted in the menu being expanded or vice versa to use in the function onclick expanded or onchange onchange expanded i think this would be a better dx than allowing objects components in the title prop like the workaround mentioned earlier specific timeline issues requests no response available extra resources no response code of conduct i agree to follow this project s
1
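The Carbon feature request above asks for an `onChange`-style callback fired when a collapsible menu expands or collapses, instead of the JSX workaround of stuffing a clickable `<span>` into the `title` prop. The requested callback-on-toggle pattern can be sketched framework-agnostically — this is an illustrative model only, not Carbon's `SideNavMenu` implementation, and `CollapsibleMenu` is a made-up name:

```python
# Illustrative sketch of the requested API: a component that invokes a
# user-supplied callback with its new expanded state on every toggle.

class CollapsibleMenu:
    def __init__(self, on_change=None):
        self.expanded = False
        self._on_change = on_change  # analogous to the proposed onChange prop

    def toggle(self):
        self.expanded = not self.expanded
        if self._on_change:
            # Pass the new state, matching the proposed onChange(expanded).
            self._on_change(self.expanded)

states = []
menu = CollapsibleMenu(on_change=states.append)
menu.toggle()  # expand
menu.toggle()  # collapse
```

Passing the boolean state to the callback covers the record's note that the handler "may be good to know if the click resulted in the menu being expanded or vice versa".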
169,523
20,841,772,391
IssuesEvent
2022-03-21 01:29:59
UpendoVentures/Page-Settings-Editor
https://api.github.com/repos/UpendoVentures/Page-Settings-Editor
opened
CVE-2022-24773 (Medium) detected in node-forge-0.10.0.tgz
security vulnerability
## CVE-2022-24773 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-forge-0.10.0.tgz</b></p></summary> <p>JavaScript implementations of network transports, cryptography, ciphers, PKI, message digests, and various utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/node-forge/-/node-forge-0.10.0.tgz">https://registry.npmjs.org/node-forge/-/node-forge-0.10.0.tgz</a></p> <p>Path to dependency file: /Modules/PageSettingsEditor/package.json</p> <p>Path to vulnerable library: /Modules/PageSettingsEditor/node_modules/node-forge/package.json</p> <p> Dependency Hierarchy: - webpack-dev-server-3.11.0.tgz (Root Library) - selfsigned-1.10.8.tgz - :x: **node-forge-0.10.0.tgz** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Forge (also called `node-forge`) is a native implementation of Transport Layer Security in JavaScript. Prior to version 1.3.0, RSA PKCS#1 v1.5 signature verification code does not properly check `DigestInfo` for a proper ASN.1 structure. This can lead to successful verification with signatures that contain invalid structures but a valid digest. The issue has been addressed in `node-forge` version 1.3.0. There are currently no known workarounds. 
<p>Publish Date: 2022-03-18 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-24773>CVE-2022-24773</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-24773">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-24773</a></p> <p>Release Date: 2022-03-18</p> <p>Fix Resolution: node-forge - 1.3.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-24773 (Medium) detected in node-forge-0.10.0.tgz - ## CVE-2022-24773 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-forge-0.10.0.tgz</b></p></summary> <p>JavaScript implementations of network transports, cryptography, ciphers, PKI, message digests, and various utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/node-forge/-/node-forge-0.10.0.tgz">https://registry.npmjs.org/node-forge/-/node-forge-0.10.0.tgz</a></p> <p>Path to dependency file: /Modules/PageSettingsEditor/package.json</p> <p>Path to vulnerable library: /Modules/PageSettingsEditor/node_modules/node-forge/package.json</p> <p> Dependency Hierarchy: - webpack-dev-server-3.11.0.tgz (Root Library) - selfsigned-1.10.8.tgz - :x: **node-forge-0.10.0.tgz** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Forge (also called `node-forge`) is a native implementation of Transport Layer Security in JavaScript. Prior to version 1.3.0, RSA PKCS#1 v1.5 signature verification code does not properly check `DigestInfo` for a proper ASN.1 structure. This can lead to successful verification with signatures that contain invalid structures but a valid digest. The issue has been addressed in `node-forge` version 1.3.0. There are currently no known workarounds. 
<p>Publish Date: 2022-03-18 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-24773>CVE-2022-24773</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-24773">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-24773</a></p> <p>Release Date: 2022-03-18</p> <p>Fix Resolution: node-forge - 1.3.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_main
cve medium detected in node forge tgz cve medium severity vulnerability vulnerable library node forge tgz javascript implementations of network transports cryptography ciphers pki message digests and various utilities library home page a href path to dependency file modules pagesettingseditor package json path to vulnerable library modules pagesettingseditor node modules node forge package json dependency hierarchy webpack dev server tgz root library selfsigned tgz x node forge tgz vulnerable library found in base branch main vulnerability details forge also called node forge is a native implementation of transport layer security in javascript prior to version rsa pkcs signature verification code does not properly check digestinfo for a proper asn structure this can lead to successful verification with signatures that contain invalid structures but a valid digest the issue has been addressed in node forge version there are currently no known workarounds publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution node forge step up your open source security game with whitesource
0
3,410
13,182,074,136
IssuesEvent
2020-08-12 15:13:41
duo-labs/cloudmapper
https://api.github.com/repos/duo-labs/cloudmapper
closed
Feature request: Data flow between SNS, SQS, S3, Lambda
map unmaintained_functionality
## User story As a cloud security auditor I would like to have a tool that shows me how AWS services exchange information so that I can better understand attack vectors. I would like to see a new command in cloudmapper that graphs: * Messages sent to SNS topic X are sent to SQS queue Y * Lambda function Z is triggered when SQS Y message is received * S3Put on bucket W triggers ... ## Requirements If SNS topic X are sent to SQS queue Y and lambda function Z is called when a message arrives to SQS queue Y, then that should be graphed as three dots (X, Y, Z) with lines connecting X->Y , Y-Z. ## References [Tweet](https://twitter.com/AndresRiancho/status/1099053968687329288)
True
Feature request: Data flow between SNS, SQS, S3, Lambda - ## User story As a cloud security auditor I would like to have a tool that shows me how AWS services exchange information so that I can better understand attack vectors. I would like to see a new command in cloudmapper that graphs: * Messages sent to SNS topic X are sent to SQS queue Y * Lambda function Z is triggered when SQS Y message is received * S3Put on bucket W triggers ... ## Requirements If SNS topic X are sent to SQS queue Y and lambda function Z is called when a message arrives to SQS queue Y, then that should be graphed as three dots (X, Y, Z) with lines connecting X->Y , Y-Z. ## References [Tweet](https://twitter.com/AndresRiancho/status/1099053968687329288)
main
feature request data flow between sns sqs lambda user story as a cloud security auditor i would like to have a tool that shows me how aws services exchange information so that i can better understand attack vectors i would like to see a new command in cloudmapper that graphs messages sent to sns topic x are sent to sqs queue y lambda function z is triggered when sqs y message is received on bucket w triggers requirements if sns topic x are sent to sqs queue y and lambda function z is called when a message arrives to sqs queue y then that should be graphed as three dots x y z with lines connecting x y y z references
1
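The cloudmapper feature request above asks to graph event flow between AWS services — SNS topic X delivering to SQS queue Y, which triggers Lambda function Z, rendered as three dots with edges X→Y and Y→Z. As an illustrative sketch only (not cloudmapper's implementation; the function and node names are invented for the example), the requirement can be modeled as a small directed graph:

```python
# Illustrative sketch (not cloudmapper code): model the requested
# SNS -> SQS -> Lambda event flow as nodes and directed edges.

def build_event_flow_graph(subscriptions, triggers):
    """subscriptions: [(sns_topic, sqs_queue)] pairs from topic subscriptions.
    triggers: [(sqs_queue, lambda_fn)] pairs from event source mappings.
    Returns (nodes, edges) where edges are directed (src, dst) tuples."""
    nodes, edges = set(), []
    for topic, queue in subscriptions:
        nodes.update([topic, queue])
        edges.append((topic, queue))
    for queue, fn in triggers:
        nodes.update([queue, fn])
        edges.append((queue, fn))
    return nodes, edges

# The three-dot example from the request: X -> Y -> Z
nodes, edges = build_event_flow_graph([("sns:X", "sqs:Y")],
                                      [("sqs:Y", "lambda:Z")])
```

Feeding real subscription and trigger listings (e.g. from collected account metadata) into such a structure would yield exactly the X→Y, Y→Z picture the record describes.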
55,982
6,497,577,365
IssuesEvent
2017-08-22 14:27:23
jiscdev/data-explorer
https://api.github.com/repos/jiscdev/data-explorer
reopened
Module view -- VLE content use -- scale on bottom ⚖104
accepted @ high priority feature highcharts please test released on dev
glos: VLE content bar chart – scale only appears at the bottom, so would be useful to have at the top e.g. Introduction to Business Law has > 30 items so you can’t gauge the scale w/o scrolling Noted on a smaller module depths of bars increase so still had to scroll to see scale at the bottom Possible freeze plan for scale so it can always be seen.
1.0
Module view -- VLE content use -- scale on bottom ⚖104 - glos: VLE content bar chart – scale only appears at the bottom, so would be useful to have at the top e.g. Introduction to Business Law has > 30 items so you can’t gauge the scale w/o scrolling Noted on a smaller module depths of bars increase so still had to scroll to see scale at the bottom Possible freeze plan for scale so it can always be seen.
non_main
module view vle content use scale on bottom ⚖ glos vle content bar chart – scale only appears at the bottom so would be useful to have at the top e g introduction to business law has items so you can’t gauge the scale w o scrolling noted on a smaller module depths of bars increase so still had to scroll to see scale at the bottom possible freeze plan for scale so it can always be seen
0
1,466
6,364,268,812
IssuesEvent
2017-07-31 19:16:29
ansible/ansible
https://api.github.com/repos/ansible/ansible
closed
Could not find imported module support code for 'Ansible.ModuleUtils.PowerShellLegacy'
affects_2.4 bug_report module needs_maintainer support:core windows
<!--- Verify first that your issue/request is not already reported on GitHub. Also test if the latest release, and master branch are affected too. --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the module/plugin/task/feature --> Windows support ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.4.0 config file = /etc/ansible/ansible.cfg configured module search path = [u'/home/test/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible executable location = /usr/local/bin/ansible python version = 2.7.12 (default, Nov 19 2016, 06:48:10) [GCC 5.4.0 20160609] ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> Irrelevant ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. Also mention the specific version of what you are trying to control, e.g. if this is a network bug the version of firmware on the network device. --> Ubuntu 16.04.2 (WSL) control machine to Windows Server 2016 with WMF 5.1 ##### SUMMARY <!--- Explain the problem briefly --> Most modules that can be run on Windows fail in latest 2.4 `devel` branch: all `win_*` I tried and `setup` too. However, `raw` does work. Switching to ...ansible.git@stable-2.3 everything works again. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem, using a minimal test-case. For new features, show how the feature would be used. 
--> <!--- Paste example playbooks or commands between quotes below --> ```sh sudo -H pip install git+https://github.com/ansible/ansible.git@devel ansible -i dev -m setup [windows_target] ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> Successful task execution ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes below --> ``` windows-target | FAILED! => { "failed": true, "msg": "Could not find imported module support code for 'Ansible.ModuleUtils.PowerShellLegacy'." } ```
True
Could not find imported module support code for 'Ansible.ModuleUtils.PowerShellLegacy' - <!--- Verify first that your issue/request is not already reported on GitHub. Also test if the latest release, and master branch are affected too. --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the module/plugin/task/feature --> Windows support ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.4.0 config file = /etc/ansible/ansible.cfg configured module search path = [u'/home/test/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible executable location = /usr/local/bin/ansible python version = 2.7.12 (default, Nov 19 2016, 06:48:10) [GCC 5.4.0 20160609] ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> Irrelevant ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. Also mention the specific version of what you are trying to control, e.g. if this is a network bug the version of firmware on the network device. --> Ubuntu 16.04.2 (WSL) control machine to Windows Server 2016 with WMF 5.1 ##### SUMMARY <!--- Explain the problem briefly --> Most modules that can be run on Windows fail in latest 2.4 `devel` branch: all `win_*` I tried and `setup` too. However, `raw` does work. Switching to ...ansible.git@stable-2.3 everything works again. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem, using a minimal test-case. For new features, show how the feature would be used. 
--> <!--- Paste example playbooks or commands between quotes below --> ```sh sudo -H pip install git+https://github.com/ansible/ansible.git@devel ansible -i dev -m setup [windows_target] ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> Successful task execution ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes below --> ``` windows-target | FAILED! => { "failed": true, "msg": "Could not find imported module support code for 'Ansible.ModuleUtils.PowerShellLegacy'." } ```
main
could not find imported module support code for ansible moduleutils powershelllegacy verify first that your issue request is not already reported on github also test if the latest release and master branch are affected too issue type bug report component name windows support ansible version ansible config file etc ansible ansible cfg configured module search path ansible python module location usr local lib dist packages ansible executable location usr local bin ansible python version default nov configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables irrelevant os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific also mention the specific version of what you are trying to control e g if this is a network bug the version of firmware on the network device ubuntu wsl control machine to windows server with wmf summary most modules that can be run on windows fail in latest devel branch all win i tried and setup too however raw does work switching to ansible git stable everything works again steps to reproduce for bugs show exactly how to reproduce the problem using a minimal test case for new features show how the feature would be used sh sudo h pip install git ansible i dev m setup expected results successful task execution actual results windows target failed failed true msg could not find imported module support code for ansible moduleutils powershelllegacy
1
2,702
5,557,721,838
IssuesEvent
2017-03-24 12:56:50
DynareTeam/dynare
https://api.github.com/repos/DynareTeam/dynare
closed
Add interface for #1372
enhancement preprocessor
@houtanb To identify the needed modifications look at #1372 and the new entries in the reference manual.
1.0
Add interface for #1372 - @houtanb To identify the needed modifications look at #1372 and the new entries in the reference manual.
non_main
add interface for houtanb to identify the needed modifications look at and the new entries in the reference manual
0
34,486
7,452,107,073
IssuesEvent
2018-03-29 07:03:55
kerdokullamae/test_koik_issued
https://api.github.com/repos/kerdokullamae/test_koik_issued
closed
Lisada kirjeldusüksustele lingid välistesse süsteemidesse
P: high R: fixed T: defect
**Reported by sven syld on 17 May 2013 08:30 UTC** '''Object''' Lingid välistesse süsteemidesse (nt spec lk 33) '''Todo''' Lisada KÜ detail- (spec lk 33) ja nimekirjavaateisse (lk 26) lingid välistesse süsteemidesse.
1.0
Lisada kirjeldusüksustele lingid välistesse süsteemidesse - **Reported by sven syld on 17 May 2013 08:30 UTC** '''Object''' Lingid välistesse süsteemidesse (nt spec lk 33) '''Todo''' Lisada KÜ detail- (spec lk 33) ja nimekirjavaateisse (lk 26) lingid välistesse süsteemidesse.
non_main
lisada kirjeldusüksustele lingid välistesse süsteemidesse reported by sven syld on may utc object lingid välistesse süsteemidesse nt spec lk todo lisada kü detail spec lk ja nimekirjavaateisse lk lingid välistesse süsteemidesse
0
523,050
15,171,461,696
IssuesEvent
2021-02-13 03:20:30
crombird/meta
https://api.github.com/repos/crombird/meta
closed
Track parent pages
priority/3-medium type/feature-request
It'd be pretty neat if Crom were able to track to which page a give page is parented, if any. An application consuming the API would then be able to construct full parent-child relationships if it maintains a database of all articles, but if not, it might be useful to also expose for each page a list of pages that consider it their parent.
1.0
Track parent pages - It'd be pretty neat if Crom were able to track to which page a give page is parented, if any. An application consuming the API would then be able to construct full parent-child relationships if it maintains a database of all articles, but if not, it might be useful to also expose for each page a list of pages that consider it their parent.
non_main
track parent pages it d be pretty neat if crom were able to track to which page a give page is parented if any an application consuming the api would then be able to construct full parent child relationships if it maintains a database of all articles but if not it might be useful to also expose for each page a list of pages that consider it their parent
0
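The feature request above can be made concrete with a minimal sketch: assuming the consuming application keeps a page → parent map (all page names below are invented), inverting that map yields the full parent-child relationships the record describes.

```python
from collections import defaultdict

def build_tree(parents):
    """Invert a page -> parent mapping into parent -> [children].

    `parents` maps each page name to its parent page, or None for
    top-level pages. Page names here are hypothetical examples.
    """
    children = defaultdict(list)
    for page, parent in parents.items():
        if parent is not None:
            children[parent].append(page)
    return dict(children)

pages = {
    "scp-002": "scp-series",
    "scp-003": "scp-series",
    "scp-series": None,
}
print(build_tree(pages))  # {'scp-series': ['scp-002', 'scp-003']}
```

The inverse view (all pages that consider a given page their parent) falls out of the same dictionary without a second pass over the data.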
240,811
20,074,597,516
IssuesEvent
2022-02-04 11:13:27
microsoft/FluidFramework
https://api.github.com/repos/microsoft/FluidFramework
opened
Enable new binary wire format in stress tests.
area: test
## Work Item Enable the new odsp fluid binary wire format to be tested in stress tests. <!-- By filing an Issue, you are expected to comply with the Code of Conduct: https://github.com/microsoft/FluidFramework/blob/main/CODE_OF_CONDUCT.md --> <!-- Lastly, be sure to preview your issue before saving. Thanks! -->
1.0
Enable new binary wire format in stress tests. - ## Work Item Enable the new odsp fluid binary wire format to be tested in stress tests. <!-- By filing an Issue, you are expected to comply with the Code of Conduct: https://github.com/microsoft/FluidFramework/blob/main/CODE_OF_CONDUCT.md --> <!-- Lastly, be sure to preview your issue before saving. Thanks! -->
non_main
enable new binary wire format in stress tests work item enable the new odsp fluid binary wire format to be tested in stress tests
0
286,742
24,779,937,895
IssuesEvent
2022-10-24 03:15:21
elastic/elasticsearch
https://api.github.com/repos/elastic/elasticsearch
opened
[CI] AutoscalingCalculateCapacityServiceTests testContext failing
>test-failure :Distributed/Autoscaling
It failed on my PR, but is reproducible on the main branch. **Build scan:** https://gradle-enterprise.elastic.co/s/it52kxbr5nvyw/tests/:x-pack:plugin:autoscaling:test/org.elasticsearch.xpack.autoscaling.capacity.AutoscalingCalculateCapacityServiceTests/testContext **Reproduction line:** `./gradlew ':x-pack:plugin:autoscaling:test' --tests "org.elasticsearch.xpack.autoscaling.capacity.AutoscalingCalculateCapacityServiceTests.testContext" -Dtests.seed=BA22DB9E59A3CBC0 -Dtests.locale=en-AU -Dtests.timezone=Pacific/Samoa -Druntime.java=17` **Applicable branches:** main **Reproduces locally?:** Yes **Failure history:** https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.xpack.autoscaling.capacity.AutoscalingCalculateCapacityServiceTests&tests.test=testContext **Failure excerpt:** ``` java.lang.NullPointerException: Cannot invoke "org.elasticsearch.xpack.autoscaling.capacity.AutoscalingCapacity.node()" because the return value of "org.elasticsearch.xpack.autoscaling.capacity.AutoscalingDeciderContext.currentCapacity()" is null at __randomizedtesting.SeedInfo.seed([BA22DB9E59A3CBC0:2C172B334C87EC0E]:0) at org.elasticsearch.xpack.autoscaling.capacity.AutoscalingCalculateCapacityServiceTests.testContext(AutoscalingCalculateCapacityServiceTests.java:225) at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2) at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:568) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.tests.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:44) at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43) at org.apache.lucene.tests.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45) at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60) at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:843) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:490) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902) at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.tests.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.tests.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43) at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44) at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60) at org.apache.lucene.tests.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390) at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:850) at java.lang.Thread.run(Thread.java:833) ```
1.0
[CI] AutoscalingCalculateCapacityServiceTests testContext failing - It failed on my PR, but is reproducible on the main branch. **Build scan:** https://gradle-enterprise.elastic.co/s/it52kxbr5nvyw/tests/:x-pack:plugin:autoscaling:test/org.elasticsearch.xpack.autoscaling.capacity.AutoscalingCalculateCapacityServiceTests/testContext **Reproduction line:** `./gradlew ':x-pack:plugin:autoscaling:test' --tests "org.elasticsearch.xpack.autoscaling.capacity.AutoscalingCalculateCapacityServiceTests.testContext" -Dtests.seed=BA22DB9E59A3CBC0 -Dtests.locale=en-AU -Dtests.timezone=Pacific/Samoa -Druntime.java=17` **Applicable branches:** main **Reproduces locally?:** Yes **Failure history:** https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.xpack.autoscaling.capacity.AutoscalingCalculateCapacityServiceTests&tests.test=testContext **Failure excerpt:** ``` java.lang.NullPointerException: Cannot invoke "org.elasticsearch.xpack.autoscaling.capacity.AutoscalingCapacity.node()" because the return value of "org.elasticsearch.xpack.autoscaling.capacity.AutoscalingDeciderContext.currentCapacity()" is null at __randomizedtesting.SeedInfo.seed([BA22DB9E59A3CBC0:2C172B334C87EC0E]:0) at org.elasticsearch.xpack.autoscaling.capacity.AutoscalingCalculateCapacityServiceTests.testContext(AutoscalingCalculateCapacityServiceTests.java:225) at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2) at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:568) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.tests.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:44) at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43) at org.apache.lucene.tests.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45) at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60) at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:843) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:490) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902) at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.tests.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.tests.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43) at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44) at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60) at org.apache.lucene.tests.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390) at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:850) at java.lang.Thread.run(Thread.java:833) ```
non_main
autoscalingcalculatecapacityservicetests testcontext failing it failed on my pr but is reproducible on the main branch build scan reproduction line gradlew x pack plugin autoscaling test tests org elasticsearch xpack autoscaling capacity autoscalingcalculatecapacityservicetests testcontext dtests seed dtests locale en au dtests timezone pacific samoa druntime java applicable branches main reproduces locally yes failure history failure excerpt java lang nullpointerexception cannot invoke org elasticsearch xpack autoscaling capacity autoscalingcapacity node because the return value of org elasticsearch xpack autoscaling capacity autoscalingdecidercontext currentcapacity is null at randomizedtesting seedinfo seed at org elasticsearch xpack autoscaling capacity autoscalingcalculatecapacityservicetests testcontext autoscalingcalculatecapacityservicetests java at jdk internal reflect nativemethodaccessorimpl nativemethodaccessorimpl java at jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at com carrotsearch randomizedtesting randomizedrunner invoke randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testrulesetupteardownchained evaluate testrulesetupteardownchained java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene tests util testrulethreadandtestname evaluate testrulethreadandtestname java at org apache lucene tests util testruleignoreaftermaxfailures evaluate 
testruleignoreaftermaxfailures java at org apache lucene tests util testrulemarkfailure evaluate testrulemarkfailure java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol forktimeoutingtask threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol evaluate threadleakcontrol java at com carrotsearch randomizedtesting randomizedrunner runsingletest randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testrulestoreclassname evaluate testrulestoreclassname java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testruleassertionsrequired evaluate testruleassertionsrequired java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene tests util testrulemarkfailure evaluate testrulemarkfailure java at org apache lucene tests util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene tests util 
testruleignoretestsuites evaluate testruleignoretestsuites java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol lambda forktimeoutingtask threadleakcontrol java at java lang thread run thread java
0
91,531
11,516,833,462
IssuesEvent
2020-02-14 06:35:11
cockpit-project/cockpit
https://api.github.com/repos/cockpit-project/cockpit
closed
add 802.1x security
enhancement needsdesign review-2019-10
we need to implement 802.1x security for ethernet and bond/team/bridge slaves. This is pretty crucial.. in nm-connection-editor we support MD5, TLS, TTLS (with ton of inner auth protocols), PEAP (at least GTC, MSCHAP2, and MD5 based authentication), FAST and password based (w/o md5) login.
1.0
add 802.1x security - we need to implement 802.1x security for ethernet and bond/team/bridge slaves. This is pretty crucial.. in nm-connection-editor we support MD5, TLS, TTLS (with ton of inner auth protocols), PEAP (at least GTC, MSCHAP2, and MD5 based authentication), FAST and password based (w/o md5) login.
non_main
add security we need to implement security for ethernet and bond team bridge slaves this is pretty crucial in nm connection editor we support tls ttls with ton of inner auth protocols peap at least gtc and based authentication fast and password based w o login
0
129,187
12,401,102,619
IssuesEvent
2020-05-21 09:11:43
jmtc7/autoware-course
https://api.github.com/repos/jmtc7/autoware-course
closed
Company links in README
documentation
Add hyper-references to the websites of each company listed in the _Collaborators_ section of the repo's README.
1.0
Company links in README - Add hyper-references to the websites of each company listed in the _Collaborators_ section of the repo's README.
non_main
company links in readme add hyper references to the websites of each company listed in the collaborators section of the repo s readme
0
2,785
9,985,070,075
IssuesEvent
2019-07-10 15:44:27
dgets/lasttime
https://api.github.com/repos/dgets/lasttime
closed
Implement loop around consolidation routine for full database consolidation
enhancement maintainability
Issue #131 consolidates the database with the behavior that I was looking for, but does not fully account for further consolidations of previously consolidated data that may still be possible. Implement a `while` loop around here in order to keep processing this loop multiple times until all possible consolidations/compression of the data has been accomplished. This is much better behavior than requiring the user to consolidate multiple times until no further consolidations are possible.
True
Implement loop around consolidation routine for full database consolidation - Issue #131 consolidates the database with the behavior that I was looking for, but does not fully account for further consolidations of previously consolidated data that may still be possible. Implement a `while` loop around here in order to keep processing this loop multiple times until all possible consolidations/compression of the data has been accomplished. This is much better behavior than requiring the user to consolidate multiple times until no further consolidations are possible.
main
implement loop around consolidation routine for full database consolidation issue consolidates the database with the behavior that i was looking for but does not fully account for further consolidations of previously consolidated data that may still be possible implement a while loop around here in order to keep processing this loop multiple times until all possible consolidations compression of the data has been accomplished this is much better behavior than requiring the user to consolidate multiple times until no further consolidations are possible
1
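The fixed-point idea described in this record — repeat the one-pass consolidation until a pass produces no further change — can be sketched as follows. `merge_adjacent_pair` is a made-up stand-in for the project's actual single-pass routine, not its real code:

```python
def consolidate_fully(records, consolidate_once):
    """Repeat a single-pass consolidation until it stops changing the data.

    `consolidate_once` must return the (possibly smaller) list of records.
    """
    while True:
        merged = consolidate_once(records)
        if merged == records:  # fixed point: no further consolidation possible
            return merged
        records = merged

# Toy one-pass routine: merges at most one adjacent equal pair per call,
# mimicking a consolidation that may leave further work behind.
def merge_adjacent_pair(xs):
    for i in range(len(xs) - 1):
        if xs[i] == xs[i + 1]:
            return xs[:i] + [xs[i]] + xs[i + 2:]
    return xs

print(consolidate_fully([1, 1, 1, 2], merge_adjacent_pair))  # [1, 2]
```

Wrapping the existing routine this way gives exactly the behavior the issue asks for: the user runs consolidation once and the loop keeps going until no compression remains.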
480,113
13,823,535,573
IssuesEvent
2020-10-13 07:09:40
numbersprotocol/starling-capture
https://api.github.com/repos/numbersprotocol/starling-capture
opened
Swipe to View Next/Prev Proof Details
priority:medium uiux
Implement gesture to swipe between next/prev proof details.
1.0
Swipe to View Next/Prev Proof Details - Implement gesture to swipe between next/prev proof details.
non_main
swipe to view next prev proof details implement gesture to swipe between next prev proof details
0
105,041
16,623,634,634
IssuesEvent
2021-06-03 06:46:19
Thanraj/OpenSSL_1.0.1
https://api.github.com/repos/Thanraj/OpenSSL_1.0.1
opened
CVE-2013-6449 (Medium) detected in opensslOpenSSL_1_0_1
security vulnerability
## CVE-2013-6449 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>opensslOpenSSL_1_0_1</b></p></summary> <p> <p>Akamai fork of openssl master.</p> <p>Library home page: <a href=https://github.com/akamai/openssl.git>https://github.com/akamai/openssl.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/Thanraj/OpenSSL_1.0.1/commit/f1fe40536a9d3c961cc1415e9dd6d4fd002b61dc">f1fe40536a9d3c961cc1415e9dd6d4fd002b61dc</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>OpenSSL_1.0.1/ssl/s3_lib.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>OpenSSL_1.0.1/ssl/s3_lib.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>OpenSSL_1.0.1/ssl/s3_lib.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The ssl_get_algorithm2 function in ssl/s3_lib.c in OpenSSL before 1.0.2 obtains a certain version number from an incorrect data structure, which allows remote attackers to cause a denial of service (daemon crash) via crafted traffic from a TLS 1.2 client. 
<p>Publish Date: 2013-12-23 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2013-6449>CVE-2013-6449</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>4.3</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2013-6449">https://nvd.nist.gov/vuln/detail/CVE-2013-6449</a></p> <p>Release Date: 2013-12-23</p> <p>Fix Resolution: 1.0.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2013-6449 (Medium) detected in opensslOpenSSL_1_0_1 - ## CVE-2013-6449 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>opensslOpenSSL_1_0_1</b></p></summary> <p> <p>Akamai fork of openssl master.</p> <p>Library home page: <a href=https://github.com/akamai/openssl.git>https://github.com/akamai/openssl.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/Thanraj/OpenSSL_1.0.1/commit/f1fe40536a9d3c961cc1415e9dd6d4fd002b61dc">f1fe40536a9d3c961cc1415e9dd6d4fd002b61dc</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>OpenSSL_1.0.1/ssl/s3_lib.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>OpenSSL_1.0.1/ssl/s3_lib.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>OpenSSL_1.0.1/ssl/s3_lib.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The ssl_get_algorithm2 function in ssl/s3_lib.c in OpenSSL before 1.0.2 obtains a certain version number from an incorrect data structure, which allows remote attackers to cause a denial of service (daemon crash) via crafted traffic from a TLS 1.2 client. 
<p>Publish Date: 2013-12-23 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2013-6449>CVE-2013-6449</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>4.3</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2013-6449">https://nvd.nist.gov/vuln/detail/CVE-2013-6449</a></p> <p>Release Date: 2013-12-23</p> <p>Fix Resolution: 1.0.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_main
cve medium detected in opensslopenssl cve medium severity vulnerability vulnerable library opensslopenssl akamai fork of openssl master library home page a href found in head commit a href found in base branch master vulnerable source files openssl ssl lib c openssl ssl lib c openssl ssl lib c vulnerability details the ssl get function in ssl lib c in openssl before obtains a certain version number from an incorrect data structure which allows remote attackers to cause a denial of service daemon crash via crafted traffic from a tls client publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
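Unrelated to the advisory text itself, a quick sanity check for the suggested fix (upgrade to 1.0.2) is to ask a Python interpreter which OpenSSL it was linked against and compare versions — a rough sketch, since it only inspects the library Python itself uses:

```python
import ssl

def openssl_is_patched(min_version=(1, 0, 2)):
    """Report whether the interpreter's linked OpenSSL is at or above
    the release that fixes CVE-2013-6449 (1.0.2, per the advisory)."""
    # OPENSSL_VERSION_INFO is a 5-tuple; the first three fields are
    # enough for this comparison.
    return ssl.OPENSSL_VERSION_INFO[:3] >= min_version

print(ssl.OPENSSL_VERSION, "- patched:", openssl_is_patched())
```

For the vulnerable checkout itself, the authoritative check is of course the library version actually built from the repository, not the system one.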
2,242
7,888,952,644
IssuesEvent
2018-06-28 00:58:05
react-navigation/react-navigation
https://api.github.com/repos/react-navigation/react-navigation
closed
The back button always closes the app after a reload due to app being closed from memory pressure
needs action from maintainer
Hello there! I originally posted this issue on Expo but maybe you guys have some thoughts about it. After the app is reloaded from background the back button always closes the app, it works great inside the Expo client but not in standalone apps. I built a version of the NavigationPlayground app where I can reproduce the issue: https://expo.io/@phorque/NavigationPlayground (the APK is here https://exp-shell-app-assets.s3-us-west-1.amazonaws.com/android%2F%40phorque%2FNavigationPlayground-dfc90ac0-5e97-11e8-9603-0a580a780605-signed.apk ) I also made a small test app with Sdk 27 here https://expo.io/@phorque/lol2 (the code can be found here https://github.com/phorque/test-expo-navigation-bug and the APK is here https://exp-shell-app-assets.s3-us-west-1.amazonaws.com/android%2F%40phorque%2Flol2-42cbd554-5ea1-11e8-bb3e-0a580a780305-signed.apk ) ### Environment ``` Environment: OS: Linux 4.13 Node: 8.11.1 Yarn: 1.2.1 npm: 5.6.0 Watchman: Not Found Xcode: N/A Android Studio: Not Found Packages: (wanted => installed) expo: 26.0.0 => 26.0.0 react: 16.3.0-alpha.1 => 16.3.0-alpha.1 react-native: https://github.com/expo/react-native/archive/sdk-26.0.0.tar.gz => 0.54.2 Diagnostics report: https://exp-xde-diagnostics.s3.amazonaws.com/phorque-a3331f72-2433-4d7a-8181-8d4487f63e7b.tar.gz ``` The issue occurs on Android standalone apps, I couldn't test it on iOS. ### Steps to Reproduce Every time the application reloads after it was backgrounded the back button always closes the app. The best way to reproduce it is to: * set the background process limit to "No background process" in developer options ; * open the app (ex NavigationPlayground) * switch to another app to put the first one in background * go back to the first app * click on a subroute (for example the "Stack example" in NavigationPlayground) * press the back button. 
The process is pretty much the same with my small test app : ##### Before background * open the app * click on profile * click on the back button * the app goes back to users ##### After the app is reloaded from background * click on profile * click on the back button * the app closes ### Expected Behavior I'd expect the app to goes back to the main menu. ### Actual Behavior The app close and never goes back to the main menu. ### Reproducible Demo https://expo.io/@phorque/NavigationPlayground and https://expo.io/@phorque/lol2 ``` import React from 'react'; import { StyleSheet, Text, View } from 'react-native'; import { createStackNavigator, createBottomTabNavigator } from 'react-navigation'; class Users extends React.Component { render() { return (<Text>Users</Text>) } } class Profile extends React.Component { render() { return (<Text>Profile</Text>) } } const Home = createBottomTabNavigator( { Users, Profile } ); const MainNavigator = createStackNavigator( { Home, } ); export default MainNavigator; ``` ### Stuff I already tried I tried a lot of stuff x) * pretty much all combinations of navigators * setting the `BackHandler` event by hand to make it always return true * setting the `BackHandler` event in a setInterval and make it always return true so I'm sure I wasn't hijacked by another library (of course react-navigation stops working with this solution, it was just to investigate) * make the `BackHandler` event of react-navigation always return true * I tried react-navigation ~1.5 and ~2.0 * I tried with Expo SDK26 and Expo SDK27 Before the application is backgrounded the `BackHandler` events are fired and I can successfully prevent the app from closing at all (it's not the behavior I intended at first, but I was pretty desperate), however if the application reloads after being backgrounded the `BackHandler` doesn't seem to fire at all and the app always close, whatever I do. AFAIK, all people who tested the app had the same behavior. 
I can also reproduce this issue in an emulator. Also, the NavigationPlayground on the Play Store works great, so I'm really confused :c Please let me know if you need more explanations or tests from me! Thanks a lot for your great work :3
True
The back button always closes the app after a reload due to app being closed from memory pressure - Hello there! I originally posted this issue on Expo but maybe you guys have some thoughts about it. After the app is reloaded from background the back button always closes the app, it works great inside the Expo client but not in standalone apps. I built a version of the NavigationPlayground app where I can reproduce the issue : https://expo.io/@phorque/NavigationPlayground (the APK is here https://exp-shell-app-assets.s3-us-west-1.amazonaws.com/android%2F%40phorque%2FNavigationPlayground-dfc90ac0-5e97-11e8-9603-0a580a780605-signed.apk ) I also made a small test app with Sdk 27 here https://expo.io/@phorque/lol2 (the code can be found here https://github.com/phorque/test-expo-navigation-bug and the APK is here https://exp-shell-app-assets.s3-us-west-1.amazonaws.com/android%2F%40phorque%2Flol2-42cbd554-5ea1-11e8-bb3e-0a580a780305-signed.apk ) ### Environment ``` Environment: OS: Linux 4.13 Node: 8.11.1 Yarn: 1.2.1 npm: 5.6.0 Watchman: Not Found Xcode: N/A Android Studio: Not Found Packages: (wanted => installed) expo: 26.0.0 => 26.0.0 react: 16.3.0-alpha.1 => 16.3.0-alpha.1 react-native: https://github.com/expo/react-native/archive/sdk-26.0.0.tar.gz => 0.54.2 Diagnostics report: https://exp-xde-diagnostics.s3.amazonaws.com/phorque-a3331f72-2433-4d7a-8181-8d4487f63e7b.tar.gz ``` The issue occurs on Android standalone app, I couldn't test it on iOS. ### Steps to Reproduce Every time the application reloads after it was backgrounded the back button always close the app. The best way to reproduce it is to: * set the background process limits to "No background process" in developper options ; * open the app (ex NavigationPlayground) * switch to another app to put the first one in background * go back to the first app * click on a subroute (for example the "Stack example" in NavigationPlayground) * press the back button. 
The process is pretty much the same with my small test app : ##### Before background * open the app * click on profile * click on the back button * the app goes back to users ##### After the app is reloaded from background * click on profile * click on the back button * the app closes ### Expected Behavior I'd expect the app to goes back to the main menu. ### Actual Behavior The app close and never goes back to the main menu. ### Reproducible Demo https://expo.io/@phorque/NavigationPlayground and https://expo.io/@phorque/lol2 ``` import React from 'react'; import { StyleSheet, Text, View } from 'react-native'; import { createStackNavigator, createBottomTabNavigator } from 'react-navigation'; class Users extends React.Component { render() { return (<Text>Users</Text>) } } class Profile extends React.Component { render() { return (<Text>Profile</Text>) } } const Home = createBottomTabNavigator( { Users, Profile } ); const MainNavigator = createStackNavigator( { Home, } ); export default MainNavigator; ``` ### Stuff I already tried I tried a lot of stuff x) * pretty much all combinations of navigators * setting the `BackHandler` event by hand to make it always return true * setting the `BackHandler` event in a setInterval and make it always return true so I'm sure I wasn't hijacked by another library (of course react-navigation stops working with this solution, it was just to investigate) * make the `BackHandler` event of react-navigation always return true * I tried react-navigation ~1.5 and ~2.0 * I tried with Expo SDK26 and Expo SDK27 Before the application is backgrounded the `BackHandler` events are fired and I can successfully prevent the app from closing at all (it's not the behavior I intended at first, but I was pretty desperate), however if the application reloads after being backgrounded the `BackHandler` doesn't seem to fire at all and the app always close, whatever I do. AFAIK, all people who tested the app had the same behavior. 
I can also reproduce this issue in an emulator. Also, the NavigationPlayground on the Play Store works great, so I'm really confused :c Please let me know if you need more explanations or tests from me! Thanks a lot for your great work :3
main
the back button always closes the app after a reload due to app being closed from memory pressure hello there i originally posted this issue on expo but maybe you guys have some thoughts about it after the app is reloaded from background the back button always closes the app it works great inside the expo client but not in standalone apps i built a version of the navigationplayground app where i can reproduce the issue the apk is here i also made a small test app with sdk here the code can be found here and the apk is here environment environment os linux node yarn npm watchman not found xcode n a android studio not found packages wanted installed expo react alpha alpha react native diagnostics report the issue occurs on android standalone app i couldn t test it on ios steps to reproduce every time the application reloads after it was backgrounded the back button always close the app the best way to reproduce it is to set the background process limits to no background process in developper options open the app ex navigationplayground switch to another app to put the first one in background go back to the first app click on a subroute for example the stack example in navigationplayground press the back button the process is pretty much the same with my small test app before background open the app click on profile click on the back button the app goes back to users after the app is reloaded from background click on profile click on the back button the app closes expected behavior i d expect the app to goes back to the main menu actual behavior the app close and never goes back to the main menu reproducible demo and import react from react import stylesheet text view from react native import createstacknavigator createbottomtabnavigator from react navigation class users extends react component render return users class profile extends react component render return profile const home createbottomtabnavigator users profile const mainnavigator createstacknavigator home 
export default mainnavigator stuff i already tried i tried a lot of stuff x pretty much all combinations of navigators setting the backhandler event by hand to make it always return true setting the backhandler event in a setinterval and make it always return true so i m sure i wasn t hijacked by another library of course react navigation stops working with this solution it was just to investigate make the backhandler event of react navigation always return true i tried react navigation and i tried with expo and expo before the application is backgrounded the backhandler events are fired and i can successfully prevent the app from closing at all it s not the behavior i intended at first but i was pretty desperate however if the application reloads after being backgrounded the backhandler doesn t seem to fire at all and the app always close whatever i do afaik all people who tested the app had the same behavior i also can reproduce this issue in an emulator also the navigationplayground on the playstore works great so i m really confused c please let me know if you need more explanations or tests from me thanks a lot for your great work
1
72,315
8,721,550,640
IssuesEvent
2018-12-09 00:36:29
cl8n/spotlights-application
https://api.github.com/repos/cl8n/spotlights-application
closed
Questions should be individually styled
design
For example, the beatmaps question should have a larger text box
1.0
Questions should be individually styled - For example, the beatmaps question should have a larger text box
non_main
questions should be individually styled for example the beatmaps question should have a larger text box
0
333,588
24,381,345,618
IssuesEvent
2022-10-04 08:06:34
openssl/openssl
https://api.github.com/repos/openssl/openssl
closed
Bug in man page server example for BIO_f_ssl
good first issue help wanted triaged: documentation
The master man page for BIO_f_ssl contains a server example which has a bug. After setting up the server to listen with BIO_do_accept() it immediately moves on to attempt a handshake instead of calling BIO_do_accept() again to await a connection. [manmaster/man3/BIO_f_ssl.html](https://www.openssl.org/docs/manmaster/man3/BIO_f_ssl.html)
1.0
Bug in man page server example for BIO_f_ssl - The master man page for BIO_f_ssl contains a server example which has a bug. After setting up the server to listen with BIO_do_accept() it immediately moves on to attempt a handshake instead of calling BIO_do_accept() again to await a connection. [manmaster/man3/BIO_f_ssl.html](https://www.openssl.org/docs/manmaster/man3/BIO_f_ssl.html)
non_main
bug in man page server example for bio f ssl the master man page for bio f ssl contains a server example which has a bug after setting up the server to listen with bio do accept it immediately moves on to attempt a handshake instead of calling bio do accept again to await a connection
0
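The BIO_f_ssl man-page fix above is in C's BIO API (call BIO_do_accept() a second time to wait for a connection before attempting the handshake), but the underlying listen-then-accept pattern can be illustrated with Python's stdlib socket module. This is an analogy, not the OpenSSL API: after setting up the listener, the server must block for an actual connection before doing any per-connection IO (the handshake, in the BIO case).

```python
import socket
import threading

def serve_one(ready: threading.Event, port_holder: list, result: list) -> None:
    """Accept one connection and record the first chunk of data received."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))   # step 1: set up the listener (cf. first BIO_do_accept)
    srv.listen(1)
    port_holder.append(srv.getsockname()[1])
    ready.set()
    conn, _ = srv.accept()       # step 2: wait for a connection (cf. second BIO_do_accept)
    result.append(conn.recv(1024))  # only now is per-connection IO valid
    conn.close()
    srv.close()

ready = threading.Event()
port_holder: list = []
result: list = []
t = threading.Thread(target=serve_one, args=(ready, port_holder, result))
t.start()
ready.wait()
cli = socket.create_connection(("127.0.0.1", port_holder[0]))
cli.sendall(b"hello")
cli.close()
t.join()
```

Skipping the explicit accept step and going straight to IO is exactly the shape of the bug the issue describes in the man-page server example.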
54,564
13,912,443,821
IssuesEvent
2020-10-20 18:52:50
jgeraigery/LocalCatalogManager
https://api.github.com/repos/jgeraigery/LocalCatalogManager
closed
CVE-2018-14718 (High) detected in jackson-databind-2.8.5.jar - autoclosed
security vulnerability
## CVE-2018-14718 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.5.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: LocalCatalogManager/lcm-server/pom.xml</p> <p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.5/jackson-databind-2.8.5.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.5/jackson-databind-2.8.5.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.5/jackson-databind-2.8.5.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.5/jackson-databind-2.8.5.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.8.5.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/jgeraigery/LocalCatalogManager/commit/b8c24e199f2d440dea3ce3cc2c66ada102d5d922">b8c24e199f2d440dea3ce3cc2c66ada102d5d922</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.7 might allow remote attackers to execute arbitrary code by leveraging failure to block the slf4j-ext class from polymorphic deserialization. 
<p>Publish Date: 2019-01-02 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14718>CVE-2018-14718</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-14718">https://nvd.nist.gov/vuln/detail/CVE-2018-14718</a></p> <p>Release Date: 2019-01-02</p> <p>Fix Resolution: 2.9.7</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.5","isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.8.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.9.7"}],"vulnerabilityIdentifier":"CVE-2018-14718","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.7 might allow remote attackers to execute arbitrary code by leveraging failure to block the slf4j-ext class from polymorphic 
deserialization.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14718","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
True
CVE-2018-14718 (High) detected in jackson-databind-2.8.5.jar - autoclosed - ## CVE-2018-14718 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.5.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: LocalCatalogManager/lcm-server/pom.xml</p> <p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.5/jackson-databind-2.8.5.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.5/jackson-databind-2.8.5.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.5/jackson-databind-2.8.5.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.5/jackson-databind-2.8.5.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.8.5.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/jgeraigery/LocalCatalogManager/commit/b8c24e199f2d440dea3ce3cc2c66ada102d5d922">b8c24e199f2d440dea3ce3cc2c66ada102d5d922</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.7 might allow remote attackers to execute arbitrary code by leveraging failure to block the slf4j-ext class from polymorphic deserialization. 
<p>Publish Date: 2019-01-02 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14718>CVE-2018-14718</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-14718">https://nvd.nist.gov/vuln/detail/CVE-2018-14718</a></p> <p>Release Date: 2019-01-02</p> <p>Fix Resolution: 2.9.7</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.5","isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.8.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.9.7"}],"vulnerabilityIdentifier":"CVE-2018-14718","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.7 might allow remote attackers to execute arbitrary code by leveraging failure to block the slf4j-ext class from polymorphic 
deserialization.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14718","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
non_main
cve high detected in jackson databind jar autoclosed cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file localcatalogmanager lcm server pom xml path to vulnerable library canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details fasterxml jackson databind x before might allow remote attackers to execute arbitrary code by leveraging failure to block the ext class from polymorphic deserialization publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails fasterxml jackson databind x before might allow remote attackers to execute arbitrary code by leveraging failure to block the ext class from polymorphic deserialization vulnerabilityurl
0
2,672
9,198,632,032
IssuesEvent
2019-03-07 13:09:11
Chromeroni/Hera-Chatbot
https://api.github.com/repos/Chromeroni/Hera-Chatbot
opened
Migration to Discord4Java V3.0
maintainability
**Describe the current situation / your motivation for the change** Hera currently uses Discord4Java V2.x. Just recently the new version of Discord4Java released and with it come major changes which should make us consider to migrate. **Describe the solution you'd like** I'd like to discuss and review the changes made in Discord4Java V3.0 and decide if we're going to migrate over to it. If yes, I'd like to define the boundaries of this change too (approximate effort needed, time constraints, etc.). **Additional context** Link to Discord4Java V3.0: https://github.com/Discord4J/Discord4J/releases/tag/3.0.0
True
Migration to Discord4Java V3.0 - **Describe the current situation / your motivation for the change** Hera currently uses Discord4Java V2.x. Just recently the new version of Discord4Java released and with it come major changes which should make us consider to migrate. **Describe the solution you'd like** I'd like to discuss and review the changes made in Discord4Java V3.0 and decide if we're going to migrate over to it. If yes, I'd like to define the boundaries of this change too (approximate effort needed, time constraints, etc.). **Additional context** Link to Discord4Java V3.0: https://github.com/Discord4J/Discord4J/releases/tag/3.0.0
main
migration to describe the current situation your motivation for the change hera currently uses x just recently the new version of released and with it come major changes which should make us consider to migrate describe the solution you d like i d like to discuss and review the changes made in and decide if we re going to migrate over to it if yes i d like to define the boundaries of this change too approximate effort needed time constraints etc additional context link to
1
648
4,160,363,533
IssuesEvent
2016-06-17 13:01:46
antigenomics/vdjdb-db
https://api.github.com/repos/antigenomics/vdjdb-db
opened
add scoring for papers with putative MHC-restriction
maintainance
Add a rule for scoring of CDRs from papers where it is not possible to determine the MHC restriction for the antigen explicitly. It's only about such papers: - antigen-loaded-target or antigen-expressing-target isolation/verification - detection of clones on the basis of IFNg or TNF release - antigen-presenting cells should be positive for the putative MHC allele presenting the antigen - the antigen peptide should be from a "commonly used" pool, like the NLV peptide E.g. clonotypes reactive against A*02-positive cells loaded with the NLV peptide Suggestion: add the phrase "putative MHC-restriction" to method.isolation for such seqs; add the putative MHC allele to the mhc.a/mhc.b column(s)
True
add scoring for papers with putative MHC-restriction - Add a rule for scoring of CDRs from papers where it is not possible to determine the MHC restriction for the antigen explicitly. It's only about such papers: - antigen-loaded-target or antigen-expressing-target isolation/verification - detection of clones on the basis of IFNg or TNF release - antigen-presenting cells should be positive for the putative MHC allele presenting the antigen - the antigen peptide should be from a "commonly used" pool, like the NLV peptide E.g. clonotypes reactive against A*02-positive cells loaded with the NLV peptide Suggestion: add the phrase "putative MHC-restriction" to method.isolation for such seqs; add the putative MHC allele to the mhc.a/mhc.b column(s)
main
add scoring for papers with putative mhc restriction add rule for scoring of cdr from papers where not possible to determine mhc restriction for antigen explicitly it s only about such papers antigen loaded target or antigen expressing target isolation verification detection of clones on the basis of ifng or tnf release antigen presenting cells should be positive for putative mhc allele presenting the antigen antigen peptide should be from commonly used pool like nlv peptide e g clonotypes reactive against a positive cells loaded by nlv peptide suggestion add phrase putative mhc restriction to method isolation for such seqs add putative mhc allele to mhc a mhc b column s
1
2,039
6,884,403,361
IssuesEvent
2017-11-21 12:55:34
sapcc/openstack-audit-middleware
https://api.github.com/repos/sapcc/openstack-audit-middleware
closed
remove dependency to keystonemiddleware internals
maintainability
remove the dependency to the _common package which is used to read the applications oslo messaging configuration ``` from keystonemiddleware._common import config ```
True
remove dependency to keystonemiddleware internals - remove the dependency to the _common package which is used to read the applications oslo messaging configuration ``` from keystonemiddleware._common import config ```
main
remove dependency to keystonemiddleware internals remove the dependency to the common package which is used to read the applications oslo messaging configuration from keystonemiddleware common import config
1
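The row above removes an import of keystonemiddleware's private `_common` package, which was used only to read the application's oslo messaging configuration. A minimal sketch of reading such options with the stdlib `configparser` instead (the section and option names follow oslo.messaging conventions, but this sample config is hypothetical):

```python
import configparser
from io import StringIO

# Hypothetical oslo-style config for illustration; a real application
# would read its own .conf file from disk.
SAMPLE_CONF = """
[oslo_messaging_notifications]
driver = messagingv2
transport_url = rabbit://guest:guest@localhost:5672/
"""

def read_notification_options(fp) -> dict:
    """Read notification options without importing
    keystonemiddleware._common (a private, unstable package)."""
    parser = configparser.ConfigParser()
    parser.read_file(fp)
    section = "oslo_messaging_notifications"
    if not parser.has_section(section):
        return {}
    return dict(parser.items(section))

opts = read_notification_options(StringIO(SAMPLE_CONF))
```

Parsing the file directly (or, in a real OpenStack service, registering the options with oslo.config) keeps the middleware off keystonemiddleware internals, which can change without notice between releases.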
2,712
9,531,849,505
IssuesEvent
2019-04-29 17:01:47
codestation/qcma
https://api.github.com/repos/codestation/qcma
closed
Question: Raspberry Pi debs from 0.3.13
unmaintained
I have been trying to locate the debs you created for Raspberry Pi and VitaMTP but everywhere leads back to this link, which seems to be dead. http://codestation.nekmo.com/qcma/0.3.13/raspbian/ I am a beginner, so trying to compile the latest version for Pi has proved to be a more difficult task than expected. Do you still have these old versions somewhere? I plan to continue trying to compile it myself, but I would also like a working version in the meantime.
True
Question: Raspberry Pi debs from 0.3.13 - I have been trying to locate the debs you created for Raspberry Pi and VitaMTP but everywhere leads back to this link, which seems to be dead. http://codestation.nekmo.com/qcma/0.3.13/raspbian/ I am a beginner, so trying to compile the latest version for Pi has proved to be a more difficult task than expected. Do you still have these old versions somewhere? I plan to continue trying to compile it myself, but I would also like a working version in the meantime.
main
question raspberry pi debs from i have been trying to locate the debs you created for raspberry pi and vitamtp but everywhere leads back to this link which seems to be dead i am a beginner so trying to compile the latest version for pi has proved to be a more difficult task than expected do you still have these old versions somewhere i plan to continue trying to compile it myself but i would also like a working version in the meantime
1
983
4,750,329,485
IssuesEvent
2016-10-22 09:05:55
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
Gem module installs executables in directory different from gem executable.
affects_2.0 bug_report waiting_on_maintainer
<!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> gem ##### ANSIBLE VERSION ``` <!--- Paste verbatim output from “ansible --version” between quotes --> ansible 2.0.1.0``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> Running Ansible on Mac OS, remote host is Ubuntu 12.04 ##### SUMMARY <!--- Explain the problem briefly --> When installing a gem as root, the gem executable (`jekyll` in my case) goes into `/root/.gem/ruby/2.3.0/bin`, and is therefore not in the PATH unless the path is specifically configured to include it. In contrast, running `gem install` on the command line puts the executable in `/usr/local/bin` which is already in the PATH. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> I used this code to use the gem module: ``` gem: name=jekyll state=latest ``` ...and this to install from the command line: ``` gem install jekyll ``` <!--- Paste example playbooks or commands between quotes --> <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> ##### ACTUAL RESULTS <!--- What actually happened? If possible run with high verbosity (-vvvv) --> ``` <!--- Paste verbatim command output between quotes --> ```
True
Gem module installs executables in directory different from gem executable. - <!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> gem ##### ANSIBLE VERSION ``` <!--- Paste verbatim output from “ansible --version” between quotes --> ansible 2.0.1.0``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> Running Ansible on Mac OS, remote host is Ubuntu 12.04 ##### SUMMARY <!--- Explain the problem briefly --> When installing a gem as root, the gem executable (`jekyll` in my case) goes into `/root/.gem/ruby/2.3.0/bin`, and is therefore not in the PATH unless the path is specifically configured to include it. In contrast, running `gem install` on the command line puts the executable in `/usr/local/bin` which is already in the PATH. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> I used this code to use the gem module: ``` gem: name=jekyll state=latest ``` ...and this to install from the command line: ``` gem install jekyll ``` <!--- Paste example playbooks or commands between quotes --> <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> ##### ACTUAL RESULTS <!--- What actually happened? If possible run with high verbosity (-vvvv) --> ``` <!--- Paste verbatim command output between quotes --> ```
main
gem module installs executables in directory different from gem executable issue type bug report component name gem ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific running ansible on mac os remote host is ubuntu summary when installing a gem as root the gem executable jekyll in my case goes into root gem ruby bin and is therefore not in the path unless the path is specifically configured to include it in contrast running gem install on the command line puts the executable in usr local bin which is already in the path steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used i used this code to use the gem module gem name jekyll state latest and this to install from the command line gem install jekyll expected results actual results
1
4,537
23,617,867,649
IssuesEvent
2022-08-24 17:31:17
deislabs/spiderlightning
https://api.github.com/repos/deislabs/spiderlightning
closed
align our interfaces with dapr
⭐ epic 🚧 maintainer issue
**Describe the solution you'd like** We need to align our interfaces with what dapr provides. **Additional context** n/a
True
align our interfaces with dapr - **Describe the solution you'd like** We need to align our interfaces with what dapr provides. **Additional context** n/a
main
align our interfaces with dapr describe the solution you d like we need to align our interfaces with what dapr provides additional context n a
1
3,268
12,465,112,513
IssuesEvent
2020-05-28 13:34:20
jailmanager/jailman
https://api.github.com/repos/jailmanager/jailman
closed
Add Bazarr (subtitle-grabber)
Feature No-Active-Maintainer
It's basically a background subtitle-grabber for anything that sonarr and radarr have indexed. https://github.com/morpheus65535/bazarr :)
True
Add Bazarr (subtitle-grabber) - It's basically a background subtitle-grabber for anything that sonarr and radarr have indexed. https://github.com/morpheus65535/bazarr :)
main
add bazarr subtitle grabber it s basically a background subtitle grabber for anything that sonarr and radarr have indexed
1
2,893
10,319,647,673
IssuesEvent
2019-08-30 18:07:14
backdrop-ops/contrib
https://api.github.com/repos/backdrop-ops/contrib
closed
Porting Amount module
Maintainer application
I am porting the Amount module to Backdrop. [Link to issue](https://www.drupal.org/node/2874301). Consider this issue my application to join the Backdrop Contrib group.
True
Porting Amount module - I am porting the Amount module to Backdrop. [Link to issue](https://www.drupal.org/node/2874301). Consider this issue my application to join the Backdrop Contrib group.
main
porting amount module i am porting the amount module to backdrop consider this issue my application to join the backdrop contrib group
1
5,356
26,967,911,711
IssuesEvent
2023-02-09 00:39:31
viperproject/VerifiedSCION
https://api.github.com/repos/viperproject/VerifiedSCION
closed
The CI does not update the gobra-cache on cache-hits
wontfix maintainability CI
Unfortunately, there is no way in Gobra to force an update of the cache when there are cache hits. This is a known issue of the `cache` github action for which there has been a pending PR for a long time (https://github.com/actions/cache/pull/498#issuecomment-753804797).
True
The CI does not update the gobra-cache on cache-hits - Unfortunately, there is no way in Gobra to force an update of the cache when there are cache hits. This is a known issue of the `cache` github action for which there has been a pending PR for a long time (https://github.com/actions/cache/pull/498#issuecomment-753804797).
main
the ci does not update the gobra cache on cache hits unfortunately there is no way in gobra to force an update of the cache when there are cache hits this is a known issue of the cache github action for which there has been a pending pr for a long time
1
2,078
7,045,382,870
IssuesEvent
2018-01-01 18:57:44
caskroom/homebrew-cask
https://api.github.com/repos/caskroom/homebrew-cask
closed
`brew cask upgrade` breaks Launchpad shortcuts
awaiting maintainer feedback
#### General troubleshooting steps - [x] None of the templates was appropriate for my issue, or I’m not sure. - [x] I understand that [if I ignore these instructions, my issue may be closed without review](https://github.com/caskroom/homebrew-cask/blob/master/doc/faq/closing_issues_without_review.md). #### Description of issue The new function `brew cask upgrade` breaks the icon/link in Launchpad for the upgraded app(s). The icon becomes a question mark and the shortcut stops functioning; sometimes a new icon is placed in Launchpad for the updated app at the first open slot in addition to the broken icon. The broken icons might go away when rebooting, I'm not sure the exact behavior yet. This was never a problem with the unsupported old method I used before, `brew cask install --force <app_to_update>`, which seems to simply overwrites the old app in place. Is there a way to fix this with the new behavior? ![screen shot 2017-12-19 at 1 19 23 pm](https://user-images.githubusercontent.com/4229542/34177217-131cfa6a-e4c0-11e7-9634-9c6c77f235cd.png)
True
`brew cask upgrade` breaks Launchpad shortcuts - #### General troubleshooting steps - [x] None of the templates was appropriate for my issue, or I’m not sure. - [x] I understand that [if I ignore these instructions, my issue may be closed without review](https://github.com/caskroom/homebrew-cask/blob/master/doc/faq/closing_issues_without_review.md). #### Description of issue The new function `brew cask upgrade` breaks the icon/link in Launchpad for the upgraded app(s). The icon becomes a question mark and the shortcut stops functioning; sometimes a new icon is placed in Launchpad for the updated app at the first open slot in addition to the broken icon. The broken icons might go away when rebooting, I'm not sure the exact behavior yet. This was never a problem with the unsupported old method I used before, `brew cask install --force <app_to_update>`, which seems to simply overwrites the old app in place. Is there a way to fix this with the new behavior? ![screen shot 2017-12-19 at 1 19 23 pm](https://user-images.githubusercontent.com/4229542/34177217-131cfa6a-e4c0-11e7-9634-9c6c77f235cd.png)
main
brew cask upgrade breaks launchpad shortcuts general troubleshooting steps none of the templates was appropriate for my issue or i’m not sure i understand that description of issue the new function brew cask upgrade breaks the icon link in launchpad for the upgraded app s the icon becomes a question mark and the shortcut stops functioning sometimes a new icon is placed in launchpad for the updated app at the first open slot in addition to the broken icon the broken icons might go away when rebooting i m not sure the exact behavior yet this was never a problem with the unsupported old method i used before brew cask install force which seems to simply overwrites the old app in place is there a way to fix this with the new behavior
1
5,044
25,849,304,148
IssuesEvent
2022-12-13 09:16:31
camunda/zeebe
https://api.github.com/repos/camunda/zeebe
closed
Enforce metadata and value type of LogAppendEntry
kind/toil area/maintainability
**Description** As a follow up to #11115, we would like to enforce writing `RecordMetadata` and `UnifiedRecordValue` to the logstreams. The logstreams is currently set up to allow writing any sequence of bytes for these two fields, but every single consumer expects to read back `RecordMetadata` and `UnifiedRecordValue`. This simply creates a potential for errors at the moment, and this flexibility is only used in tests. As part of this issue, we should: - [ ] Enforce that every entry _has_ metadata (right now it's optional) - [ ] Enforce that the metadata has a type `RecordMetadata` - [ ] Enforce that the value has a type `UnifiedRecordValue` - [ ] Remove the `MutableLogAppendEntry` in tests FYI, that last point is likely to make this issue very time consuming.
True
Enforce metadata and value type of LogAppendEntry - **Description** As a follow up to #11115, we would like to enforce writing `RecordMetadata` and `UnifiedRecordValue` to the logstreams. The logstreams is currently set up to allow writing any sequence of bytes for these two fields, but every single consumer expects to read back `RecordMetadata` and `UnifiedRecordValue`. This simply creates a potential for errors at the moment, and this flexibility is only used in tests. As part of this issue, we should: - [ ] Enforce that every entry _has_ metadata (right now it's optional) - [ ] Enforce that the metadata has a type `RecordMetadata` - [ ] Enforce that the value has a type `UnifiedRecordValue` - [ ] Remove the `MutableLogAppendEntry` in tests FYI, that last point is likely to make this issue very time consuming.
main
enforce metadata and value type of logappendentry description as a follow up to we would like to enforce writing recordmetadata and unifiedrecordvalue to the logstreams the logstreams is currently set up to allow writing any sequence of bytes for these two fields but every single consumer expects to read back recordmetadata and unifiedrecordvalue this simply creates a potential for errors at the moment and this flexibility is only used in tests as part of this issue we should enforce that every entry has metadata right now it s optional enforce that the metadata has a type recordmetadata enforce that the value has a type unifiedrecordvalue remove the mutablelogappendentry in tests fyi that last point is likely to make this issue very time consuming
1
4,751
24,509,427,320
IssuesEvent
2022-10-10 19:45:18
web3phl/bio
https://api.github.com/repos/web3phl/bio
closed
migrate the project to bio page
chore maintainers only hacktoberfest
This project will be migrated to the bio page since new homepage repo will be created. 🤝
True
migrate the project to bio page - This project will be migrated to the bio page since new homepage repo will be created. 🤝
main
migrate the project to bio page this project will be migrated to the bio page since new homepage repo will be created 🤝
1
283,843
21,334,860,577
IssuesEvent
2022-04-18 13:25:42
jcubic/jquery.terminal
https://api.github.com/repos/jcubic/jquery.terminal
reopened
Multi-word commands
question documentation
### I have question related to jQuery Terminal <!-- add your question here --> How do you make commands multiple words? I'm trying to make a joke "sudo rm -rf" command but when I try it, it only says "command sudo not fount"
1.0
Multi-word commands - ### I have question related to jQuery Terminal <!-- add your question here --> How do you make commands multiple words? I'm trying to make a joke "sudo rm -rf" command but when I try it, it only says "command sudo not fount"
non_main
multi word commands i have question related to jquery terminal how do you make commands multiple words i m trying to make a joke sudo rm rf command but when i try it it only says command sudo not fount
0
3,675
15,036,083,685
IssuesEvent
2021-02-02 14:52:09
IITIDIDX597/sp_2021_team1
https://api.github.com/repos/IITIDIDX597/sp_2021_team1
opened
Tagging dates to articles
Epic: 5 Maintaining the system Story Week 3
**Project Goal:** S Lab is a tailored integrative learning and collaboration platform for clinicians that combines the latest research and tacit knowledge gained from experience in a practical way, while at the same time foster deeper learning experiences in order to deliver better AbilityLab Patient care. **Hill Statement:** Individual Clinicians can reference relevant, continuously evolving information for their patient's therapy needs to self-manage their approach & patient care plan development in a single platform. **Sub-Hill Statements:** 1. The learning platform will be routinely updated with S Lab's own research advancements, as well as outside discoveries and best practices developed for rehabilitation treatments. ### **Story Details:** As an: administrator I want: to be able to add the date of the published article So that: clinicians can filter it according to the date
True
Tagging dates to articles - **Project Goal:** S Lab is a tailored integrative learning and collaboration platform for clinicians that combines the latest research and tacit knowledge gained from experience in a practical way, while at the same time foster deeper learning experiences in order to deliver better AbilityLab Patient care. **Hill Statement:** Individual Clinicians can reference relevant, continuously evolving information for their patient's therapy needs to self-manage their approach & patient care plan development in a single platform. **Sub-Hill Statements:** 1. The learning platform will be routinely updated with S Lab's own research advancements, as well as outside discoveries and best practices developed for rehabilitation treatments. ### **Story Details:** As an: administrator I want: to be able to add the date of the published article So that: clinicians can filter it according to the date
main
tagging dates to articles project goal s lab is a tailored integrative learning and collaboration platform for clinicians that combines the latest research and tacit knowledge gained from experience in a practical way while at the same time foster deeper learning experiences in order to deliver better abilitylab patient care hill statement individual clinicians can reference relevant continuously evolving information for their patient s therapy needs to self manage their approach patient care plan development in a single platform sub hill statements the learning platform will be routinely updated with s lab s own research advancements as well as outside discoveries and best practices developed for rehabilitation treatments story details as an administrator i want to be able to add the date of the published article so that clinicians can filter it according to the date
1
153,922
13,530,712,055
IssuesEvent
2020-09-15 20:22:04
fga-eps-mds/2020.1-Grupo2-wiki
https://api.github.com/repos/fga-eps-mds/2020.1-Grupo2-wiki
closed
Risk Management Plan Document
documentation eps
# Description As a manager, I would like the risk management document so that I know how to deal with adverse situations during the product's life cycle.
1.0
Risk Management Plan Document - # Description As a manager, I would like the risk management document so that I know how to deal with adverse situations during the product's life cycle.
non_main
risk management plan document description as a manager i would like the risk management document so that i know how to deal with adverse situations during the product s life cycle
0
64,643
18,781,064,713
IssuesEvent
2021-11-08 06:44:20
garglk/garglk
https://api.github.com/repos/garglk/garglk
closed
2011.1 for Mac selects wrong font
Type-Defect auto-migrated Priority-High
``` What steps will reproduce the problem? 1. The problem is intermittent, I haven't found a trigger. 2. Open up any glulx game (other game types are probably also affected, but I haven't tested) What is the expected output? What do you see instead? The main text displayed should be Linux Libertine, a serif font. Instead, I see a larger, chunkier, sanserif bold italic. (Screenshot attached.) The fixed width font (status window) is unaffected. What version of the product are you using? On what operating system? 2010.1, Mac Please provide any additional information below. Restarting the system or reinstalling Gargoyle will cause the font to display correctly; simply restarting the story file or the Gargoyle application does not. This issue has occurred perhaps 3-4 times for me in the last 2 months or so. The cause is unclear. ``` Original issue reported on code.google.com by `Ek.Temple@gmail.com` on 14 Apr 2011 at 2:28 Attachments: - [Screenshot.png](https://storage.googleapis.com/google-code-attachments/garglk/issue-146/comment-0/Screenshot.png)
1.0
2011.1 for Mac selects wrong font - ``` What steps will reproduce the problem? 1. The problem is intermittent, I haven't found a trigger. 2. Open up any glulx game (other game types are probably also affected, but I haven't tested) What is the expected output? What do you see instead? The main text displayed should be Linux Libertine, a serif font. Instead, I see a larger, chunkier, sanserif bold italic. (Screenshot attached.) The fixed width font (status window) is unaffected. What version of the product are you using? On what operating system? 2010.1, Mac Please provide any additional information below. Restarting the system or reinstalling Gargoyle will cause the font to display correctly; simply restarting the story file or the Gargoyle application does not. This issue has occurred perhaps 3-4 times for me in the last 2 months or so. The cause is unclear. ``` Original issue reported on code.google.com by `Ek.Temple@gmail.com` on 14 Apr 2011 at 2:28 Attachments: - [Screenshot.png](https://storage.googleapis.com/google-code-attachments/garglk/issue-146/comment-0/Screenshot.png)
non_main
for mac selects wrong font what steps will reproduce the problem the problem is intermittent i haven t found a trigger open up any glulx game other game types are probably also affected but i haven t tested what is the expected output what do you see instead the main text displayed should be linux libertine a serif font instead i see a larger chunkier sanserif bold italic screenshot attached the fixed width font status window is unaffected what version of the product are you using on what operating system mac please provide any additional information below restarting the system or reinstalling gargoyle will cause the font to display correctly simply restarting the story file or the gargoyle application does not this issue has occurred perhaps times for me in the last months or so the cause is unclear original issue reported on code google com by ek temple gmail com on apr at attachments
0
102,006
4,149,570,145
IssuesEvent
2016-06-15 14:50:40
jpppina/migracion-galeno-art-forms11g
https://api.github.com/repos/jpppina/migracion-galeno-art-forms11g
opened
CIA – ARHP024 – Help button items differ
Aplicación-CIA Error Priority-Low
Environment: CIA User: LEMCKED/a1234567 Menu option: System Parameters / Semaphore Parameters Form: ARHP024 Error: • When positioned on any field and pressing the Help button, the number of items shown in the help is larger in the Web application. The screens are also different. Comparative image attached. • In the 6i application, pressing F1 brings up the help screen, whereas in the Web application it does not. Observations: --
1.0
CIA – ARHP024 – Help button items differ - Environment: CIA User: LEMCKED/a1234567 Menu option: System Parameters / Semaphore Parameters Form: ARHP024 Error: • When positioned on any field and pressing the Help button, the number of items shown in the help is larger in the Web application. The screens are also different. Comparative image attached. • In the 6i application, pressing F1 brings up the help screen, whereas in the Web application it does not. Observations: --
non_main
cia – – help button items differ environment cia user lemcked menu option system parameters semaphore parameters form error • when positioned on any field and pressing the help button the number of items shown in the help is larger in the web application the screens are also different comparative image attached • in the application pressing brings up the help screen whereas in the web application it does not observations
0
3,786
16,037,518,340
IssuesEvent
2021-04-22 00:42:31
chocolatey-community/chocolatey-package-requests
https://api.github.com/repos/chocolatey-community/chocolatey-package-requests
closed
RFP - obsidian
Blocked Upstream Status: Available For Maintainer(s)
## Checklist - [x] The package I am requesting does not already exist on https://chocolatey.org/packages; - [x] There is no open issue for this package; - [x] The issue title starts with 'RFP - '; - [x] The download URL is public and not locked behind a paywall / login; ## Package Details Software project URL : https://obsidian.md/ Direct download URL for the software / installer : https://github.com/obsidianmd/obsidian-releases/releases/download/v0.9.2/Obsidian.0.9.2.exe Software summary / short description: Obsidian is a powerful knowledge base that works on top of a local folder of plain text Markdown files.
True
RFP - obsidian - ## Checklist - [x] The package I am requesting does not already exist on https://chocolatey.org/packages; - [x] There is no open issue for this package; - [x] The issue title starts with 'RFP - '; - [x] The download URL is public and not locked behind a paywall / login; ## Package Details Software project URL : https://obsidian.md/ Direct download URL for the software / installer : https://github.com/obsidianmd/obsidian-releases/releases/download/v0.9.2/Obsidian.0.9.2.exe Software summary / short description: Obsidian is a powerful knowledge base that works on top of a local folder of plain text Markdown files.
main
rfp obsidian checklist the package i am requesting does not already exist on there is no open issue for this package the issue title starts with rfp the download url is public and not locked behind a paywall login package details software project url direct download url for the software installer software summary short description obsidian is a powerful knowledge base that works on top of a local folder of plain text markdown files
1