| Column | Dtype | Values / lengths |
|---|---|---|
| Unnamed: 0 | int64 | 1 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 to 19 |
| repo | stringlengths | 7 to 112 |
| repo_url | stringlengths | 36 to 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 3 to 438 |
| labels | stringlengths | 4 to 308 |
| body | stringlengths | 7 to 254k |
| index | stringclasses | 7 values |
| text_combine | stringlengths | 96 to 254k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 to 246k |
| binary_label | int64 | 0 to 1 |
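The sample records below repeat these fields in the same order, one value per field. As a quick orientation, here is a minimal sketch of how such a dump could be inspected with pandas; the file name `issues_events.csv` is a hypothetical placeholder, since the preview does not say how the data is packaged.

```python
# Minimal sketch, assuming the dump is available locally as a CSV file named
# "issues_events.csv" (hypothetical name) with the columns listed above.
import pandas as pd

df = pd.read_csv("issues_events.csv")

# Reproduce the schema-style summary: dtypes plus simple per-column statistics.
print(df.dtypes)
print(df["type"].unique())               # expected: a single class, "IssuesEvent"
print(df["action"].value_counts())       # expected: 3 classes, e.g. "opened", "closed"
print(df["text"].str.len().describe())   # string lengths, cf. the 96 to 246k range above
```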
2,153
2,586,789,225
IssuesEvent
2015-02-17 14:35:07
bootcards/xcomponents
https://api.github.com/repos/bootcards/xcomponents
opened
[Dashboard / Chart ] - The "Closed sales by team member " value is not calculated / displayed correctly.
bug question testlio
**Environment:** Version : XComponents - version 0.1 Device : Microsoft IE 11 Windows 8.1 64 bit **Steps to reproduce:** 1. Open the dash board - http://demo.xcomponents.org:3000/sampler/index.html 2. Verify the "Closed sales by team member" field value. **Expected result:** The "Closed sales by team member" is displayed correctly. **Actual result:** The "Closed sales by team member" is displayed in $ and in s and it's value is 000. Is this the correct behavior. ? Note : This bug is also reproducible if the user clicks on the Show Data / Show Chart button.<p><img src="https://testlio.s3.amazonaws.com/issue/25262/a/16817-medium.jpg"/><br><img src="https://testlio.s3.amazonaws.com/issue/25262/a/16818-medium.jpg"/><br></p>
1.0
[Dashboard / Chart ] - The "Closed sales by team member " value is not calculated / displayed correctly. - **Environment:** Version : XComponents - version 0.1 Device : Microsoft IE 11 Windows 8.1 64 bit **Steps to reproduce:** 1. Open the dash board - http://demo.xcomponents.org:3000/sampler/index.html 2. Verify the "Closed sales by team member" field value. **Expected result:** The "Closed sales by team member" is displayed correctly. **Actual result:** The "Closed sales by team member" is displayed in $ and in s and it's value is 000. Is this the correct behavior. ? Note : This bug is also reproducible if the user clicks on the Show Data / Show Chart button.<p><img src="https://testlio.s3.amazonaws.com/issue/25262/a/16817-medium.jpg"/><br><img src="https://testlio.s3.amazonaws.com/issue/25262/a/16818-medium.jpg"/><br></p>
non_main
the closed sales by team member value is not calculated displayed correctly environment version xcomponents version device microsoft ie windows bit steps to reproduce open the dash board verify the closed sales by team member field value expected result the closed sales by team member is displayed correctly actual result the closed sales by team member is displayed in and in s and it s value is is this the correct behavior note this bug is also reproducible if the user clicks on the show data show chart button img src src
0
4,721
24,351,863,445
IssuesEvent
2022-10-03 01:26:51
gemarkode/hacktoberfest
https://api.github.com/repos/gemarkode/hacktoberfest
opened
Q & A
question maintainers
**Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] **Describe the solution you'd like** A clear and concise description of what you want to happen. **Describe alternatives you've considered** A clear and concise description of any alternative solutions or features you've considered. **Additional context** Add any other context or screenshots about the feature request here.
True
Q & A - **Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] **Describe the solution you'd like** A clear and concise description of what you want to happen. **Describe alternatives you've considered** A clear and concise description of any alternative solutions or features you've considered. **Additional context** Add any other context or screenshots about the feature request here.
main
q a is your feature request related to a problem please describe a clear and concise description of what the problem is ex i m always frustrated when describe the solution you d like a clear and concise description of what you want to happen describe alternatives you ve considered a clear and concise description of any alternative solutions or features you ve considered additional context add any other context or screenshots about the feature request here
1
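From the two records above, the `label` and `binary_label` columns appear to encode the same thing: `non_main` pairs with 0 and `main` with 1. Below is a small sketch of that apparent mapping, inferred from the preview rows rather than from any dataset documentation.

```python
# Apparent label encoding suggested by the preview rows:
# "main" maps to binary_label 1, "non_main" maps to binary_label 0.
# Inferred from the samples above, not from official dataset docs.
LABEL_TO_BINARY = {"main": 1, "non_main": 0}

def encode_label(label: str) -> int:
    """Map the string label to its apparent binary counterpart."""
    return LABEL_TO_BINARY[label]

assert encode_label("non_main") == 0  # matches the bootcards/xcomponents record
assert encode_label("main") == 1      # matches the gemarkode/hacktoberfest record
```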
368,870
25,811,763,978
IssuesEvent
2022-12-11 22:43:45
Michele-Alberti/data-lunch
https://api.github.com/repos/Michele-Alberti/data-lunch
opened
📚 Update docs
documentation
**Is your feature request related to a problem? Please describe.** Project info are available only in `README.md`. **Describe the solution you'd like** - Update the existing `README.md`. - Add _Contribution Guidelines_ and other docs as suggested by **recommended community standards**.
1.0
📚 Update docs - **Is your feature request related to a problem? Please describe.** Project info are available only in `README.md`. **Describe the solution you'd like** - Update the existing `README.md`. - Add _Contribution Guidelines_ and other docs as suggested by **recommended community standards**.
non_main
📚 update docs is your feature request related to a problem please describe project info are available only in readme md describe the solution you d like update the existing readme md add contribution guidelines and other docs as suggested by recommended community standards
0
3,354
13,018,016,118
IssuesEvent
2020-07-26 15:19:19
RapidField/solid-instruments
https://api.github.com/repos/RapidField/solid-instruments
closed
Add messaging support for RabbitMQ.
Category-Feature Source-Maintainer Stage-3-InProgress Subcategory-Functionality Subsystem-Messaging Tag-AddReleaseNote Verdict-Released Version-1.0.25 WindowForDelivery-2020-Q4
# Feature Request This issue represents a request for new **Solid Instruments** functionality. ## Overview There is currently no native support for **RabbitMQ** using the messaging abstractions. A new library should be created which implements the messaging abstractions for **RabbitMQ**. ## Statement of work The following list describes the work to be done and defines acceptance criteria for the feature. 1. A new library named `RapidField.SolidInstruments.Messaging.RabbitMq` should be exposed which draws upon the `RapidField.SolidInstruments.Messaging` abstractions. 2. The library should implement **RabbitMQ** analogs to the types exposed by `RapidField.SolidInstruments.Messaging.AzureServiceBus`. ## Revision control plan **Solid Instruments** uses the [**RapidField Revision Control Workflow**](https://github.com/RapidField/solid-instruments/blob/master/CONTRIBUTING.md#revision-control-strategy). Individual contributors should follow the branching plan below when working on this issue. - `master` is the pull request target for - `develop`, which is the pull request target for - `release/v1.0.25-preview1`, which is the pull request target for - `feature/0007-rabbitmq-support`, which is the pull request target for contributing user branches, which should be named using the pattern - `user/{username}/0007-rabbitmq-support`
True
Add messaging support for RabbitMQ. - # Feature Request This issue represents a request for new **Solid Instruments** functionality. ## Overview There is currently no native support for **RabbitMQ** using the messaging abstractions. A new library should be created which implements the messaging abstractions for **RabbitMQ**. ## Statement of work The following list describes the work to be done and defines acceptance criteria for the feature. 1. A new library named `RapidField.SolidInstruments.Messaging.RabbitMq` should be exposed which draws upon the `RapidField.SolidInstruments.Messaging` abstractions. 2. The library should implement **RabbitMQ** analogs to the types exposed by `RapidField.SolidInstruments.Messaging.AzureServiceBus`. ## Revision control plan **Solid Instruments** uses the [**RapidField Revision Control Workflow**](https://github.com/RapidField/solid-instruments/blob/master/CONTRIBUTING.md#revision-control-strategy). Individual contributors should follow the branching plan below when working on this issue. - `master` is the pull request target for - `develop`, which is the pull request target for - `release/v1.0.25-preview1`, which is the pull request target for - `feature/0007-rabbitmq-support`, which is the pull request target for contributing user branches, which should be named using the pattern - `user/{username}/0007-rabbitmq-support`
main
add messaging support for rabbitmq feature request this issue represents a request for new solid instruments functionality overview there is currently no native support for rabbitmq using the messaging abstractions a new library should be created which implements the messaging abstractions for rabbitmq statement of work the following list describes the work to be done and defines acceptance criteria for the feature a new library named rapidfield solidinstruments messaging rabbitmq should be exposed which draws upon the rapidfield solidinstruments messaging abstractions the library should implement rabbitmq analogs to the types exposed by rapidfield solidinstruments messaging azureservicebus revision control plan solid instruments uses the individual contributors should follow the branching plan below when working on this issue master is the pull request target for develop which is the pull request target for release which is the pull request target for feature rabbitmq support which is the pull request target for contributing user branches which should be named using the pattern user username rabbitmq support
1
67,384
27,824,239,695
IssuesEvent
2023-03-19 15:30:32
SyntaxErrorLineNULL/chat-service
https://api.github.com/repos/SyntaxErrorLineNULL/chat-service
closed
Create simple user service
user user-service user-repository
Create: 1. Domain { ID, first, last name, email } 2. Service 3. Repository
1.0
Create simple user service - Create: 1. Domain { ID, first, last name, email } 2. Service 3. Repository
non_main
create simple user service create domain id first last name email service repository
0
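Comparing `text_combine` with `text` across the records so far, `text` looks like a normalized copy: lowercased, with URLs, HTML, digits, and punctuation dropped and whitespace collapsed. The sketch below is a rough approximation of that cleaning step; the dataset's actual preprocessing code is not shown here, so treat the exact rules (especially around emoji and non-Latin characters) as assumptions.

```python
import re

def normalize(text_combine: str) -> str:
    """Approximate the apparent text_combine-to-text cleaning (assumed rules)."""
    text = text_combine.lower()
    text = re.sub(r"<[^>]+>", " ", text)        # strip inline HTML tags
    text = re.sub(r"https?://\S+", " ", text)   # strip URLs
    text = re.sub(r"[^a-z\s]", " ", text)       # drop digits and punctuation
    return re.sub(r"\s+", " ", text).strip()    # collapse whitespace

print(normalize("Create simple user service - Create: 1. Domain { ID, first, last name, email }"))
# -> "create simple user service create domain id first last name email"
```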
419,858
12,229,207,096
IssuesEvent
2020-05-03 23:00:21
localstack/localstack
https://api.github.com/repos/localstack/localstack
closed
CloudFormation Extended called lower function with SmoothStreaming boolean type?
PRO priority-high
<!-- Love localstack? Please consider supporting our collective: ๐Ÿ‘‰ https://opencollective.com/localstack/donate --> # Type of request: This is a ... [x] bug report [ ] feature request # Detailed description An error occurred inside cloudformtion_extended.py.enc. ``` AttributeError: 'bool' object has no attribute 'lower' in /opt/code/localstack/.venv/lib/python3.8/site-packages/localstack_ext/services/cloudformation/cloudformation_extended.py ``` I have written a json file for cloudformation and tried to run "aws cloudformation create-stack" with the template file. > Resources.CloudFront.Properties.DistributionConfig.DefaultCacheBehavior.SmoothStreaming ``` "DistributionConfig": { "DefaultCacheBehavior": { "SmoothStreaming": false, ``` It looks like **cloudformation_extended** adapted a lower function to "SmoothStreaming(boolean)" . AWS says that it's boolean. > https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-cloudfront-distribution-defaultcachebehavior.html#cfn-cloudfront-distribution-defaultcachebehavior-smoothstreaming > type: Boolean Could you check this situation? Thank you. Full traceback is here. ``` localstack_1 | 2020-04-10 15:14:21,683:API: Error on request: localstack_1 | Traceback (most recent call last): localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/werkzeug/serving.py", line 323, in run_wsgi localstack_1 | execute(self.server.app) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/werkzeug/serving.py", line 312, in execute localstack_1 | application_iter = app(environ, start_response) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/server.py", line 135, in __call__ localstack_1 | return backend_app(environ, start_response) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask/app.py", line 2464, in __call__ localstack_1 | return self.wsgi_app(environ, start_response) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask/app.py", line 2450, in wsgi_app localstack_1 | response = self.handle_exception(e) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask_cors/extension.py", line 161, in wrapped_function localstack_1 | return cors_after_request(app.make_response(f(*args, **kwargs))) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask/app.py", line 1867, in handle_exception localstack_1 | reraise(exc_type, exc_value, tb) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise localstack_1 | raise value localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app localstack_1 | response = self.full_dispatch_request() localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request localstack_1 | rv = self.handle_user_exception(e) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask_cors/extension.py", line 161, in wrapped_function localstack_1 | return cors_after_request(app.make_response(f(*args, **kwargs))) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception localstack_1 | reraise(exc_type, exc_value, tb) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise localstack_1 | raise value localstack_1 | File 
"/opt/code/localstack/.venv/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request localstack_1 | rv = self.dispatch_request() localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request localstack_1 | return self.view_functions[rule.endpoint](**req.view_args) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/core/utils.py", line 146, in __call__ localstack_1 | result = self.callback(request, request.url, {}) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/core/responses.py", line 197, in dispatch localstack_1 | return cls()._dispatch(*args, **kwargs) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/core/responses.py", line 295, in _dispatch localstack_1 | return self.call_action() localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/core/responses.py", line 380, in call_action localstack_1 | response = method() localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/cloudformation/responses.py", line 64, in create_stack localstack_1 | stack = self.cloudformation_backend.create_stack( localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/cloudformation/models.py", line 564, in create_stack localstack_1 | new_stack.initialize_resources() localstack_1 | File "/opt/code/localstack/localstack/services/cloudformation/cloudformation_starter.py", line 748, in initialize_resources localstack_1 | self.resource_map.create() localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/cloudformation/parsing.py", line 561, in create localstack_1 | if isinstance(self[resource], ec2_models.TaggedEC2Resource): localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/cloudformation/parsing.py", line 472, in __getitem__ localstack_1 | new_resource = parse_and_create_resource( localstack_1 | File "/opt/code/localstack/localstack/services/cloudformation/cloudformation_starter.py", line 185, in parse_and_create_resource localstack_1 | return _parse_and_create_resource( localstack_1 | File "/opt/code/localstack/localstack/services/cloudformation/cloudformation_starter.py", line 213, in _parse_and_create_resource localstack_1 | resource_tuple = parsing.parse_resource(logical_id, resource_json, resources_map) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/cloudformation/parsing.py", line 315, in parse_resource localstack_1 | resource_json = clean_json(resource_json, resources_map) localstack_1 | File "/opt/code/localstack/localstack/services/cloudformation/cloudformation_starter.py", line 169, in clean_json localstack_1 | result = clean_json_orig(resource_json, resources_map) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/cloudformation/parsing.py", line 260, in clean_json localstack_1 | cleaned_val = clean_json(value, resources_map) localstack_1 | File "/opt/code/localstack/localstack/services/cloudformation/cloudformation_starter.py", line 169, in clean_json localstack_1 | result = clean_json_orig(resource_json, resources_map) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/cloudformation/parsing.py", line 260, in clean_json localstack_1 | cleaned_val = clean_json(value, resources_map) localstack_1 | File "/opt/code/localstack/localstack/services/cloudformation/cloudformation_starter.py", line 169, in 
clean_json localstack_1 | result = clean_json_orig(resource_json, resources_map) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/cloudformation/parsing.py", line 260, in clean_json localstack_1 | cleaned_val = clean_json(value, resources_map) localstack_1 | File "/opt/code/localstack/localstack/services/cloudformation/cloudformation_starter.py", line 169, in clean_json localstack_1 | result = clean_json_orig(resource_json, resources_map) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/cloudformation/parsing.py", line 170, in clean_json localstack_1 | resource = resources_map.get(resource_json["Fn::GetAtt"][0]) localstack_1 | File "/usr/lib/python3.8/_collections_abc.py", line 660, in get localstack_1 | return self[key] localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/cloudformation/parsing.py", line 472, in __getitem__ localstack_1 | new_resource = parse_and_create_resource( localstack_1 | File "/opt/code/localstack/localstack/services/cloudformation/cloudformation_starter.py", line 185, in parse_and_create_resource localstack_1 | return _parse_and_create_resource( localstack_1 | File "/opt/code/localstack/localstack/services/cloudformation/cloudformation_starter.py", line 294, in _parse_and_create_resource localstack_1 | result = deploy_func(logical_id, resource_map_new, stack_name=stack_name) localstack_1 | File "/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py", line 961, in deploy_resource localstack_1 | return execute_resource_action(resource_id, resources, stack_name, ACTION_CREATE) localstack_1 | File "/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py", line 987, in execute_resource_action localstack_1 | result = configure_resource_via_sdk(resource_id, resources, resource_type, func, stack_name) localstack_1 | File "/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py", line 1015, in configure_resource_via_sdk localstack_1 | prop_value = prop_key(resource_props, stack_name=stack_name, resources=resources) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/localstack_ext/services/cloudformation/cloudformation_extended.py", line 87, in lambda_get_distribution_config localstack_1 | localstack_1 | AttributeError: 'bool' object has no attribute 'lower' ``` ## Expected behavior do not call lower. 
## Actual behavior AttributeError: 'bool' object has no attribute 'lower' # Steps to reproduce sample.json ``` { "AWSTemplateFormatVersion": "2010-09-09", "Description": "Cloud Front definition for sample", "Parameters": { "SystemName": { "Type": "String", "Description": "System Name" }, "Environment": { "Type": "String", "Description": "Environment Name" }, "BucketName": { "Type": "String", "Description": "Bucket name" }, "HostedZoneId": { "Type": "String", "Description": "Route53 Hosted Zone Id" }, "Cname": { "Type": "String", "Description": "Access Domain Name" }, "Contents": { "Type": "String", "Description": "Origin Path" }, "LogBucketName": { "Type": "String", "Description": "Log Bucket Name" }, "OriginAccessIdentity": { "Type": "String", "Description": "Access Identity" }, "AcmArn": { "Type": "String", "Description": "Amazon Certificate Manager ARN" } }, "Resources": { "Route53": { "Type": "AWS::Route53::RecordSet", "DependsOn": "CloudFront", "Properties": { "Name": { "Ref": "Cname" }, "Type": "A", "HostedZoneId": { "Ref": "HostedZoneId" }, "AliasTarget": { "DNSName": { "Fn::GetAtt": [ "CloudFront", "DomainName" ] }, "HostedZoneId": "SomeZoneId" } } }, "CloudFront": { "Type": "AWS::CloudFront::Distribution", "Properties": { "DistributionConfig": { "Aliases": [ { "Ref": "Cname" } ], "Origins": [ { "DomainName": { "Fn::Join": [ ".", [ { "Ref": "BucketName" }, "s3", "amazonaws", "com" ] ] }, "Id": { "Fn::Join": [ "-", [ { "Ref": "SystemName" }, { "Ref": "Environment" }, "video" ] ] }, "OriginPath": { "Fn::Join": [ "", [ "/", { "Ref": "Contents" } ] ] }, "S3OriginConfig": { "OriginAccessIdentity": { "Fn::Join": [ "/", [ "origin-access-identity", "cloudfront", { "Ref": "OriginAccessIdentity" } ] ] } } } ], "Enabled": true, "Comment": "Video CloudFront", "DefaultCacheBehavior": { "TargetOriginId": { "Fn::Join": [ "-", [ { "Ref": "SystemName" }, { "Ref": "Environment" }, "video" ] ] }, "AllowedMethods": [ "GET", "HEAD" ], "SmoothStreaming": false, "DefaultTTL": 31536000, "MinTTL": 31536000, "MaxTTL": 31536000, "ForwardedValues": { "QueryString": false, "Cookies": { "Forward": "none" }, "Headers": [ "Access-Control-Request-Headers", "Access-Control-Request-Method", "Origin" ] }, "ViewerProtocolPolicy": "redirect-to-https" }, "ViewerCertificate": { "AcmCertificateArn": { "Ref": "AcmArn" }, "SslSupportMethod": "sni-only", "MinimumProtocolVersion": "TLSv1.1_2016" }, "Logging": { "Bucket": { "Fn::Join": [ "", [ { "Ref": "LogBucketName" }, ".s3.amazonaws.com" ] ] }, "Prefix": "video/" } } } } }, "Outputs": { "ID": { "Value": { "Ref": "CloudFront" } }, "DomainName": { "Value": { "Fn::GetAtt": [ "CloudFront", "DomainName" ] } } } } ``` ## Command used to start LocalStack > TMPDIR=/private$TMPDIR docker-compose up ``` version: '2.1' services: localstack: image: localstack/localstack ports: - "443:443" - "4510-4520:4510-4520" - "4567-4615:4567-4615" - "${PORT_WEB_UI-8080}:${PORT_WEB_UI-8080}" environment: - LOCALSTACK_API_KEY=secret - DEFAULT_REGION=ap-northeast-1 - SERVICES=rds,s3,apigateway,lambda,dynamodb,kinesis,ses,cloudformation,cloudwatch,cognito,kms,cloudfront,route53,stepfunctions,sns,logs,events - DEBUG=1 - LANG=ja_JP.UTF-8 - DOCKER_HOST=unix:///var/run/docker.sock volumes: - "${TMPDIR:-/tmp/localstack}:/tmp/localstack" - "/var/run/docker.sock:/var/run/docker.sock" ``` ## Client code (AWS SDK code snippet, or sequence of "awslocal" commands) > aws --endpoint-url=http://localhost:4581 --profile localstack cloudformation create-stack --stack-name test-cloudfront-video --template-body 
file://./sample.json --parameters ParameterKey=SystemName,ParameterValue="video" ParameterKey=Environment,ParameterValue="test" ParameterKey=BucketName,ParameterValue="my-test-bucket" ParameterKey=HostedZoneId,ParameterValue="SomeZoneId" ParameterKey=Cname,ParameterValue="video.test.mydomain.net" ParameterKey=Contents,ParameterValue="contents" ParameterKey=LogBucketName,ParameterValue="video-test-log" ParameterKey=OriginAccessIdentity,ParameterValue="SomeIdentity" ParameterKey=AcmArn,ParameterValue="arn:aws:acm:us-east-1:0000000000:certificate/00000000-0000-0000-0000-000000000000"
1.0
CloudFormation Extended called lower function with SmoothStreaming boolean type? - <!-- Love localstack? Please consider supporting our collective: ๐Ÿ‘‰ https://opencollective.com/localstack/donate --> # Type of request: This is a ... [x] bug report [ ] feature request # Detailed description An error occurred inside cloudformtion_extended.py.enc. ``` AttributeError: 'bool' object has no attribute 'lower' in /opt/code/localstack/.venv/lib/python3.8/site-packages/localstack_ext/services/cloudformation/cloudformation_extended.py ``` I have written a json file for cloudformation and tried to run "aws cloudformation create-stack" with the template file. > Resources.CloudFront.Properties.DistributionConfig.DefaultCacheBehavior.SmoothStreaming ``` "DistributionConfig": { "DefaultCacheBehavior": { "SmoothStreaming": false, ``` It looks like **cloudformation_extended** adapted a lower function to "SmoothStreaming(boolean)" . AWS says that it's boolean. > https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-cloudfront-distribution-defaultcachebehavior.html#cfn-cloudfront-distribution-defaultcachebehavior-smoothstreaming > type: Boolean Could you check this situation? Thank you. Full traceback is here. ``` localstack_1 | 2020-04-10 15:14:21,683:API: Error on request: localstack_1 | Traceback (most recent call last): localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/werkzeug/serving.py", line 323, in run_wsgi localstack_1 | execute(self.server.app) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/werkzeug/serving.py", line 312, in execute localstack_1 | application_iter = app(environ, start_response) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/server.py", line 135, in __call__ localstack_1 | return backend_app(environ, start_response) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask/app.py", line 2464, in __call__ localstack_1 | return self.wsgi_app(environ, start_response) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask/app.py", line 2450, in wsgi_app localstack_1 | response = self.handle_exception(e) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask_cors/extension.py", line 161, in wrapped_function localstack_1 | return cors_after_request(app.make_response(f(*args, **kwargs))) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask/app.py", line 1867, in handle_exception localstack_1 | reraise(exc_type, exc_value, tb) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise localstack_1 | raise value localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app localstack_1 | response = self.full_dispatch_request() localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request localstack_1 | rv = self.handle_user_exception(e) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask_cors/extension.py", line 161, in wrapped_function localstack_1 | return cors_after_request(app.make_response(f(*args, **kwargs))) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception localstack_1 | reraise(exc_type, exc_value, tb) localstack_1 | File 
"/opt/code/localstack/.venv/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise localstack_1 | raise value localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request localstack_1 | rv = self.dispatch_request() localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request localstack_1 | return self.view_functions[rule.endpoint](**req.view_args) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/core/utils.py", line 146, in __call__ localstack_1 | result = self.callback(request, request.url, {}) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/core/responses.py", line 197, in dispatch localstack_1 | return cls()._dispatch(*args, **kwargs) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/core/responses.py", line 295, in _dispatch localstack_1 | return self.call_action() localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/core/responses.py", line 380, in call_action localstack_1 | response = method() localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/cloudformation/responses.py", line 64, in create_stack localstack_1 | stack = self.cloudformation_backend.create_stack( localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/cloudformation/models.py", line 564, in create_stack localstack_1 | new_stack.initialize_resources() localstack_1 | File "/opt/code/localstack/localstack/services/cloudformation/cloudformation_starter.py", line 748, in initialize_resources localstack_1 | self.resource_map.create() localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/cloudformation/parsing.py", line 561, in create localstack_1 | if isinstance(self[resource], ec2_models.TaggedEC2Resource): localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/cloudformation/parsing.py", line 472, in __getitem__ localstack_1 | new_resource = parse_and_create_resource( localstack_1 | File "/opt/code/localstack/localstack/services/cloudformation/cloudformation_starter.py", line 185, in parse_and_create_resource localstack_1 | return _parse_and_create_resource( localstack_1 | File "/opt/code/localstack/localstack/services/cloudformation/cloudformation_starter.py", line 213, in _parse_and_create_resource localstack_1 | resource_tuple = parsing.parse_resource(logical_id, resource_json, resources_map) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/cloudformation/parsing.py", line 315, in parse_resource localstack_1 | resource_json = clean_json(resource_json, resources_map) localstack_1 | File "/opt/code/localstack/localstack/services/cloudformation/cloudformation_starter.py", line 169, in clean_json localstack_1 | result = clean_json_orig(resource_json, resources_map) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/cloudformation/parsing.py", line 260, in clean_json localstack_1 | cleaned_val = clean_json(value, resources_map) localstack_1 | File "/opt/code/localstack/localstack/services/cloudformation/cloudformation_starter.py", line 169, in clean_json localstack_1 | result = clean_json_orig(resource_json, resources_map) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/cloudformation/parsing.py", line 260, in clean_json localstack_1 | cleaned_val = clean_json(value, 
resources_map) localstack_1 | File "/opt/code/localstack/localstack/services/cloudformation/cloudformation_starter.py", line 169, in clean_json localstack_1 | result = clean_json_orig(resource_json, resources_map) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/cloudformation/parsing.py", line 260, in clean_json localstack_1 | cleaned_val = clean_json(value, resources_map) localstack_1 | File "/opt/code/localstack/localstack/services/cloudformation/cloudformation_starter.py", line 169, in clean_json localstack_1 | result = clean_json_orig(resource_json, resources_map) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/cloudformation/parsing.py", line 170, in clean_json localstack_1 | resource = resources_map.get(resource_json["Fn::GetAtt"][0]) localstack_1 | File "/usr/lib/python3.8/_collections_abc.py", line 660, in get localstack_1 | return self[key] localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/cloudformation/parsing.py", line 472, in __getitem__ localstack_1 | new_resource = parse_and_create_resource( localstack_1 | File "/opt/code/localstack/localstack/services/cloudformation/cloudformation_starter.py", line 185, in parse_and_create_resource localstack_1 | return _parse_and_create_resource( localstack_1 | File "/opt/code/localstack/localstack/services/cloudformation/cloudformation_starter.py", line 294, in _parse_and_create_resource localstack_1 | result = deploy_func(logical_id, resource_map_new, stack_name=stack_name) localstack_1 | File "/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py", line 961, in deploy_resource localstack_1 | return execute_resource_action(resource_id, resources, stack_name, ACTION_CREATE) localstack_1 | File "/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py", line 987, in execute_resource_action localstack_1 | result = configure_resource_via_sdk(resource_id, resources, resource_type, func, stack_name) localstack_1 | File "/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py", line 1015, in configure_resource_via_sdk localstack_1 | prop_value = prop_key(resource_props, stack_name=stack_name, resources=resources) localstack_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/localstack_ext/services/cloudformation/cloudformation_extended.py", line 87, in lambda_get_distribution_config localstack_1 | localstack_1 | AttributeError: 'bool' object has no attribute 'lower' ``` ## Expected behavior do not call lower. 
## Actual behavior AttributeError: 'bool' object has no attribute 'lower' # Steps to reproduce sample.json ``` { "AWSTemplateFormatVersion": "2010-09-09", "Description": "Cloud Front definition for sample", "Parameters": { "SystemName": { "Type": "String", "Description": "System Name" }, "Environment": { "Type": "String", "Description": "Environment Name" }, "BucketName": { "Type": "String", "Description": "Bucket name" }, "HostedZoneId": { "Type": "String", "Description": "Route53 Hosted Zone Id" }, "Cname": { "Type": "String", "Description": "Access Domain Name" }, "Contents": { "Type": "String", "Description": "Origin Path" }, "LogBucketName": { "Type": "String", "Description": "Log Bucket Name" }, "OriginAccessIdentity": { "Type": "String", "Description": "Access Identity" }, "AcmArn": { "Type": "String", "Description": "Amazon Certificate Manager ARN" } }, "Resources": { "Route53": { "Type": "AWS::Route53::RecordSet", "DependsOn": "CloudFront", "Properties": { "Name": { "Ref": "Cname" }, "Type": "A", "HostedZoneId": { "Ref": "HostedZoneId" }, "AliasTarget": { "DNSName": { "Fn::GetAtt": [ "CloudFront", "DomainName" ] }, "HostedZoneId": "SomeZoneId" } } }, "CloudFront": { "Type": "AWS::CloudFront::Distribution", "Properties": { "DistributionConfig": { "Aliases": [ { "Ref": "Cname" } ], "Origins": [ { "DomainName": { "Fn::Join": [ ".", [ { "Ref": "BucketName" }, "s3", "amazonaws", "com" ] ] }, "Id": { "Fn::Join": [ "-", [ { "Ref": "SystemName" }, { "Ref": "Environment" }, "video" ] ] }, "OriginPath": { "Fn::Join": [ "", [ "/", { "Ref": "Contents" } ] ] }, "S3OriginConfig": { "OriginAccessIdentity": { "Fn::Join": [ "/", [ "origin-access-identity", "cloudfront", { "Ref": "OriginAccessIdentity" } ] ] } } } ], "Enabled": true, "Comment": "Video CloudFront", "DefaultCacheBehavior": { "TargetOriginId": { "Fn::Join": [ "-", [ { "Ref": "SystemName" }, { "Ref": "Environment" }, "video" ] ] }, "AllowedMethods": [ "GET", "HEAD" ], "SmoothStreaming": false, "DefaultTTL": 31536000, "MinTTL": 31536000, "MaxTTL": 31536000, "ForwardedValues": { "QueryString": false, "Cookies": { "Forward": "none" }, "Headers": [ "Access-Control-Request-Headers", "Access-Control-Request-Method", "Origin" ] }, "ViewerProtocolPolicy": "redirect-to-https" }, "ViewerCertificate": { "AcmCertificateArn": { "Ref": "AcmArn" }, "SslSupportMethod": "sni-only", "MinimumProtocolVersion": "TLSv1.1_2016" }, "Logging": { "Bucket": { "Fn::Join": [ "", [ { "Ref": "LogBucketName" }, ".s3.amazonaws.com" ] ] }, "Prefix": "video/" } } } } }, "Outputs": { "ID": { "Value": { "Ref": "CloudFront" } }, "DomainName": { "Value": { "Fn::GetAtt": [ "CloudFront", "DomainName" ] } } } } ``` ## Command used to start LocalStack > TMPDIR=/private$TMPDIR docker-compose up ``` version: '2.1' services: localstack: image: localstack/localstack ports: - "443:443" - "4510-4520:4510-4520" - "4567-4615:4567-4615" - "${PORT_WEB_UI-8080}:${PORT_WEB_UI-8080}" environment: - LOCALSTACK_API_KEY=secret - DEFAULT_REGION=ap-northeast-1 - SERVICES=rds,s3,apigateway,lambda,dynamodb,kinesis,ses,cloudformation,cloudwatch,cognito,kms,cloudfront,route53,stepfunctions,sns,logs,events - DEBUG=1 - LANG=ja_JP.UTF-8 - DOCKER_HOST=unix:///var/run/docker.sock volumes: - "${TMPDIR:-/tmp/localstack}:/tmp/localstack" - "/var/run/docker.sock:/var/run/docker.sock" ``` ## Client code (AWS SDK code snippet, or sequence of "awslocal" commands) > aws --endpoint-url=http://localhost:4581 --profile localstack cloudformation create-stack --stack-name test-cloudfront-video --template-body 
file://./sample.json --parameters ParameterKey=SystemName,ParameterValue="video" ParameterKey=Environment,ParameterValue="test" ParameterKey=BucketName,ParameterValue="my-test-bucket" ParameterKey=HostedZoneId,ParameterValue="SomeZoneId" ParameterKey=Cname,ParameterValue="video.test.mydomain.net" ParameterKey=Contents,ParameterValue="contents" ParameterKey=LogBucketName,ParameterValue="video-test-log" ParameterKey=OriginAccessIdentity,ParameterValue="SomeIdentity" ParameterKey=AcmArn,ParameterValue="arn:aws:acm:us-east-1:0000000000:certificate/00000000-0000-0000-0000-000000000000"
non_main
cloudformation extended called lower function with smoothstreaming boolean type love localstack please consider supporting our collective ๐Ÿ‘‰ type of request this is a bug report feature request detailed description an error occurred inside cloudformtion extended py enc attributeerror bool object has no attribute lower in opt code localstack venv lib site packages localstack ext services cloudformation cloudformation extended py i have written a json file for cloudformation and tried to run aws cloudformation create stack with the template file resources cloudfront properties distributionconfig defaultcachebehavior smoothstreaming distributionconfig defaultcachebehavior smoothstreaming false it looks like cloudformation extended adapted a lower function to smoothstreaming boolean aws says that it s boolean type boolean could you check this situation thank you full traceback is here localstack api error on request localstack traceback most recent call last localstack file opt code localstack venv lib site packages werkzeug serving py line in run wsgi localstack execute self server app localstack file opt code localstack venv lib site packages werkzeug serving py line in execute localstack application iter app environ start response localstack file opt code localstack venv lib site packages moto server py line in call localstack return backend app environ start response localstack file opt code localstack venv lib site packages flask app py line in call localstack return self wsgi app environ start response localstack file opt code localstack venv lib site packages flask app py line in wsgi app localstack response self handle exception e localstack file opt code localstack venv lib site packages flask cors extension py line in wrapped function localstack return cors after request app make response f args kwargs localstack file opt code localstack venv lib site packages flask app py line in handle exception localstack reraise exc type exc value tb localstack file opt code localstack venv lib site packages flask compat py line in reraise localstack raise value localstack file opt code localstack venv lib site packages flask app py line in wsgi app localstack response self full dispatch request localstack file opt code localstack venv lib site packages flask app py line in full dispatch request localstack rv self handle user exception e localstack file opt code localstack venv lib site packages flask cors extension py line in wrapped function localstack return cors after request app make response f args kwargs localstack file opt code localstack venv lib site packages flask app py line in handle user exception localstack reraise exc type exc value tb localstack file opt code localstack venv lib site packages flask compat py line in reraise localstack raise value localstack file opt code localstack venv lib site packages flask app py line in full dispatch request localstack rv self dispatch request localstack file opt code localstack venv lib site packages flask app py line in dispatch request localstack return self view functions req view args localstack file opt code localstack venv lib site packages moto core utils py line in call localstack result self callback request request url localstack file opt code localstack venv lib site packages moto core responses py line in dispatch localstack return cls dispatch args kwargs localstack file opt code localstack venv lib site packages moto core responses py line in dispatch localstack return self call action localstack file opt code localstack 
venv lib site packages moto core responses py line in call action localstack response method localstack file opt code localstack venv lib site packages moto cloudformation responses py line in create stack localstack stack self cloudformation backend create stack localstack file opt code localstack venv lib site packages moto cloudformation models py line in create stack localstack new stack initialize resources localstack file opt code localstack localstack services cloudformation cloudformation starter py line in initialize resources localstack self resource map create localstack file opt code localstack venv lib site packages moto cloudformation parsing py line in create localstack if isinstance self models localstack file opt code localstack venv lib site packages moto cloudformation parsing py line in getitem localstack new resource parse and create resource localstack file opt code localstack localstack services cloudformation cloudformation starter py line in parse and create resource localstack return parse and create resource localstack file opt code localstack localstack services cloudformation cloudformation starter py line in parse and create resource localstack resource tuple parsing parse resource logical id resource json resources map localstack file opt code localstack venv lib site packages moto cloudformation parsing py line in parse resource localstack resource json clean json resource json resources map localstack file opt code localstack localstack services cloudformation cloudformation starter py line in clean json localstack result clean json orig resource json resources map localstack file opt code localstack venv lib site packages moto cloudformation parsing py line in clean json localstack cleaned val clean json value resources map localstack file opt code localstack localstack services cloudformation cloudformation starter py line in clean json localstack result clean json orig resource json resources map localstack file opt code localstack venv lib site packages moto cloudformation parsing py line in clean json localstack cleaned val clean json value resources map localstack file opt code localstack localstack services cloudformation cloudformation starter py line in clean json localstack result clean json orig resource json resources map localstack file opt code localstack venv lib site packages moto cloudformation parsing py line in clean json localstack cleaned val clean json value resources map localstack file opt code localstack localstack services cloudformation cloudformation starter py line in clean json localstack result clean json orig resource json resources map localstack file opt code localstack venv lib site packages moto cloudformation parsing py line in clean json localstack resource resources map get resource json localstack file usr lib collections abc py line in get localstack return self localstack file opt code localstack venv lib site packages moto cloudformation parsing py line in getitem localstack new resource parse and create resource localstack file opt code localstack localstack services cloudformation cloudformation starter py line in parse and create resource localstack return parse and create resource localstack file opt code localstack localstack services cloudformation cloudformation starter py line in parse and create resource localstack result deploy func logical id resource map new stack name stack name localstack file opt code localstack localstack utils cloudformation template deployer py line in deploy resource localstack 
return execute resource action resource id resources stack name action create localstack file opt code localstack localstack utils cloudformation template deployer py line in execute resource action localstack result configure resource via sdk resource id resources resource type func stack name localstack file opt code localstack localstack utils cloudformation template deployer py line in configure resource via sdk localstack prop value prop key resource props stack name stack name resources resources localstack file opt code localstack venv lib site packages localstack ext services cloudformation cloudformation extended py line in lambda get distribution config localstack localstack attributeerror bool object has no attribute lower expected behavior do not call lower actual behavior attributeerror bool object has no attribute lower steps to reproduce sample json awstemplateformatversion description cloud front definition for sample parameters systemname type string description system name environment type string description environment name bucketname type string description bucket name hostedzoneid type string description hosted zone id cname type string description access domain name contents type string description origin path logbucketname type string description log bucket name originaccessidentity type string description access identity acmarn type string description amazon certificate manager arn resources type aws recordset dependson cloudfront properties name ref cname type a hostedzoneid ref hostedzoneid aliastarget dnsname fn getatt hostedzoneid somezoneid cloudfront type aws cloudfront distribution properties distributionconfig aliases ref cname origins domainname fn join ref bucketname amazonaws com id fn join ref systemname ref environment video originpath fn join ref contents originaccessidentity fn join origin access identity cloudfront ref originaccessidentity enabled true comment video cloudfront defaultcachebehavior targetoriginid fn join ref systemname ref environment video allowedmethods smoothstreaming false defaultttl minttl maxttl forwardedvalues querystring false cookies forward none headers access control request headers access control request method origin viewerprotocolpolicy redirect to https viewercertificate acmcertificatearn ref acmarn sslsupportmethod sni only minimumprotocolversion logging bucket fn join ref logbucketname amazonaws com prefix video outputs id value ref cloudfront domainname value fn getatt command used to start localstack tmpdir private tmpdir docker compose up version services localstack image localstack localstack ports port web ui port web ui environment localstack api key secret default region ap northeast services rds apigateway lambda dynamodb kinesis ses cloudformation cloudwatch cognito kms cloudfront stepfunctions sns logs events debug lang ja jp utf docker host unix var run docker sock volumes tmpdir tmp localstack tmp localstack var run docker sock var run docker sock client code aws sdk code snippet or sequence of awslocal commands aws endpoint url profile localstack cloudformation create stack stack name test cloudfront video template body file sample json parameters parameterkey systemname parametervalue video parameterkey environment parametervalue test parameterkey bucketname parametervalue my test bucket parameterkey hostedzoneid parametervalue somezoneid parameterkey cname parametervalue video test mydomain net parameterkey contents parametervalue contents parameterkey logbucketname parametervalue video test log 
parameterkey originaccessidentity parametervalue someidentity parameterkey acmarn parametervalue arn aws acm us east certificate
0
5,205
26,450,656,273
IssuesEvent
2023-01-16 10:56:40
precice/precice
https://api.github.com/repos/precice/precice
opened
Check API symbols
maintainability
**Please describe the problem you are trying to solve.** Adding or changing the API normally needs to happen in multiple places for C++, C and Fortran. So, there is a lot that can go wrong or that can be forgotten. **Describe the solution you propose.** Create a CSV file with symbol names for each binding. ``` precice::SolverInterface::isCouplingOngoing();precicec_isCouplingOngoing;precicef_is_coupling_ongoing_ ``` Then use a test to check if they are present. CMake already offers functionality for this: * [C and Fortran](https://cmake.org/cmake/help/latest/module/CheckSymbolExists.html) * [C++](https://cmake.org/cmake/help/latest/module/CheckCXXSymbolExists.html#module:CheckCXXSymbolExists) So we could implement the test as a CMake script that checks these symbols. **Describe alternatives you've considered** Rely on future systemtests **Additional context**
True
Check API symbols - **Please describe the problem you are trying to solve.** Adding or changing the API normally needs to happen in multiple places for C++, C and Fortran. So, there is a lot that can go wrong or that can be forgotten. **Describe the solution you propose.** Create a CSV file with symbol names for each binding. ``` precice::SolverInterface::isCouplingOngoing();precicec_isCouplingOngoing;precicef_is_coupling_ongoing_ ``` Then use a test to check if they are present. CMake already offers functionality for this: * [C and Fortran](https://cmake.org/cmake/help/latest/module/CheckSymbolExists.html) * [C++](https://cmake.org/cmake/help/latest/module/CheckCXXSymbolExists.html#module:CheckCXXSymbolExists) So we could implement the test as a CMake script that checks these symbols. **Describe alternatives you've considered** Rely on future systemtests **Additional context**
main
check api symbols please describe the problem you are trying to solve adding or changing the api normally needs to happen in multiple places for c c and fortran so there is a lot that can go wrong or that can be forgotten describe the solution you propose create a csv file with symbol names for each binding precice solverinterface iscouplingongoing precicec iscouplingongoing precicef is coupling ongoing then use a test to check if they are present cmake already offers functionality for this so we could implement the test as a cmake script that checks these symbols describe alternatives you ve considered rely on future systemtests additional context
1
1,665
6,574,059,785
IssuesEvent
2017-09-11 11:18:01
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
ansible-2.2.0.0 group_by intermittent bug
affects_2.2 bug_report waiting_on_maintainer
<!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> group_by ##### ANSIBLE VERSION <!--- Paste verbatim output from โ€œansible --versionโ€ between quotes below --> ``` ansible 2.2.0.0 ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ##### OS / ENVIRONMENT <!--- RHEL6 --> ##### SUMMARY <!--- Explain the problem briefly --> The problem occurs on 2.2 but not on 2.1 A test case was created containing the following statement: debug: var=groups['mygroup1']|list If dynamic host group "mygroup1" created in a previous playbook by the group_by module contains four hosts, four hosts are output by the debug statement. If "mygroup1" contains eight hosts, only the first host is output by the debug statement most of the time but sometimes the second host is output. This is an intermittent problem. The problem does not occur when the host group contains 3 or 4 hosts but always occurs when the group contains 7 or 8 hosts. ##### STEPS TO REPRODUCE <!--- To reproduce the bug, run the playbook with 8 hosts. If run with 4 hosts, the bug does not occur. If run under Ansible 2.1 with 8 hosts or any large number of hosts, the bug does not occur. --> <!--- Paste example playbooks or commands between quotes below --> - name: Populate host group with host names # hosts: host1:host2:host3:host4 hosts: host1:host2:host3:host4:host5:host6:host7:host8 gather_facts: no tasks: - name: task1 group_by: key='mygroup1' - name: Run playbook on all hosts in host group "mygroup1" hosts: mygroup1 gather_facts: no tasks: - name: task1 debug: var=inventory_hostname - name: Show hosts in host group "mygroup1" hosts: localhost gather_facts: no tasks: - name: task1 debug: var=groups['mygroup1']|list <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> TASK [task1] ******************************************************************* ok: [localhost] => { "groups['mygroup1']|list": [ "host1", "host2", "host3", "host4", "host5", "host6", "host7", "host8", ] } ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes below --> ``` TASK [task1] ******************************************************************* ok: [localhost] => { "groups['mygroup1']|list": [ "host1", ] } ```
True
ansible-2.2.0.0 group_by intermittent bug - <!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> group_by ##### ANSIBLE VERSION <!--- Paste verbatim output from โ€œansible --versionโ€ between quotes below --> ``` ansible 2.2.0.0 ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ##### OS / ENVIRONMENT <!--- RHEL6 --> ##### SUMMARY <!--- Explain the problem briefly --> The problem occurs on 2.2 but not on 2.1 A test case was created containing the following statement: debug: var=groups['mygroup1']|list If dynamic host group "mygroup1" created in a previous playbook by the group_by module contains four hosts, four hosts are output by the debug statement. If "mygroup1" contains eight hosts, only the first host is output by the debug statement most of the time but sometimes the second host is output. This is an intermittent problem. The problem does not occur when the host group contains 3 or 4 hosts but always occurs when the group contains 7 or 8 hosts. ##### STEPS TO REPRODUCE <!--- To reproduce the bug, run the playbook with 8 hosts. If run with 4 hosts, the bug does not occur. If run under Ansible 2.1 with 8 hosts or any large number of hosts, the bug does not occur. --> <!--- Paste example playbooks or commands between quotes below --> - name: Populate host group with host names # hosts: host1:host2:host3:host4 hosts: host1:host2:host3:host4:host5:host6:host7:host8 gather_facts: no tasks: - name: task1 group_by: key='mygroup1' - name: Run playbook on all hosts in host group "mygroup1" hosts: mygroup1 gather_facts: no tasks: - name: task1 debug: var=inventory_hostname - name: Show hosts in host group "mygroup1" hosts: localhost gather_facts: no tasks: - name: task1 debug: var=groups['mygroup1']|list <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> TASK [task1] ******************************************************************* ok: [localhost] => { "groups['mygroup1']|list": [ "host1", "host2", "host3", "host4", "host5", "host6", "host7", "host8", ] } ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes below --> ``` TASK [task1] ******************************************************************* ok: [localhost] => { "groups['mygroup1']|list": [ "host1", ] } ```
main
ansible group by intermittent bug issue type bug report component name group by ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment summary the problem occurs on but not on a test case was created containing the following statement debug var groups list if dynamic host group created in a previous playbook by the group by module contains four hosts four hosts are output by the debug statement if contains eight hosts only the first host is output by the debug statement most of the time but sometimes the second host is output this is an intermittent problem the problem does not occur when the host group contains or hosts but always occurs when the group contains or hosts steps to reproduce to reproduce the bug run the playbook with hosts if run with hosts the bug does not occur if run under ansible with hosts or any large number of hosts the bug does not occur name populate host group with host names hosts hosts gather facts no tasks name group by key name run playbook on all hosts in host group hosts gather facts no tasks name debug var inventory hostname name show hosts in host group hosts localhost gather facts no tasks name debug var groups list expected results task ok groups list actual results task ok groups list
1
5,604
28,056,621,391
IssuesEvent
2023-03-29 09:47:37
conbench/conbench
https://api.github.com/repos/conbench/conbench
closed
rename modules/files "benchmark" -> "result"
maintainability UX - terminology
A hyphen in a Python module name (file name) is not advisable. I think the `Result` will be a rather popular entity in the future, and `result.py` is better than `benchmarkresult.py` or `bmresult.py`.
True
rename modules/files "benchmark" -> "result" - A hyphen in a Python module name (file name) is not advisable. I think the `Result` will be a rather popular entity in the future, and `result.py` is better than `benchmarkresult.py` or `bmresult.py`.
main
rename modules files benchmark result a hyphen in a python module name file name is not advisable i think the result will be a rather popular entity in the future and result py is better than benchmarkresult py or bmresult py
1
2,003
6,718,162,588
IssuesEvent
2017-10-15 09:02:19
Kristinita/Erics-Green-Room
https://api.github.com/repos/Kristinita/Erics-Green-Room
closed
[Feature request] ะะพะฒั‹ะต ะผะตั‚ะฐะดะฐะฝะฝั‹ะต
need-maintainer similar-implemented
### 1. ะ—ะฐะฟั€ะพั ะ‘ั‹ะปะพ ะฑั‹ ะฝะตะฟะปะพั…ะพ, ะตัะปะธ ะฒะฒะตะดัƒั‚ัั ะฝะพะฒั‹ะต ะผะตั‚ะฐะดะฐะฝะฝั‹ะต. 1. `*-ex-ะกะฐัˆะฐ ะžัะปะตะฟะธั‚ะตะปัŒะฝะฐ` โ€” `ะŸั€ะธะผะตั€: ะกะฐัˆะฐ ะžัะปะตะฟะธั‚ะตะปัŒะฝะฐ`; 1. `*-cons-ะกะฐัˆะฐ Evening Blossom` โ€” `ะŸะพะดั€ะพะฑะฝั‹ะน ัะพัั‚ะฐะฒ: ะกะฐัˆะฐ Evening Blossom`; 1. `*-video-https://www.youtube.com/watch?v=loHnI5srRKs&index=30&list=LL8lNlwMsbyCE3lB4cnLxYNQ` โ€” `ะ’ะธะดะตะพ: https://www.youtube.com/watch?v=loHnI5srRKs&index=30&list=LL8lNlwMsbyCE3lB4cnLxYNQ`. ะŸะพ ั‚ะพะผัƒ ะถะต ะพะฑั€ะฐะทั†ัƒ, ะฟะพ ะบะพั‚ะพั€ะพะผัƒ ะฑั‹ะปะธ ัะดะตะปะฐะฝั‹ ะผะตั‚ะฐะดะฐะฝะฝั‹ะต `*-info` ะธ `*-proof`, ะฝะธะบะฐะบะพะณะพ ะฝะพะฒะพะณะพ ั„ัƒะฝะบั†ะธะพะฝะฐะปะฐ ะดะปั ะฝะธั… ะฝะต ะฝัƒะถะฝะพ. ### 2. ะั€ะณัƒะผะตะฝั‚ะฐั†ะธั ะฃะดะพะฑะพั‡ะธั‚ะฐะตะผะพัั‚ัŒ ะบะพะผะผะตะฝั‚ะฐั€ะธะตะฒ. ะšะพะผะผะตะฝั‚ะฐั€ะธะน ะบ ะฒะพะฟั€ะพััƒ ะธ ะพั‚ะฒะตั‚ัƒ ะฒะพัะฟั€ะธะฝะธะผะฐะตั‚ัั ะปัƒั‡ัˆะต, ะบะพะณะดะฐ ะพะฝ ั€ะฐะทะดะตะปั‘ะฝ, ะฐ ะฝะต ะฒัั‘ ัะผะตัˆะฐะฝะพ ะฒ ะบัƒั‡ัƒ. ะŸั€ะธะผะตั€ั‹: + `ex` ั…ะพั‡ัƒ ะธัะฟะพะปัŒะทะพะฒะฐั‚ัŒ ะฒ ะฟะฐะบะตั‚ะต `ะฅัƒะดะพะถะตัั‚ะฒะตะฝะฝั‹ะต ัั€ะตะดัั‚ะฒะฐ ั€ัƒััะบะพะณะพ ัะทั‹ะบะฐ`, + `cons` โ€” ะฒ `ะะปะบะพะณะพะปัŒะฝั‹ั… ะบะพะบั‚ะตะนะปัั…`, + `video` โ€” ะฒ ะฟะฐะบะตั‚ะต `ะคั€ะฐะทั‹ ะธะท ัะพะฒะตั‚ัะบะธั… ั„ะธะปัŒะผะพะฒ`. ### 3. ะŸะตั€ะตะฒะพะดั‹ + `ex` โ€” ะพั‚ `example`, + `cons` โ€” ะพั‚ `consistency`. ะ•ัะปะธ ะทะฐั…ะพะดะธั‚ะต ะฝะฐะทะฒะฐั‚ัŒ ะฟะพ-ะดั€ัƒะณะพะผัƒ, ะฝะต ัั‚ะฐะฝัƒ ะฒะพะทั€ะฐะถะฐั‚ัŒ. ะกะฟะฐัะธะฑะพ.
True
[Feature request] ะะพะฒั‹ะต ะผะตั‚ะฐะดะฐะฝะฝั‹ะต - ### 1. ะ—ะฐะฟั€ะพั ะ‘ั‹ะปะพ ะฑั‹ ะฝะตะฟะปะพั…ะพ, ะตัะปะธ ะฒะฒะตะดัƒั‚ัั ะฝะพะฒั‹ะต ะผะตั‚ะฐะดะฐะฝะฝั‹ะต. 1. `*-ex-ะกะฐัˆะฐ ะžัะปะตะฟะธั‚ะตะปัŒะฝะฐ` โ€” `ะŸั€ะธะผะตั€: ะกะฐัˆะฐ ะžัะปะตะฟะธั‚ะตะปัŒะฝะฐ`; 1. `*-cons-ะกะฐัˆะฐ Evening Blossom` โ€” `ะŸะพะดั€ะพะฑะฝั‹ะน ัะพัั‚ะฐะฒ: ะกะฐัˆะฐ Evening Blossom`; 1. `*-video-https://www.youtube.com/watch?v=loHnI5srRKs&index=30&list=LL8lNlwMsbyCE3lB4cnLxYNQ` โ€” `ะ’ะธะดะตะพ: https://www.youtube.com/watch?v=loHnI5srRKs&index=30&list=LL8lNlwMsbyCE3lB4cnLxYNQ`. ะŸะพ ั‚ะพะผัƒ ะถะต ะพะฑั€ะฐะทั†ัƒ, ะฟะพ ะบะพั‚ะพั€ะพะผัƒ ะฑั‹ะปะธ ัะดะตะปะฐะฝั‹ ะผะตั‚ะฐะดะฐะฝะฝั‹ะต `*-info` ะธ `*-proof`, ะฝะธะบะฐะบะพะณะพ ะฝะพะฒะพะณะพ ั„ัƒะฝะบั†ะธะพะฝะฐะปะฐ ะดะปั ะฝะธั… ะฝะต ะฝัƒะถะฝะพ. ### 2. ะั€ะณัƒะผะตะฝั‚ะฐั†ะธั ะฃะดะพะฑะพั‡ะธั‚ะฐะตะผะพัั‚ัŒ ะบะพะผะผะตะฝั‚ะฐั€ะธะตะฒ. ะšะพะผะผะตะฝั‚ะฐั€ะธะน ะบ ะฒะพะฟั€ะพััƒ ะธ ะพั‚ะฒะตั‚ัƒ ะฒะพัะฟั€ะธะฝะธะผะฐะตั‚ัั ะปัƒั‡ัˆะต, ะบะพะณะดะฐ ะพะฝ ั€ะฐะทะดะตะปั‘ะฝ, ะฐ ะฝะต ะฒัั‘ ัะผะตัˆะฐะฝะพ ะฒ ะบัƒั‡ัƒ. ะŸั€ะธะผะตั€ั‹: + `ex` ั…ะพั‡ัƒ ะธัะฟะพะปัŒะทะพะฒะฐั‚ัŒ ะฒ ะฟะฐะบะตั‚ะต `ะฅัƒะดะพะถะตัั‚ะฒะตะฝะฝั‹ะต ัั€ะตะดัั‚ะฒะฐ ั€ัƒััะบะพะณะพ ัะทั‹ะบะฐ`, + `cons` โ€” ะฒ `ะะปะบะพะณะพะปัŒะฝั‹ั… ะบะพะบั‚ะตะนะปัั…`, + `video` โ€” ะฒ ะฟะฐะบะตั‚ะต `ะคั€ะฐะทั‹ ะธะท ัะพะฒะตั‚ัะบะธั… ั„ะธะปัŒะผะพะฒ`. ### 3. ะŸะตั€ะตะฒะพะดั‹ + `ex` โ€” ะพั‚ `example`, + `cons` โ€” ะพั‚ `consistency`. ะ•ัะปะธ ะทะฐั…ะพะดะธั‚ะต ะฝะฐะทะฒะฐั‚ัŒ ะฟะพ-ะดั€ัƒะณะพะผัƒ, ะฝะต ัั‚ะฐะฝัƒ ะฒะพะทั€ะฐะถะฐั‚ัŒ. ะกะฟะฐัะธะฑะพ.
main
ะฝะพะฒั‹ะต ะผะตั‚ะฐะดะฐะฝะฝั‹ะต ะทะฐะฟั€ะพั ะฑั‹ะปะพ ะฑั‹ ะฝะตะฟะปะพั…ะพ ะตัะปะธ ะฒะฒะตะดัƒั‚ัั ะฝะพะฒั‹ะต ะผะตั‚ะฐะดะฐะฝะฝั‹ะต ex ัะฐัˆะฐ ะพัะปะตะฟะธั‚ะตะปัŒะฝะฐ โ€” ะฟั€ะธะผะตั€ ัะฐัˆะฐ ะพัะปะตะฟะธั‚ะตะปัŒะฝะฐ cons ัะฐัˆะฐ evening blossom โ€” ะฟะพะดั€ะพะฑะฝั‹ะน ัะพัั‚ะฐะฒ ัะฐัˆะฐ evening blossom video โ€” ะฒะธะดะตะพ ะฟะพ ั‚ะพะผัƒ ะถะต ะพะฑั€ะฐะทั†ัƒ ะฟะพ ะบะพั‚ะพั€ะพะผัƒ ะฑั‹ะปะธ ัะดะตะปะฐะฝั‹ ะผะตั‚ะฐะดะฐะฝะฝั‹ะต info ะธ proof ะฝะธะบะฐะบะพะณะพ ะฝะพะฒะพะณะพ ั„ัƒะฝะบั†ะธะพะฝะฐะปะฐ ะดะปั ะฝะธั… ะฝะต ะฝัƒะถะฝะพ ะฐั€ะณัƒะผะตะฝั‚ะฐั†ะธั ัƒะดะพะฑะพั‡ะธั‚ะฐะตะผะพัั‚ัŒ ะบะพะผะผะตะฝั‚ะฐั€ะธะตะฒ ะบะพะผะผะตะฝั‚ะฐั€ะธะน ะบ ะฒะพะฟั€ะพััƒ ะธ ะพั‚ะฒะตั‚ัƒ ะฒะพัะฟั€ะธะฝะธะผะฐะตั‚ัั ะปัƒั‡ัˆะต ะบะพะณะดะฐ ะพะฝ ั€ะฐะทะดะตะปั‘ะฝ ะฐ ะฝะต ะฒัั‘ ัะผะตัˆะฐะฝะพ ะฒ ะบัƒั‡ัƒ ะฟั€ะธะผะตั€ั‹ ex ั…ะพั‡ัƒ ะธัะฟะพะปัŒะทะพะฒะฐั‚ัŒ ะฒ ะฟะฐะบะตั‚ะต ั…ัƒะดะพะถะตัั‚ะฒะตะฝะฝั‹ะต ัั€ะตะดัั‚ะฒะฐ ั€ัƒััะบะพะณะพ ัะทั‹ะบะฐ cons โ€” ะฒ ะฐะปะบะพะณะพะปัŒะฝั‹ั… ะบะพะบั‚ะตะนะปัั… video โ€” ะฒ ะฟะฐะบะตั‚ะต ั„ั€ะฐะทั‹ ะธะท ัะพะฒะตั‚ัะบะธั… ั„ะธะปัŒะผะพะฒ ะฟะตั€ะตะฒะพะดั‹ ex โ€” ะพั‚ example cons โ€” ะพั‚ consistency ะตัะปะธ ะทะฐั…ะพะดะธั‚ะต ะฝะฐะทะฒะฐั‚ัŒ ะฟะพ ะดั€ัƒะณะพะผัƒ ะฝะต ัั‚ะฐะฝัƒ ะฒะพะทั€ะฐะถะฐั‚ัŒ ัะฟะฐัะธะฑะพ
1
297,818
25,765,739,732
IssuesEvent
2022-12-09 01:34:23
Myrfion/LinkFree
https://api.github.com/repos/Myrfion/LinkFree
opened
New Testimonial
testimonial
### Name New Testimonial ### Title Title ### Description Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum
1.0
New Testimonial - ### Name New Testimonial ### Title Title ### Description Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum
non_main
new testimonial name new testimonial title title description lorem ipsum is simply dummy text of the printing and typesetting industry lorem ipsum has been the industry s standard dummy text ever since the when an unknown printer took a galley of type and scrambled it to make a type specimen book it has survived not only five centuries but also the leap into electronic typesetting remaining essentially unchanged it was popularised in the with the release of letraset sheets containing lorem ipsum passages and more recently with desktop publishing software like aldus pagemaker including versions of lorem ipsum
0
143,407
19,178,874,572
IssuesEvent
2021-12-04 03:09:46
AlexRogalskiy/github-action-user-contribution
https://api.github.com/repos/AlexRogalskiy/github-action-user-contribution
opened
CVE-2020-8244 (Medium) detected in bl-1.2.3.tgz
security vulnerability
## CVE-2020-8244 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bl-1.2.3.tgz</b></p></summary> <p>Buffer List: collect buffers and access with a standard readable Buffer interface, streamable too!</p> <p>Library home page: <a href="https://registry.npmjs.org/bl/-/bl-1.2.3.tgz">https://registry.npmjs.org/bl/-/bl-1.2.3.tgz</a></p> <p>Path to dependency file: github-action-user-contribution/package.json</p> <p>Path to vulnerable library: github-action-user-contribution/node_modules/bl/package.json</p> <p> Dependency Hierarchy: - dockerfile_lint-0.3.4.tgz (Root Library) - dockerode-2.5.8.tgz - tar-fs-1.16.3.tgz - tar-stream-1.6.2.tgz - :x: **bl-1.2.3.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/github-action-user-contribution/commit/8ff6f02745cca11685859688129e931c06c1b7cc">8ff6f02745cca11685859688129e931c06c1b7cc</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A buffer over-read vulnerability exists in bl <4.0.3, <3.0.1, <2.2.1, and <1.2.3 which could allow an attacker to supply user input (even typed) that if it ends up in consume() argument and can become negative, the BufferList state can be corrupted, tricking it into exposing uninitialized memory via regular .slice() calls. <p>Publish Date: 2020-08-30 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8244>CVE-2020-8244</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-pp7h-53gx-mx7r">https://github.com/advisories/GHSA-pp7h-53gx-mx7r</a></p> <p>Release Date: 2020-08-30</p> <p>Fix Resolution: 2.2.1,3.0.1,4.0.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-8244 (Medium) detected in bl-1.2.3.tgz - ## CVE-2020-8244 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bl-1.2.3.tgz</b></p></summary> <p>Buffer List: collect buffers and access with a standard readable Buffer interface, streamable too!</p> <p>Library home page: <a href="https://registry.npmjs.org/bl/-/bl-1.2.3.tgz">https://registry.npmjs.org/bl/-/bl-1.2.3.tgz</a></p> <p>Path to dependency file: github-action-user-contribution/package.json</p> <p>Path to vulnerable library: github-action-user-contribution/node_modules/bl/package.json</p> <p> Dependency Hierarchy: - dockerfile_lint-0.3.4.tgz (Root Library) - dockerode-2.5.8.tgz - tar-fs-1.16.3.tgz - tar-stream-1.6.2.tgz - :x: **bl-1.2.3.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/github-action-user-contribution/commit/8ff6f02745cca11685859688129e931c06c1b7cc">8ff6f02745cca11685859688129e931c06c1b7cc</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A buffer over-read vulnerability exists in bl <4.0.3, <3.0.1, <2.2.1, and <1.2.3 which could allow an attacker to supply user input (even typed) that if it ends up in consume() argument and can become negative, the BufferList state can be corrupted, tricking it into exposing uninitialized memory via regular .slice() calls. <p>Publish Date: 2020-08-30 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8244>CVE-2020-8244</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-pp7h-53gx-mx7r">https://github.com/advisories/GHSA-pp7h-53gx-mx7r</a></p> <p>Release Date: 2020-08-30</p> <p>Fix Resolution: 2.2.1,3.0.1,4.0.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_main
cve medium detected in bl tgz cve medium severity vulnerability vulnerable library bl tgz buffer list collect buffers and access with a standard readable buffer interface streamable too library home page a href path to dependency file github action user contribution package json path to vulnerable library github action user contribution node modules bl package json dependency hierarchy dockerfile lint tgz root library dockerode tgz tar fs tgz tar stream tgz x bl tgz vulnerable library found in head commit a href found in base branch master vulnerability details a buffer over read vulnerability exists in bl and which could allow an attacker to supply user input even typed that if it ends up in consume argument and can become negative the bufferlist state can be corrupted tricking it into exposing uninitialized memory via regular slice calls publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
2,409
8,557,941,692
IssuesEvent
2018-11-08 16:53:09
ansible/ansible
https://api.github.com/repos/ansible/ansible
closed
RFE: succeed_when
affects_2.8 cloud feature module needs_maintainer support:community support:core windows
##### SUMMARY A new `succeed_when` argument (with the reverse logic of `failed_when`) would help define the success logic for command and/or shell tasks when there is not a module to manage a certain operation. If the criteria was not met then it is assumed to have failed. ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME - command - shell - win_command - win_shell ##### ADDITIONAL INFORMATION Example: ```yaml - name: Create a new libvirt network command: virsh net-define libvirt_network.xml register: define_libvirt_network succeed_when: "Requested operation is not valid: network is already active" in define_libvirt_network.stdout changed_when: define_libvirt_network.rc == 0 ``` (Note: There is a virt_net module that would be better in this case, however this is just an example showcasing how `succeed_when` would work). This could be even more useful if it's flexible enough to work with modules beyond just `command` or `shell`.
True
RFE: succeed_when - ##### SUMMARY A new `succeed_when` argument (with the reverse logic of `failed_when`) would help define the success logic for command and/or shell tasks when there is not a module to manage a certain operation. If the criteria was not met then it is assumed to have failed. ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME - command - shell - win_command - win_shell ##### ADDITIONAL INFORMATION Example: ```yaml - name: Create a new libvirt network command: virsh net-define libvirt_network.xml register: define_libvirt_network succeed_when: "Requested operation is not valid: network is already active" in define_libvirt_network.stdout changed_when: define_libvirt_network.rc == 0 ``` (Note: There is a virt_net module that would be better in this case, however this is just an example showcasing how `succeed_when` would work). This could be even more useful if it's flexible enough to work with modules beyond just `command` or `shell`.
main
rfe succeed when summary a new succeed when argument with the reverse logic of failed when would help define the success logic for command and or shell tasks when there is not a module to manage a certain operation if the criteria was not met then it is assumed to have failed issue type feature idea component name command shell win command win shell additional information example yaml name create a new libvirt network command virsh net define libvirt network xml register define libvirt network succeed when requested operation is not valid network is already active in define libvirt network stdout changed when define libvirt network rc note there is a virt net module that would be better in this case however this is just an example showcasing how succeed when would work this could be even more useful if it s flexible enough to work with modules beyond just command or shell
1
1,474
6,399,728,539
IssuesEvent
2017-08-05 02:51:17
DynamoRIO/dynamorio
https://api.github.com/repos/DynamoRIO/dynamorio
closed
improve stack overflow detection
Maintainability Type-Feature
DR has a special crash message for a stack overflow, but is_stack_overflow() only checks the dstack and initstack (i.e., not the signal stack). It is also under ifdef STACK_GUARD_PAGE which is only on for debug builds. Furthermore, we do have -guard_pages on for release builds but is_stack_overflow() does not check for them, only the (extra) guard page taken away from the stack space for STACK_GUARD_PAGE. This issue covers: + having is_stack_overflow() also look for the -guard_pages page + enabling the STACK_GUARD_PAGE code (at least the is_stack_overflow() and crash message) in release build + eliminating the redundant guard page for STACK_GUARD_PAGE + -guard_pages + labeling stack overflow on signal stacks as such Elaborating further on the extra page: With -guard_pages (on by default) there's already a stack guard page, yet we take away one page from each stack for an extra one in debug build: ``` 5484e000-54855000 ---p 00000000 00:00 0 54855000-54856000 r-xp 00000000 00:00 0 54856000-54859000 rw-p 00000000 00:00 0 54859000-5485d000 ---p 00000000 00:00 0 ``` On UNIX, we just bail on a write to the extra page: we don't expand the stack into it. (On Windows we mark it PAGE_GUARD: but on the guard page fault we terminate.) We should change the is_stack_overflow() code to use the inaccessible guard page that's already there and eliminate this extra one for -guard_pages. For -no_guard_pages, IMHO we should also *not* include this extra page in the size: it's misleading to users who know their max stack size yet won't be getting that much when they request it. The original thinking was that we wanted debug build overflow a full page before release build might so we're conservative in our sizing.
True
improve stack overflow detection - DR has a special crash message for a stack overflow, but is_stack_overflow() only checks the dstack and initstack (i.e., not the signal stack). It is also under ifdef STACK_GUARD_PAGE which is only on for debug builds. Furthermore, we do have -guard_pages on for release builds but is_stack_overflow() does not check for them, only the (extra) guard page taken away from the stack space for STACK_GUARD_PAGE. This issue covers: + having is_stack_overflow() also look for the -guard_pages page + enabling the STACK_GUARD_PAGE code (at least the is_stack_overflow() and crash message) in release build + eliminating the redundant guard page for STACK_GUARD_PAGE + -guard_pages + labeling stack overflow on signal stacks as such Elaborating further on the extra page: With -guard_pages (on by default) there's already a stack guard page, yet we take away one page from each stack for an extra one in debug build: ``` 5484e000-54855000 ---p 00000000 00:00 0 54855000-54856000 r-xp 00000000 00:00 0 54856000-54859000 rw-p 00000000 00:00 0 54859000-5485d000 ---p 00000000 00:00 0 ``` On UNIX, we just bail on a write to the extra page: we don't expand the stack into it. (On Windows we mark it PAGE_GUARD: but on the guard page fault we terminate.) We should change the is_stack_overflow() code to use the inaccessible guard page that's already there and eliminate this extra one for -guard_pages. For -no_guard_pages, IMHO we should also *not* include this extra page in the size: it's misleading to users who know their max stack size yet won't be getting that much when they request it. The original thinking was that we wanted debug build overflow a full page before release build might so we're conservative in our sizing.
main
improve stack overflow detection dr has a special crash message for a stack overflow but is stack overflow only checks the dstack and initstack i e not the signal stack it is also under ifdef stack guard page which is only on for debug builds furthermore we do have guard pages on for release builds but is stack overflow does not check for them only the extra guard page taken away from the stack space for stack guard page this issue covers having is stack overflow also look for the guard pages page enabling the stack guard page code at least the is stack overflow and crash message in release build eliminating the redundant guard page for stack guard page guard pages labeling stack overflow on signal stacks as such elaborating further on the extra page with guard pages on by default there s already a stack guard page yet we take away one page from each stack for an extra one in debug build p r xp rw p p on unix we just bail on a write to the extra page we don t expand the stack into it on windows we mark it page guard but on the guard page fault we terminate we should change the is stack overflow code to use the inaccessible guard page that s already there and eliminate this extra one for guard pages for no guard pages imho we should also not include this extra page in the size it s misleading to users who know their max stack size yet won t be getting that much when they request it the original thinking was that we wanted debug build overflow a full page before release build might so we re conservative in our sizing
1
371
3,367,371,706
IssuesEvent
2015-11-22 04:09:16
codeforsanjose/codeforsanjose
https://api.github.com/repos/codeforsanjose/codeforsanjose
opened
No Rubocop violations
maintainability pickup task
Good code style is important. As of the time of this ticket, `rake rubocop` currently reports 56 violations in the develop and production branches. They must be cleansed. This is a perfect pickup task.
True
No Rubocop violations - Good code style is important. As of the time of this ticket, `rake rubocop` currently reports 56 violations in the develop and production branches. They must be cleansed. This is a perfect pickup task.
main
no rubocop violations good code style is important as of the time of this ticket rake rubocop currently reports violations in the develop and production branches they must be cleansed this is a perfect pickup task
1
1,831
6,577,356,939
IssuesEvent
2017-09-12 00:20:48
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
linode module: "name" parameter required, but documentation says it isn't
affects_2.1 bug_report cloud docs_report waiting_on_maintainer
##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME linode module ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Running Ansible from OS X 10.11.5 ##### SUMMARY The documentation for the linode module at http://docs.ansible.com/ansible/linode_module.html claims that the "name" parameter is not required, but I seem to be unable to successfully use the linode module without it, it says "name is required for active state". (P.S. I don't actually understand what value the "name" parameter should have, but using my server's hostname "caprice" makes the playbook run fine. What is the purpose of the "name" parameter, and how is it different from the "linode_id" parameter?) ##### STEPS TO REPRODUCE Here's a sample playbook named "reboot.yml": ``` --- - hosts: caprice tasks: - name: Reboot the server local_action: module: linode api_key: "{{ linode_api_key }}" # name: caprice linode_id: "{{ linode_id }}" state: restarted ``` ##### EXPECTED RESULTS I expected the playbook to run successfully without the "name" parameter. ##### ACTUAL RESULTS ``` Vin:ansible nelson$ ansible-playbook reboot.yml --ask-vault-pass -vvvv No config file found; using defaults Vault password: Loaded callback default of type stdout, v2.0 PLAYBOOK: reboot.yml *********************************************************** 1 plays in reboot.yml PLAY [caprice] ***************************************************************** TASK [setup] ******************************************************************* <caprice> ESTABLISH SSH CONNECTION FOR USER: None <caprice> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/Users/nelson/.ansible/cp/ansible-ssh-%h-%p-%r caprice '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1465211261.42-104881048472456 `" && echo ansible-tmp-1465211261.42-104881048472456="` echo $HOME/.ansible/tmp/ansible-tmp-1465211261.42-104881048472456 `" ) && sleep 0'"'"'' <caprice> PUT /var/folders/wj/fj_s9pp157xb_c7hb_r91rtm0000gn/T/tmpjrQJWt TO /home/nelson/.ansible/tmp/ansible-tmp-1465211261.42-104881048472456/setup <caprice> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/Users/nelson/.ansible/cp/ansible-ssh-%h-%p-%r '[caprice]' <caprice> ESTABLISH SSH CONNECTION FOR USER: None <caprice> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/Users/nelson/.ansible/cp/ansible-ssh-%h-%p-%r -tt caprice '/bin/sh -c '"'"'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/nelson/.ansible/tmp/ansible-tmp-1465211261.42-104881048472456/setup; rm -rf "/home/nelson/.ansible/tmp/ansible-tmp-1465211261.42-104881048472456/" > /dev/null 2>&1 && sleep 0'"'"'' ok: [caprice] TASK [Reboot the server] ******************************************************* task path: /Users/nelson/Code/server_documents/ansible/reboot.yml:5 <localhost> ESTABLISH LOCAL CONNECTION FOR USER: nelson 
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1465211263.04-60988620546823 `" && echo ansible-tmp-1465211263.04-60988620546823="` echo $HOME/.ansible/tmp/ansible-tmp-1465211263.04-60988620546823 `" ) && sleep 0' <localhost> PUT /var/folders/wj/fj_s9pp157xb_c7hb_r91rtm0000gn/T/tmpMBOJkL TO /Users/nelson/.ansible/tmp/ansible-tmp-1465211263.04-60988620546823/linode <localhost> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/nelson/.ansible/tmp/ansible-tmp-1465211263.04-60988620546823/linode; rm -rf "/Users/nelson/.ansible/tmp/ansible-tmp-1465211263.04-60988620546823/" > /dev/null 2>&1 && sleep 0' fatal: [caprice -> localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"api_key": "vMQG7JAhCOKxDkVogfBVMg6vMwxiow0P0Q2Pt4XSOb566Bvt6yKFFhuDyBzGYw6V", "datacenter": null, "distribution": null, "linode_id": 1814698, "name": null, "password": null, "payment_term": 1, "plan": null, "ssh_pub_key": null, "state": "restarted", "swap": 512, "wait": true, "wait_timeout": "300"}, "module_name": "linode"}, "msg": "name is required for active state"} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @reboot.retry PLAY RECAP ********************************************************************* caprice : ok=1 changed=0 unreachable=0 failed=1 Vin:ansible nelson$ ```
True
linode module: "name" parameter required, but documentation says it isn't - ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME linode module ##### ANSIBLE VERSION ``` ansible 2.1.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT Running Ansible from OS X 10.11.5 ##### SUMMARY The documentation for the linode module at http://docs.ansible.com/ansible/linode_module.html claims that the "name" parameter is not required, but I seem to be unable to successfully use the linode module without it, it says "name is required for active state". (P.S. I don't actually understand what value the "name" parameter should have, but using my server's hostname "caprice" makes the playbook run fine. What is the purpose of the "name" parameter, and how is it different from the "linode_id" parameter?) ##### STEPS TO REPRODUCE Here's a sample playbook named "reboot.yml": ``` --- - hosts: caprice tasks: - name: Reboot the server local_action: module: linode api_key: "{{ linode_api_key }}" # name: caprice linode_id: "{{ linode_id }}" state: restarted ``` ##### EXPECTED RESULTS I expected the playbook to run successfully without the "name" parameter. ##### ACTUAL RESULTS ``` Vin:ansible nelson$ ansible-playbook reboot.yml --ask-vault-pass -vvvv No config file found; using defaults Vault password: Loaded callback default of type stdout, v2.0 PLAYBOOK: reboot.yml *********************************************************** 1 plays in reboot.yml PLAY [caprice] ***************************************************************** TASK [setup] ******************************************************************* <caprice> ESTABLISH SSH CONNECTION FOR USER: None <caprice> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/Users/nelson/.ansible/cp/ansible-ssh-%h-%p-%r caprice '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1465211261.42-104881048472456 `" && echo ansible-tmp-1465211261.42-104881048472456="` echo $HOME/.ansible/tmp/ansible-tmp-1465211261.42-104881048472456 `" ) && sleep 0'"'"'' <caprice> PUT /var/folders/wj/fj_s9pp157xb_c7hb_r91rtm0000gn/T/tmpjrQJWt TO /home/nelson/.ansible/tmp/ansible-tmp-1465211261.42-104881048472456/setup <caprice> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/Users/nelson/.ansible/cp/ansible-ssh-%h-%p-%r '[caprice]' <caprice> ESTABLISH SSH CONNECTION FOR USER: None <caprice> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/Users/nelson/.ansible/cp/ansible-ssh-%h-%p-%r -tt caprice '/bin/sh -c '"'"'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/nelson/.ansible/tmp/ansible-tmp-1465211261.42-104881048472456/setup; rm -rf "/home/nelson/.ansible/tmp/ansible-tmp-1465211261.42-104881048472456/" > /dev/null 2>&1 && sleep 0'"'"'' ok: [caprice] TASK [Reboot the server] ******************************************************* task path: 
/Users/nelson/Code/server_documents/ansible/reboot.yml:5 <localhost> ESTABLISH LOCAL CONNECTION FOR USER: nelson <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1465211263.04-60988620546823 `" && echo ansible-tmp-1465211263.04-60988620546823="` echo $HOME/.ansible/tmp/ansible-tmp-1465211263.04-60988620546823 `" ) && sleep 0' <localhost> PUT /var/folders/wj/fj_s9pp157xb_c7hb_r91rtm0000gn/T/tmpMBOJkL TO /Users/nelson/.ansible/tmp/ansible-tmp-1465211263.04-60988620546823/linode <localhost> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/nelson/.ansible/tmp/ansible-tmp-1465211263.04-60988620546823/linode; rm -rf "/Users/nelson/.ansible/tmp/ansible-tmp-1465211263.04-60988620546823/" > /dev/null 2>&1 && sleep 0' fatal: [caprice -> localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"api_key": "vMQG7JAhCOKxDkVogfBVMg6vMwxiow0P0Q2Pt4XSOb566Bvt6yKFFhuDyBzGYw6V", "datacenter": null, "distribution": null, "linode_id": 1814698, "name": null, "password": null, "payment_term": 1, "plan": null, "ssh_pub_key": null, "state": "restarted", "swap": 512, "wait": true, "wait_timeout": "300"}, "module_name": "linode"}, "msg": "name is required for active state"} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @reboot.retry PLAY RECAP ********************************************************************* caprice : ok=1 changed=0 unreachable=0 failed=1 Vin:ansible nelson$ ```
main
linode module name parameter required but documentation says it isn t issue type documentation report component name linode module ansible version ansible config file configured module search path default w o overrides configuration os environment running ansible from os x summary the documentation for the linode module at claims that the name parameter is not required but i seem to be unable to successfully use the linode module without it it says name is required for active state p s i don t actually understand what value the name parameter should have but using my server s hostname caprice makes the playbook run fine what is the purpose of the name parameter and how is it different from the linode id parameter steps to reproduce here s a sample playbook named reboot yml hosts caprice tasks name reboot the server local action module linode api key linode api key name caprice linode id linode id state restarted expected results i expected the playbook to run successfully without the name parameter actual results vin ansible nelson ansible playbook reboot yml ask vault pass vvvv no config file found using defaults vault password loaded callback default of type stdout playbook reboot yml plays in reboot yml play task establish ssh connection for user none ssh exec ssh c vvv o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o connecttimeout o controlpath users nelson ansible cp ansible ssh h p r caprice bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put var folders wj fj t tmpjrqjwt to home nelson ansible tmp ansible tmp setup ssh exec sftp b c vvv o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o connecttimeout o controlpath users nelson ansible cp ansible ssh h p r establish ssh connection for user none ssh exec ssh c vvv o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o connecttimeout o controlpath users nelson ansible cp ansible ssh h p r tt caprice bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python home nelson ansible tmp ansible tmp setup rm rf home nelson ansible tmp ansible tmp dev null sleep ok task task path users nelson code server documents ansible reboot yml establish local connection for user nelson exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put var folders wj fj t tmpmbojkl to users nelson ansible tmp ansible tmp linode exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python users nelson ansible tmp ansible tmp linode rm rf users nelson ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module args api key datacenter null distribution null linode id name null password null payment term plan null ssh pub key null state restarted swap wait true wait timeout module name linode msg name is required for active state no more hosts left to retry use limit reboot retry play recap caprice ok changed unreachable failed vin ansible nelson
1
419,737
12,227,709,738
IssuesEvent
2020-05-03 16:21:12
crispy-computing-machine/Winbinder
https://api.github.com/repos/crispy-computing-machine/Winbinder
closed
Cannot remove icon from toolbar buttons
Low priority bug help wanted
If no index (null or empty index) is specified to a toolbar button, no icon should appear. Icon number zero is appearing instead. Workaround: use some non-existent number as the index
1.0
Cannot remove icon from toolbar buttons - If no index (null or empty index) is specified to a toolbar button, no icon should appear. Icon number zero is appearing instead. Workaround: use some non-existent number as the index
non_main
cannot remove icon from toolbar buttons if no index null or empty index is specified to a toolbar button no icon should appear icon number zero is appearing instead workaround use some non existent number as the index
0
144,928
19,318,932,329
IssuesEvent
2021-12-14 01:40:52
txh51591/tm-repo
https://api.github.com/repos/txh51591/tm-repo
opened
CVE-2020-15250 (Medium) detected in junit-4.12.jar
security vulnerability
## CVE-2020-15250 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>junit-4.12.jar</b></p></summary> <p>JUnit is a unit testing framework for Java, created by Erich Gamma and Kent Beck.</p> <p>Library home page: <a href="http://junit.org">http://junit.org</a></p> <p>Path to dependency file: tm-repo/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/junit/junit/4.12/junit-4.12.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-test-2.1.1.RELEASE.jar (Root Library) - :x: **junit-4.12.jar** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In JUnit4 from version 4.7 and before 4.13.1, the test rule TemporaryFolder contains a local information disclosure vulnerability. On Unix like systems, the system's temporary directory is shared between all users on that system. Because of this, when files and directories are written into this directory they are, by default, readable by other users on that same system. This vulnerability does not allow other users to overwrite the contents of these directories or files. This is purely an information disclosure vulnerability. This vulnerability impacts you if the JUnit tests write sensitive information, like API keys or passwords, into the temporary folder, and the JUnit tests execute in an environment where the OS has other untrusted users. Because certain JDK file system APIs were only added in JDK 1.7, this this fix is dependent upon the version of the JDK you are using. For Java 1.7 and higher users: this vulnerability is fixed in 4.13.1. For Java 1.6 and lower users: no patch is available, you must use the workaround below. If you are unable to patch, or are stuck running on Java 1.6, specifying the `java.io.tmpdir` system environment variable to a directory that is exclusively owned by the executing user will fix this vulnerability. For more information, including an example of vulnerable code, see the referenced GitHub Security Advisory. <p>Publish Date: 2020-10-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15250>CVE-2020-15250</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/junit-team/junit4/security/advisories/GHSA-269g-pwp5-87pp">https://github.com/junit-team/junit4/security/advisories/GHSA-269g-pwp5-87pp</a></p> <p>Release Date: 2020-10-12</p> <p>Fix Resolution: junit:junit:4.13.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-15250 (Medium) detected in junit-4.12.jar - ## CVE-2020-15250 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>junit-4.12.jar</b></p></summary> <p>JUnit is a unit testing framework for Java, created by Erich Gamma and Kent Beck.</p> <p>Library home page: <a href="http://junit.org">http://junit.org</a></p> <p>Path to dependency file: tm-repo/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/junit/junit/4.12/junit-4.12.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-test-2.1.1.RELEASE.jar (Root Library) - :x: **junit-4.12.jar** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In JUnit4 from version 4.7 and before 4.13.1, the test rule TemporaryFolder contains a local information disclosure vulnerability. On Unix like systems, the system's temporary directory is shared between all users on that system. Because of this, when files and directories are written into this directory they are, by default, readable by other users on that same system. This vulnerability does not allow other users to overwrite the contents of these directories or files. This is purely an information disclosure vulnerability. This vulnerability impacts you if the JUnit tests write sensitive information, like API keys or passwords, into the temporary folder, and the JUnit tests execute in an environment where the OS has other untrusted users. Because certain JDK file system APIs were only added in JDK 1.7, this this fix is dependent upon the version of the JDK you are using. For Java 1.7 and higher users: this vulnerability is fixed in 4.13.1. For Java 1.6 and lower users: no patch is available, you must use the workaround below. If you are unable to patch, or are stuck running on Java 1.6, specifying the `java.io.tmpdir` system environment variable to a directory that is exclusively owned by the executing user will fix this vulnerability. For more information, including an example of vulnerable code, see the referenced GitHub Security Advisory. <p>Publish Date: 2020-10-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15250>CVE-2020-15250</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/junit-team/junit4/security/advisories/GHSA-269g-pwp5-87pp">https://github.com/junit-team/junit4/security/advisories/GHSA-269g-pwp5-87pp</a></p> <p>Release Date: 2020-10-12</p> <p>Fix Resolution: junit:junit:4.13.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_main
cve medium detected in junit jar cve medium severity vulnerability vulnerable library junit jar junit is a unit testing framework for java created by erich gamma and kent beck library home page a href path to dependency file tm repo pom xml path to vulnerable library home wss scanner repository junit junit junit jar dependency hierarchy spring boot starter test release jar root library x junit jar vulnerable library found in base branch master vulnerability details in from version and before the test rule temporaryfolder contains a local information disclosure vulnerability on unix like systems the system s temporary directory is shared between all users on that system because of this when files and directories are written into this directory they are by default readable by other users on that same system this vulnerability does not allow other users to overwrite the contents of these directories or files this is purely an information disclosure vulnerability this vulnerability impacts you if the junit tests write sensitive information like api keys or passwords into the temporary folder and the junit tests execute in an environment where the os has other untrusted users because certain jdk file system apis were only added in jdk this this fix is dependent upon the version of the jdk you are using for java and higher users this vulnerability is fixed in for java and lower users no patch is available you must use the workaround below if you are unable to patch or are stuck running on java specifying the java io tmpdir system environment variable to a directory that is exclusively owned by the executing user will fix this vulnerability for more information including an example of vulnerable code see the referenced github security advisory publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution junit junit step up your open source security game with whitesource
0
1,322
5,658,289,757
IssuesEvent
2017-04-10 09:40:34
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
ec2_ami not handling wait:no and tags correctly
affects_2.1 aws bug_report cloud waiting_on_maintainer
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_ami.py ##### ANSIBLE VERSION ``` ansible 2.1.0.0 ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY I have three problems with the current execution: 1. wait: no still waits. It still waits till the AMI is available or a time_out. 2. As shown below, the tags are present but not displayed as output. 3. When the wait time expires, the AMI is created however the tags are ignored, suggesting they are only added after the wait has expired. I would expect this to happen before on wait: no ##### STEPS TO REPRODUCE Playbook: ``` - name: create AMI backup ec2_ami: region: "{{ ec2_region }}" instance_id: "{{ ec2_id }}" wait: no no_reboot: yes name: "{{ ec2_tag_Name }}" tags: creation: "{{ ansible_date_time.epoch }}" expiration: "{{ expiration_date.stdout }}" register: output ``` ##### EXPECTED RESULTS I expected the following results: 1. The playbook moves on and does not wait for results 2. Tags are shown in the output 3. Tags show in the AWS console after a time_out has been reached. ##### ACTUAL RESULTS Output on a normal run (after waiting for the task to complete): ``` TASK [create AMI backup] ******************************************************* changed: [*.*.*.*] TASK [debug] ******************************************************************* ok: [*.*.*.*] => { "output": { "architecture": "x86_64", "block_device_mapping": { "/dev/sda1": { "delete_on_termination": true, "encrypted": false, "size": 8, "snapshot_id": "snap-*******", "volume_type": "gp2" } }, "changed": true, "creationDate": "2016-06-29T09:31:56.000Z", "description": null, "hypervisor": "xen", "image_id": "ami-******", "is_public": false, "location": "*******/*****-2016-06-monthly", "msg": "AMI creation operation complete", "ownerId": "*******", "root_device_name": "/dev/sda1", "root_device_type": "ebs", "state": "available", "tags": {}, "virtualization_type": "hvm" } } ``` In the situation where a time_out occurs: ``` 09:12:36 TASK [create AMI backup] ******************************************************* 09:28:18 fatal: [*.*.*.* -> localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Error while trying to find the new image. Using wait=yes and/or a longer wait_timeout may help."} ``` Also notice how this is by no means the 300 seconds ( 5 minutes ) specified as the default. I think this module needs a good once over to verify the combination of delegation, tags and wait functions properly. Thanks!
True
ec2_ami not handling wait:no and tags correctly - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_ami.py ##### ANSIBLE VERSION ``` ansible 2.1.0.0 ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY I have three problems with the current execution: 1. wait: no still waits. It still waits till the AMI is available or a time_out. 2. As shown below, the tags are present but not displayed as output. 3. When the wait time expires, the AMI is created however the tags are ignored, suggesting they are only added after the wait has expired. I would expect this to happen before on wait: no ##### STEPS TO REPRODUCE Playbook: ``` - name: create AMI backup ec2_ami: region: "{{ ec2_region }}" instance_id: "{{ ec2_id }}" wait: no no_reboot: yes name: "{{ ec2_tag_Name }}" tags: creation: "{{ ansible_date_time.epoch }}" expiration: "{{ expiration_date.stdout }}" register: output ``` ##### EXPECTED RESULTS I expected the following results: 1. The playbook moves on and does not wait for results 2. Tags are shown in the output 3. Tags show in the AWS console after a time_out has been reached. ##### ACTUAL RESULTS Output on a normal run (after waiting for the task to complete): ``` TASK [create AMI backup] ******************************************************* changed: [*.*.*.*] TASK [debug] ******************************************************************* ok: [*.*.*.*] => { "output": { "architecture": "x86_64", "block_device_mapping": { "/dev/sda1": { "delete_on_termination": true, "encrypted": false, "size": 8, "snapshot_id": "snap-*******", "volume_type": "gp2" } }, "changed": true, "creationDate": "2016-06-29T09:31:56.000Z", "description": null, "hypervisor": "xen", "image_id": "ami-******", "is_public": false, "location": "*******/*****-2016-06-monthly", "msg": "AMI creation operation complete", "ownerId": "*******", "root_device_name": "/dev/sda1", "root_device_type": "ebs", "state": "available", "tags": {}, "virtualization_type": "hvm" } } ``` In the situation where a time_out occurs: ``` 09:12:36 TASK [create AMI backup] ******************************************************* 09:28:18 fatal: [*.*.*.* -> localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Error while trying to find the new image. Using wait=yes and/or a longer wait_timeout may help."} ``` Also notice how this is by no means the 300 seconds ( 5 minutes ) specified as the default. I think this module needs a good once over to verify the combination of delegation, tags and wait functions properly. Thanks!
main
ami not handling wait no and tags correctly issue type bug report component name ami py ansible version ansible configuration n a os environment n a summary i have three problems with the current execution wait no still waits it still waits till the ami is available or a time out as shown below the tags are present but not displayed as output when the wait time expires the ami is created however the tags are ignored suggesting they are only added after the wait has expired i would expect this to happen before on wait no steps to reproduce playbook name create ami backup ami region region instance id id wait no no reboot yes name tag name tags creation ansible date time epoch expiration expiration date stdout register output expected results i expected the following results the playbook moves on and does not wait for results tags are shown in the output tags show in the aws console after a time out has been reached actual results output on a normal run after waiting for the task to complete task changed task ok output architecture block device mapping dev delete on termination true encrypted false size snapshot id snap volume type changed true creationdate description null hypervisor xen image id ami is public false location monthly msg ami creation operation complete ownerid root device name dev root device type ebs state available tags virtualization type hvm in the situation where a time out occurs task fatal failed changed false failed true msg error while trying to find the new image using wait yes and or a longer wait timeout may help also notice how this is by no means the seconds minutes specified as the default i think this module needs a good once over to verify the combination of delegation tags and wait functions properly thanks
1
3,765
15,826,212,452
IssuesEvent
2021-04-06 07:06:47
pace/bricks
https://api.github.com/repos/pace/bricks
closed
Introduce dedicated and secondary client for couchdb backend health checks
T::Maintainance
# Motivation Using long-polling/streaming features with couchdb somehow causes the healthcheck calls to be blocked. # Idea Change https://github.com/pace/bricks/blob/master/backend/couchdb/health_check.go to use a second client dedicated to just performing health checks.
True
Introduce dedicated and secondary client for couchdb backend health checks - # Motivation Using long-polling/streaming features with couchdb somehow causes the healthcheck calls to be blocked. # Idea Change https://github.com/pace/bricks/blob/master/backend/couchdb/health_check.go to use a second client dedicated to just performing health checks.
main
introduce dedicated and secondary client for couchdb backend health checks motivation using long polling streaming features with couchdb somehow causes the healthcheck calls to be blocked idea change to use a second client dedicated to just performing health checks
1
219,483
24,498,510,264
IssuesEvent
2022-10-10 10:46:18
elastic/kibana
https://api.github.com/repos/elastic/kibana
closed
[Security Solution]Filter In, Filter Out and Timeline investigation not working on host.risk.calculated_level and user.risk.calculated_level
bug impact:medium Team:Detections and Resp fixed Team: SecuritySolution Team: CTI v8.5.0
**Describe the bug** Filter In, Filter Out and Timeline investigation not working on host.risk.calculated_level and user.risk.calculated_level **Build Details:** ``` VERSION: 8.5.0 BC1 BUILD: 56595 COMMIT: 0d8de4df69f8084a94cdd9638d7de510813cb5ce ``` **Steps** - Login to kibana deployment - Generate alert on the build - Enable host and user risk - generate alert with host and user risk enrichment - Perform hover actions like filter in, filter out, and timeline investigation and observe that they are not working in the expected manner. Please look into the screen-cast for observations. **Screen-Cast:** https://user-images.githubusercontent.com/59917825/191926797-47fac442-c8cc-41f7-9970-c2421fcd1f95.mp4 https://user-images.githubusercontent.com/59917825/191927566-65d88072-82c0-4938-887b-7274aa53211c.mp4
True
[Security Solution]Filter In, Filter Out and Timeline investigation not working on host.risk.calculated_level and user.risk.calculated_level - **Describe the bug** Filter In, Filter Out and Timeline investigation not working on host.risk.calculated_level and user.risk.calculated_level **Build Details:** ``` VERSION: 8.5.0 BC1 BUILD: 56595 COMMIT: 0d8de4df69f8084a94cdd9638d7de510813cb5ce ``` **Steps** - Login to kibana deployment - Generate alert on the build - Enable host and user risk - generate alert with host and user risk enrichment - Perform hover action like filter in, filter out, timeline investigation and observed that they are not working in expected manner . please look into the screen-case for observations. **Screen-Cast:** https://user-images.githubusercontent.com/59917825/191926797-47fac442-c8cc-41f7-9970-c2421fcd1f95.mp4 https://user-images.githubusercontent.com/59917825/191927566-65d88072-82c0-4938-887b-7274aa53211c.mp4
non_main
filter in filter out and timeline investigation not working on host risk calculated level and user risk calculated level describe the bug filter in filter out and timeline investigation not working on host risk calculated level and user risk calculated level build details version build commit steps login to kibana deployment generate alert on the build enable host and user risk generate alert with host and user risk enrichment perform hover action like filter in filter out timeline investigation and observed that they are not working in expected manner please look into the screen case for observations screen cast
0
230,300
7,606,463,144
IssuesEvent
2018-04-30 13:28:09
minishift/minishift
https://api.github.com/repos/minishift/minishift
closed
minishift hostfolder add --interactive should ask type and Label interactively
kind/bug priority/major
Currently, if you want to add a hostfolder interactively then you need to provide type (cifs, sshfs) and hostfolder label as part of it argument. ``` $ minishift hostfolder add MY_LABEL --type [cifs|sshfs] --interactive ``` Instead of this our interactive mode suppose to be ask everything from the user along with label and the type and then make decision what to prompt ``` $ minishift hostfolder add --interactive TYPE: <user input> LABEL: <user input> ... ```
1.0
minishift hostfolder add --interactive should ask type and Label interactively - Currently, if you want to add a hostfolder interactively then you need to provide type (cifs, sshfs) and hostfolder label as part of it argument. ``` $ minishift hostfolder add MY_LABEL --type [cifs|sshfs] --interactive ``` Instead of this our interactive mode suppose to be ask everything from the user along with label and the type and then make decision what to prompt ``` $ minishift hostfolder add --interactive TYPE: <user input> LABEL: <user input> ... ```
non_main
minishift hostfolder add interactive should ask type and label interactively currently if you want to add a hostfolder interactively then you need to provide type cifs sshfs and hostfolder label as part of it argument minishift hostfolder add my label type interactive instead of this our interactive mode suppose to be ask everything from the user along with label and the type and then make decision what to prompt minishift hostfolder add interactive type label
0
5,879
32,003,582,845
IssuesEvent
2023-09-21 13:39:48
camunda/zeebe
https://api.github.com/repos/camunda/zeebe
opened
Expose config via actuator and not logging
kind/toil area/observability area/maintainability component/gateway component/broker
**Description** Spring already comes with the concept of sanitizing the environment to hide sensitive data, configured via custom `SanitizingFunction` implementations. These will sanitize all properties visible in the `/env` and `/configprops` actuators. These only show us the user provided configuration, though. Additionally, we can add the config as a custom `InfoContributor` and display it on the `/info` endpoint if we want, which will show us the effective configuration - but then we have to manually sanitize the outputs, which is less attractive. Finally, we can add our own custom endpoint for this. What's the advantage over logging the config? 1. Security - while we sanitize the config output, it's very basic, and putting it behind a management endpoint allows users to configure the security to access such things. 2. In many cases, looking up the config requires scrolling to the logs to find where it was. Having it readily accessible via an endpoint is very useful - we can ask users to just curl and send us the output. 3. Often, we don't really care to look at the config, so it's just additional logging noise 4. It would cut down on the Spring dependencies incurred via the `ObjectWriterFactory` by just moving it all into actuators.
True
Expose config via actuator and not logging - **Description** Spring already comes with the concept of sanitizing the environment to hide sensitive data, configured via custom `SanitizingFunction` implementations. These will sanitize all properties visible in the `/env` and `/configprops` actuators. These only show us the user provided configuration, though. Additionally, we can add the config as a custom `InfoContributor` and display it on the `/info` endpoint if we want, which will show us the effective configuration - but then we have to manually sanitize the outputs, which is less attractive. Finally, we can add our own custom endpoint for this. What's the advantage over logging the config? 1. Security - while we sanitize the config output, it's very basic, and putting it behind a management endpoint allows users to configure the security to access such things. 2. In many cases, looking up the config requires scrolling to the logs to find where it was. Having it readily accessible via an endpoint is very useful - we can ask users to just curl and send us the output. 3. Often, we don't really care to look at the config, so it's just additional logging noise 4. It would cut down on the Spring dependencies incurred via the `ObjectWriterFactory` by just moving it all into actuators.
main
expose config via actuator and not logging description spring already comes with the concept of sanitizing the environment to hide sensitive data configured via custom sanitizingfunction implementations these will sanitize all properties visible in the env and configprops actuators these only show us the user provided configuration though additionally we can add the config as a custom infocontributor and display it on the info endpoint if we want which will show us the effective configuration but then we have to manually sanitize the outputs which is less attractive finally we can add our own custom endpoint for this what s the advantage over logging the config security while we sanitize the config output it s very basic and putting it behind a management endpoint allows users to configure the security to access such things in many cases looking up the config requires scrolling to the logs to find where it was having it readily accessible via an endpoint is very useful we can ask users to just curl and send us the output often we don t really care to look at the config so it s just additional logging noise it would cut down on the spring dependencies incurred via the objectwriterfactory by just moving it all into actuators
1
155,300
13,617,911,751
IssuesEvent
2020-09-23 17:43:43
gopasspw/gopass
https://api.github.com/repos/gopasspw/gopass
closed
Certain flag combinations are undocumented or no-ops
bug documentation
It is possible sometimes to use a combination of flags that does nothing: ``` gopass generate test/noops 20 gopass show -c -u test/noops gopass show -c -u -o test/noops ``` Also using: ``` gopass show -C -u test/noops gopass show -C -o -u test/noops ``` will show all the content but not copy to the clipboard. But `gopass show -C -o test/noops` works as I would expect it to. There are probably other cases, but most issues seems to arise when using the `-u` flag
1.0
Certain flag combinations are undocumented or no-ops - It is possible sometimes to use a combination of flags that does nothing: ``` gopass generate test/noops 20 gopass show -c -u test/noops gopass show -c -u -o test/noops ``` Also using: ``` gopass show -C -u test/noops gopass show -C -o -u test/noops ``` will show all the content but not copy to the clipboard. But `gopass show -C -o test/noops` works as I would expect it to. There are probably other cases, but most issues seems to arise when using the `-u` flag
non_main
certain flag combinations are undocumented or no ops it is possible sometimes to use a combination of flags that does nothing gopass generate test noops gopass show c u test noops gopass show c u o test noops also using gopass show c u test noops gopass show c o u test noops will show all the content but not copy to the clipboard but gopass show c o test noops works as i would expect it to there are probably other cases but most issues seems to arise when using the u flag
0
69,054
14,970,048,421
IssuesEvent
2021-01-27 19:02:04
jgeraigery/SilverKing
https://api.github.com/repos/jgeraigery/SilverKing
opened
CVE-2017-17485 (High) detected in jackson-databind-2.6.7.1.jar
security vulnerability
## CVE-2017-17485 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.6.7.1.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to vulnerable library: SilverKing/lib/aws-java-sdk-1.11.333/third-party/lib/jackson-databind-2.6.7.1.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.6.7.1.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/jgeraigery/SilverKing/commit/8ba31a514d374422e5f4712cf554ef10ac674e5a">8ba31a514d374422e5f4712cf554ef10ac674e5a</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind through 2.8.10 and 2.9.x through 2.9.3 allows unauthenticated remote code execution because of an incomplete fix for the CVE-2017-7525 deserialization flaw. This is exploitable by sending maliciously crafted JSON input to the readValue method of the ObjectMapper, bypassing a blacklist that is ineffective if the Spring libraries are available in the classpath. <p>Publish Date: 2018-01-10 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-17485>CVE-2017-17485</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Change files</p> <p>Origin: <a href="https://github.com/FasterXML/jackson-databind/commit/2235894210c75f624a3d0cd60bfb0434a20a18bf">https://github.com/FasterXML/jackson-databind/commit/2235894210c75f624a3d0cd60bfb0434a20a18bf</a></p> <p>Release Date: 2017-12-19</p> <p>Fix Resolution: Replace or update the following files: SubTypeValidator.java, BeanDeserializerFactory.java</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.6.7.1","isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.6.7.1","isMinimumFixVersionAvailable":false}],"vulnerabilityIdentifier":"CVE-2017-17485","vulnerabilityDetails":"FasterXML jackson-databind through 2.8.10 and 2.9.x through 2.9.3 allows unauthenticated remote code execution because of an incomplete fix for the CVE-2017-7525 deserialization flaw. This is exploitable by sending maliciously crafted JSON input to the readValue method of the ObjectMapper, bypassing a blacklist that is ineffective if the Spring libraries are available in the classpath.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-17485","cvss2Severity":"high","cvss2Score":"7.5","extraData":{}}</REMEDIATE> -->
True
CVE-2017-17485 (High) detected in jackson-databind-2.6.7.1.jar - ## CVE-2017-17485 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.6.7.1.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to vulnerable library: SilverKing/lib/aws-java-sdk-1.11.333/third-party/lib/jackson-databind-2.6.7.1.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.6.7.1.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/jgeraigery/SilverKing/commit/8ba31a514d374422e5f4712cf554ef10ac674e5a">8ba31a514d374422e5f4712cf554ef10ac674e5a</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind through 2.8.10 and 2.9.x through 2.9.3 allows unauthenticated remote code execution because of an incomplete fix for the CVE-2017-7525 deserialization flaw. This is exploitable by sending maliciously crafted JSON input to the readValue method of the ObjectMapper, bypassing a blacklist that is ineffective if the Spring libraries are available in the classpath. <p>Publish Date: 2018-01-10 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-17485>CVE-2017-17485</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Change files</p> <p>Origin: <a href="https://github.com/FasterXML/jackson-databind/commit/2235894210c75f624a3d0cd60bfb0434a20a18bf">https://github.com/FasterXML/jackson-databind/commit/2235894210c75f624a3d0cd60bfb0434a20a18bf</a></p> <p>Release Date: 2017-12-19</p> <p>Fix Resolution: Replace or update the following files: SubTypeValidator.java, BeanDeserializerFactory.java</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.6.7.1","isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.6.7.1","isMinimumFixVersionAvailable":false}],"vulnerabilityIdentifier":"CVE-2017-17485","vulnerabilityDetails":"FasterXML jackson-databind through 2.8.10 and 2.9.x through 2.9.3 allows unauthenticated remote code execution because of an incomplete fix for the CVE-2017-7525 deserialization flaw. This is exploitable by sending maliciously crafted JSON input to the readValue method of the ObjectMapper, bypassing a blacklist that is ineffective if the Spring libraries are available in the classpath.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-17485","cvss2Severity":"high","cvss2Score":"7.5","extraData":{}}</REMEDIATE> -->
non_main
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library silverking lib aws java sdk third party lib jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details fasterxml jackson databind through and x through allows unauthenticated remote code execution because of an incomplete fix for the cve deserialization flaw this is exploitable by sending maliciously crafted json input to the readvalue method of the objectmapper bypassing a blacklist that is ineffective if the spring libraries are available in the classpath publish date url a href cvss score details base score metrics not available suggested fix type change files origin a href release date fix resolution replace or update the following files subtypevalidator java beandeserializerfactory java isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails fasterxml jackson databind through and x through allows unauthenticated remote code execution because of an incomplete fix for the cve deserialization flaw this is exploitable by sending maliciously crafted json input to the readvalue method of the objectmapper bypassing a blacklist that is ineffective if the spring libraries are available in the classpath vulnerabilityurl
0
334,620
10,142,891,035
IssuesEvent
2019-08-04 06:39:11
jenkins-x/jx
https://api.github.com/repos/jenkins-x/jx
closed
support ChatOps with custom Jenkins Servers via tekton
area/import area/prow area/serverless-jenkins kind/enhancement lifecycle/rotten priority/important-longterm
for folks who have custom Jenkins servers with custom Pipelines defined in a `Jenkinsfile` it would be great to be able to orchestrate pipelines via Prow + tekton but then actually run the pipelines in the custom Jenkins servers. see the docs for more details on how to run a custom Jenkins App: https://jenkins-x.io/architecture/custom-jenkins/
1.0
support ChatOps with custom Jenkins Servers via tekton - for folks who have custom Jenkins servers with custom Pipelines defined in a `Jenkinsfile` it would be great to be able to orchestrate pipelines via Prow + tekton but then actually run the pipelines in the custom Jenkins servers. see the docs for more details on how to run a custom Jenkins App: https://jenkins-x.io/architecture/custom-jenkins/
non_main
support chatops with custom jenkins servers via tekton for folks who have custom jenkins servers with custom pipelines defined in a jenkinsfile it would be great to be able to orchestrate pipelines via prow tekton but then actually run the pipelines in the custom jenkins servers see the docs for more details on how to run a custom jenkins app
0
5,267
26,632,944,066
IssuesEvent
2023-01-24 19:19:41
makubacki/mu_devops
https://api.github.com/repos/makubacki/mu_devops
closed
[Bug]: Test 3
state:needs-triage state:needs-owner state:needs-maintainer-feedback state:wont-fix type:bug urgency:medium
### Is there an existing issue for this? - [X] I have searched existing issues ### Current Behavior Test ### Expected Behavior Test ### Steps To Reproduce Test ### Build Environment ```markdown - OS(s): Test - Tool Chain(s): Test - Targets Impacted: Test ``` ### Version Information ```text Test ``` ### Urgency Medium ### Are you going to fix this? Someone else needs to fix it ### Do you need maintainer feedback? Maintainer feedback requested ### Anything else? _No response_
True
[Bug]: Test 3 - ### Is there an existing issue for this? - [X] I have searched existing issues ### Current Behavior Test ### Expected Behavior Test ### Steps To Reproduce Test ### Build Environment ```markdown - OS(s): Test - Tool Chain(s): Test - Targets Impacted: Test ``` ### Version Information ```text Test ``` ### Urgency Medium ### Are you going to fix this? Someone else needs to fix it ### Do you need maintainer feedback? Maintainer feedback requested ### Anything else? _No response_
main
test is there an existing issue for this i have searched existing issues current behavior test expected behavior test steps to reproduce test build environment markdown os s test tool chain s test targets impacted test version information text test urgency medium are you going to fix this someone else needs to fix it do you need maintainer feedback maintainer feedback requested anything else no response
1
197,910
14,948,991,409
IssuesEvent
2021-01-26 10:52:01
infinispan/infinispan-operator
https://api.github.com/repos/infinispan/infinispan-operator
closed
Remove duplicate code from testsuite
Refactoring test
A refactoring is needed to remove duplicate code introduced with multinamespace test. i.e [waitForPodsOrFail](https://github.com/infinispan/infinispan-operator/blob/23e3886b77b693cc30cdad9ffbd6f1cb7665ad6e/test/e2e/multinamespace/multinamespace_test.go#L86) and following
1.0
Remove duplicate code from testsuite - A refactoring is needed to remove duplicate code introduced with multinamespace test. i.e [waitForPodsOrFail](https://github.com/infinispan/infinispan-operator/blob/23e3886b77b693cc30cdad9ffbd6f1cb7665ad6e/test/e2e/multinamespace/multinamespace_test.go#L86) and following
non_main
remove duplicate code from testsuite a refactoring is needed to remove duplicate code introduced with multinamespace test i e and following
0
732,236
25,250,739,945
IssuesEvent
2022-11-15 14:30:58
mozilla/addons-linter
https://api.github.com/repos/mozilla/addons-linter
closed
Add a link to MDN doc page as part of the description of MANIFEST_INSTALL_ORIGINS error message
component: rule priority: p3
As part of https://github.com/mozilla/addons-linter/issues/4061 we're going to add more specific error messages for errors related to `install_origins`. Ideally we'd have a link to a MDN page, but that MDN page doesn't exist yet and needs to be created first. Once it exists, we can go back and edit the `description` or `MANIFEST_INSTALL_ORIGINS` to include the link to it.
1.0
Add a link to MDN doc page as part of the description of MANIFEST_INSTALL_ORIGINS error message - As part of https://github.com/mozilla/addons-linter/issues/4061 we're going to add more specific error messages for errors related to `install_origins`. Ideally we'd have a link to a MDN page, but that MDN page doesn't exist yet and needs to be created first. Once it exists, we can go back and edit the `description` or `MANIFEST_INSTALL_ORIGINS` to include the link to it.
non_main
add a link to mdn doc page as part of the description of manifest install origins error message as part of we re going to add more specific error messages for errors related to install origins ideally we d have a link to a mdn page but that mdn page doesn t exist yet and needs to be created first once it exists we can go back and edit the description or manifest install origins to include the link to it
0
4,433
23,042,583,286
IssuesEvent
2022-07-23 11:33:56
Lissy93/dashy
https://api.github.com/repos/Lissy93/dashy
closed
[FEATURE_REQUEST] Designer TLD Support
🦄 Feature Request 👤 Awaiting Maintainer Response
### Is your feature request related to a problem? If so, please describe. Yes, I get this error when trying to monitor my custom domain (jon.irish): "An error occurred, see the logs for more info. Whois server not yet supported. We are adding new servers on a weekly basis." I assume that this error will go away when whois support is added? If not, can we get support for designer TLD's? ### Describe the solution you'd like Support for designer TLD's ### Priority Low (Nice-to-have) ### Is this something you would be keen to implement _No response_
True
[FEATURE_REQUEST] Designer TLD Support - ### Is your feature request related to a problem? If so, please describe. Yes, I get this error when trying to monitor my custom domain (jon.irish): "An error occurred, see the logs for more info. Whois server not yet supported. We are adding new servers on a weekly basis." I assume that this error will go away when whois support is added? If not, can we get support for designer TLD's? ### Describe the solution you'd like Support for designer TLD's ### Priority Low (Nice-to-have) ### Is this something you would be keen to implement _No response_
main
designer tld support is your feature request related to a problem if so please describe yes i get this error when trying to monitor my custom domain jon irish an error occurred see the logs for more info whois server not yet supported we are adding new servers on a weekly basis i assume that this error will go away when whois support is added if not can we get support for designer tld s describe the solution you d like support for designer tld s priority low nice to have is this something you would be keen to implement no response
1
90,564
11,419,748,056
IssuesEvent
2020-02-03 08:42:35
undercasetype/fraunces-minisite
https://api.github.com/repos/undercasetype/fraunces-minisite
closed
Thank You For Shopping: think of alternative for the floating labels
needs-design
<img width="1059" alt="image" src="https://user-images.githubusercontent.com/4570664/73055429-11e8c600-3e8d-11ea-9de3-d5cbc8b69ac8.png">
1.0
Thank You For Shopping: think of alternative for the floating labels - <img width="1059" alt="image" src="https://user-images.githubusercontent.com/4570664/73055429-11e8c600-3e8d-11ea-9de3-d5cbc8b69ac8.png">
non_main
thank you for shopping think of alternative for the floating labels img width alt image src
0
107,858
23,493,546,443
IssuesEvent
2022-08-17 21:24:53
sourcegraph/sourcegraph
https://api.github.com/repos/sourcegraph/sourcegraph
closed
UI bug – small time interval differences fail to make key show both values on hover
webapp ux team/code-insights backend
![CleanShot 2022-02-03 at 12 36 35](https://user-images.githubusercontent.com/11967660/152416883-10b0a574-9c82-41a1-9e60-906db620cb75.gif) Notice that these datapoints have the same time but show only 1 value (not 2 values) in the key because it's treating them as separate time points. > Here is an example, these two series have snapshots as follows: "2022-02-03T19:34:22Z" and "2022-02-03T19:34:21Z"
1.0
UI bug – small time interval differences fail to make key show both values on hover - ![CleanShot 2022-02-03 at 12 36 35](https://user-images.githubusercontent.com/11967660/152416883-10b0a574-9c82-41a1-9e60-906db620cb75.gif) Notice that these datapoints have the same time but show only 1 value (not 2 values) in the key because it's treating them as separate time points. > Here is an example, these two series have snapshots as follows: "2022-02-03T19:34:22Z" and "2022-02-03T19:34:21Z"
non_main
ui bug small time interval differences fail to make key show both values on hover notice that these datapoints have the same time but show only value not values in the key because it s treating them as separate time points here is an example these two series have snapshots as follows and
0
1,675
6,574,105,303
IssuesEvent
2017-09-11 11:30:38
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
ios_command: Weird stdout & missing results[].cli_command field when a pipe is used
affects_2.3 bug_report networking waiting_on_maintainer
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ios_command but the issue might be caused by another module such as "include_role" for instance. ##### ANSIBLE VERSION ``` ansible --version 2.3.0 (commit 20161123.089ffae) config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION inventory = ./hosts gathering = explicit roles_path = /home/actionmystique/Program-Files/Ubuntu/Ansible/git-Ansible/Roles/roles private_role_vars = yes log_path = /var/log/ansible.log fact_caching = redis fact_caching_timeout = 86400 retry_files_enabled = False ##### OS / ENVIRONMENT - **Local host**: Ubuntu 16.10 4.8 - **Target nodes**: IOSv_L2 15.2(4.0.55)E ##### SUMMARY This issue happens only when using a '|' pipe within a show command ('which is legit on Cisco CLI). No such weird stdout happens when the same command is used directly on the device CLI. Also, the results[].cli_command field is always present when the show command does not include a pipe. We can verify with results[].invocation.module_args.commands[0 that the right show command is sent to the device. ##### STEPS TO REPRODUCE roles/ios_pull_tables/tasks/main.yml ``` ... - name: Including the right module include: "PACL_Table.yml" ``` roles/ios_pull_tables/tasks/PACL_Table.yml ``` - name: Fetching interfaces facts from the remote node ios_facts: gather_subset: interfaces provider: "{{ connections.ssh }}" register: facts - name: Fetching PACL_Table on all L2 interfaces from the remote node ios_command: provider: "{{ connections.ssh }}" commands: - "show run interface {{ net_item.key }} | include ^interface|access-group" with_dict: "{{ ansible_net_interfaces }}" when: net_item.value.ipv4.address is not defined loop_control: loop_var: net_item register: table - name: Saving the fetched table locally include_role: name: save_table ``` roles/save_table/tasks/main.yml (with item=PACL_Table) ``` - name: Printing the returned table(s) debug: var=table.results ... - name: Saving "{{ item }}" into local file blockinfile: dest: "{{ dest_file }}" create: yes block: '{{ stdout_item.stdout[0] }}' marker: "<--- {mark} {{item}} fetched with {{ stdout_item.cli_command }} --->" insertafter: EOF with_items: "{{ table.results }}" loop_control: loop_var: stdout_item ignore_errors: yes ``` ##### EXPECTED RESULTS The results[].stdout[0] should contain the result of the previous show command. The results[].cli_command field should be accessed without issue allowing "PACL_Table" to be saved without error in this example ##### ACTUAL RESULTS ``` ... TASK [save_table : Printing the returned table(s)] ************************************************************************************************************************** ok: [IOSv_L2_10] => { "table.results": [ { "_ansible_item_label": { "key": "GigabitEthernet1/2", "value": { "bandwidth": 1000000, "description": "Connected to [u'IOSv_Leaf_16.actionmystique.net'] on its port [u'Gi0/0']", "duplex": "Full", "ipv4": null, "lineprotocol": "up (connected) ", "macaddress": "0036.2586.7e06", "mediatype": "unknown media type", "mtu": 1500, "operstatus": "up", "type": "iGbE" } }, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "invocation": { "module_args": { "auth_pass": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "authorize": true, "commands": [ "show run interface GigabitEthernet1/2 | include ^interface|access-group" ], "host": "172.21.100.210", "interval": 1, "match": "all", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "port": 22, "provider": { "auth_pass": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "authorize": true, "host": "172.21.100.210", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "port": 22, "ssh_keyfile": "~/.ssh/id_rsa", "timeout": 10, "transport": "cli", "username": "admin", "version": 2 }, "retries": 10, "ssh_keyfile": "/root/.ssh/id_rsa", "timeout": 10, "transport": "cli", "use_ssl": true, "username": "admin", "validate_certs": true, "wait_for": null }, "module_name": "ios_command" }, "net_item": { "key": "GigabitEthernet1/2", "value": { "bandwidth": 1000000, "description": "Connected to [u'IOSv_Leaf_16.actionmystique.net'] on its port [u'Gi0/0']", "duplex": "Full", "ipv4": null, "lineprotocol": "up (connected) ", "macaddress": "0036.2586.7e06", "mediatype": "unknown media type", "mtu": 1500, "operstatus": "up", "type": "iGbE" } }, "stdout": [ "show run interface GigabitEthernet1/2 | include ^interface|access-$terface GigabitEthernet1/2 | include ^interface|access-g roup\ninterface GigabitEthernet1/2" ], "stdout_lines": [ [ "show run interface GigabitEthernet1/2 | include ^interface|access-$terface GigabitEthernet1/2 | include ^interface|access-g roup", "interface GigabitEthernet1/2" ] ], "warnings": [] }, ... TASK [save_table : Saving "PACL_Table" into local file] ********************************************************************************************************************* fatal: [IOSv_L2_10]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'dict object' has no attribute 'cli_command'\n\nThe error appears to have been in '/home/actionmystique/Program-Files/Ubuntu/Ansible/Roles/roles/save_table/tasks/main.yml': line 80, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Saving \"{{ item }}\" into local file\n ^ here\nWe could be wrong, but this one looks like it might be an issue with\nmissing quotes. Always quote template expression brackets when they\nstart a value. For instance:\n\n with_items:\n - {{ foo }}\n\nShould be written as:\n\n with_items:\n - \"{{ foo }}\"\n"} ``` We can see that: - **results[].stdout[0] is very strange**: it contains the command which is repeated & twisted several times, without any result. - **'dict object' has no attribute 'cli_command'** Despite a correct stdout on the device CLI: ``` IOSv_L2_10#show run interface G1/2 | include ^interface|access-group interface GigabitEthernet1/2 ip access-group ip_acl in mac access-group mac_acl in ```
True
ios_command: Weird stdout & missing results[].cli_command field when a pipe is used - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ios_command but the issue might be caused by another module such as "include_role" for instance. ##### ANSIBLE VERSION ``` ansible --version 2.3.0 (commit 20161123.089ffae) config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION inventory = ./hosts gathering = explicit roles_path = /home/actionmystique/Program-Files/Ubuntu/Ansible/git-Ansible/Roles/roles private_role_vars = yes log_path = /var/log/ansible.log fact_caching = redis fact_caching_timeout = 86400 retry_files_enabled = False ##### OS / ENVIRONMENT - **Local host**: Ubuntu 16.10 4.8 - **Target nodes**: IOSv_L2 15.2(4.0.55)E ##### SUMMARY This issue happens only when using a '|' pipe within a show command ('which is legit on Cisco CLI). No such weird stdout happens when the same command is used directly on the device CLI. Also, the results[].cli_command field is always present when the show command does not include a pipe. We can verify with results[].invocation.module_args.commands[0 that the right show command is sent to the device. ##### STEPS TO REPRODUCE roles/ios_pull_tables/tasks/main.yml ``` ... - name: Including the right module include: "PACL_Table.yml" ``` roles/ios_pull_tables/tasks/PACL_Table.yml ``` - name: Fetching interfaces facts from the remote node ios_facts: gather_subset: interfaces provider: "{{ connections.ssh }}" register: facts - name: Fetching PACL_Table on all L2 interfaces from the remote node ios_command: provider: "{{ connections.ssh }}" commands: - "show run interface {{ net_item.key }} | include ^interface|access-group" with_dict: "{{ ansible_net_interfaces }}" when: net_item.value.ipv4.address is not defined loop_control: loop_var: net_item register: table - name: Saving the fetched table locally include_role: name: save_table ``` roles/save_table/tasks/main.yml (with item=PACL_Table) ``` - name: Printing the returned table(s) debug: var=table.results ... - name: Saving "{{ item }}" into local file blockinfile: dest: "{{ dest_file }}" create: yes block: '{{ stdout_item.stdout[0] }}' marker: "<--- {mark} {{item}} fetched with {{ stdout_item.cli_command }} --->" insertafter: EOF with_items: "{{ table.results }}" loop_control: loop_var: stdout_item ignore_errors: yes ``` ##### EXPECTED RESULTS The results[].stdout[0] should contain the result of the previous show command. The results[].cli_command field should be accessed without issue allowing "PACL_Table" to be saved without error in this example ##### ACTUAL RESULTS ``` ... TASK [save_table : Printing the returned table(s)] ************************************************************************************************************************** ok: [IOSv_L2_10] => { "table.results": [ { "_ansible_item_label": { "key": "GigabitEthernet1/2", "value": { "bandwidth": 1000000, "description": "Connected to [u'IOSv_Leaf_16.actionmystique.net'] on its port [u'Gi0/0']", "duplex": "Full", "ipv4": null, "lineprotocol": "up (connected) ", "macaddress": "0036.2586.7e06", "mediatype": "unknown media type", "mtu": 1500, "operstatus": "up", "type": "iGbE" } }, "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "invocation": { "module_args": { "auth_pass": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "authorize": true, "commands": [ "show run interface GigabitEthernet1/2 | include ^interface|access-group" ], "host": "172.21.100.210", "interval": 1, "match": "all", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "port": 22, "provider": { "auth_pass": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "authorize": true, "host": "172.21.100.210", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "port": 22, "ssh_keyfile": "~/.ssh/id_rsa", "timeout": 10, "transport": "cli", "username": "admin", "version": 2 }, "retries": 10, "ssh_keyfile": "/root/.ssh/id_rsa", "timeout": 10, "transport": "cli", "use_ssl": true, "username": "admin", "validate_certs": true, "wait_for": null }, "module_name": "ios_command" }, "net_item": { "key": "GigabitEthernet1/2", "value": { "bandwidth": 1000000, "description": "Connected to [u'IOSv_Leaf_16.actionmystique.net'] on its port [u'Gi0/0']", "duplex": "Full", "ipv4": null, "lineprotocol": "up (connected) ", "macaddress": "0036.2586.7e06", "mediatype": "unknown media type", "mtu": 1500, "operstatus": "up", "type": "iGbE" } }, "stdout": [ "show run interface GigabitEthernet1/2 | include ^interface|access-$terface GigabitEthernet1/2 | include ^interface|access-g roup\ninterface GigabitEthernet1/2" ], "stdout_lines": [ [ "show run interface GigabitEthernet1/2 | include ^interface|access-$terface GigabitEthernet1/2 | include ^interface|access-g roup", "interface GigabitEthernet1/2" ] ], "warnings": [] }, ... TASK [save_table : Saving "PACL_Table" into local file] ********************************************************************************************************************* fatal: [IOSv_L2_10]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'dict object' has no attribute 'cli_command'\n\nThe error appears to have been in '/home/actionmystique/Program-Files/Ubuntu/Ansible/Roles/roles/save_table/tasks/main.yml': line 80, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Saving \"{{ item }}\" into local file\n ^ here\nWe could be wrong, but this one looks like it might be an issue with\nmissing quotes. Always quote template expression brackets when they\nstart a value. For instance:\n\n with_items:\n - {{ foo }}\n\nShould be written as:\n\n with_items:\n - \"{{ foo }}\"\n"} ``` We can see that: - **results[].stdout[0] is very strange**: it contains the command which is repeated & twisted several times, without any result. - **'dict object' has no attribute 'cli_command'** Despite a correct stdout on the device CLI: ``` IOSv_L2_10#show run interface G1/2 | include ^interface|access-group interface GigabitEthernet1/2 ip access-group ip_acl in mac access-group mac_acl in ```
main
ios command weird stdout missing results cli command field when a pipe is used issue type bug report component name ios command but the issue might be caused by another module such as include role for instance ansible version ansible version commit config file etc ansible ansible cfg configured module search path default w o overrides configuration inventory hosts gathering explicit roles path home actionmystique program files ubuntu ansible git ansible roles roles private role vars yes log path var log ansible log fact caching redis fact caching timeout retry files enabled false os environment local host ubuntu target nodes iosv e summary this issue happens only when using a pipe within a show command which is legit on cisco cli no such weird stdout happens when the same command is used directly on the device cli also the results cli command field is always present when the show command does not include a pipe we can verify with results invocation module args commands that the right show command is sent to the device steps to reproduce roles ios pull tables tasks main yml name including the right module include pacl table yml roles ios pull tables tasks pacl table yml name fetching interfaces facts from the remote node ios facts gather subset interfaces provider connections ssh register facts name fetching pacl table on all interfaces from the remote node ios command provider connections ssh commands show run interface net item key include interface access group with dict ansible net interfaces when net item value address is not defined loop control loop var net item register table name saving the fetched table locally include role name save table roles save table tasks main yml with item pacl table name printing the returned table s debug var table results name saving item into local file blockinfile dest dest file create yes block stdout item stdout marker insertafter eof with items table results loop control loop var stdout item ignore errors yes expected results the results stdout should contain the result of the previous show command the results cli command field should be accessed without issue allowing pacl table to be saved without error in this example actual results task ok table results ansible item label key value bandwidth description connected to on its port duplex full null lineprotocol up connected macaddress mediatype unknown media type mtu operstatus up type igbe ansible item result true ansible no log false ansible parsed true changed false invocation module args auth pass value specified in no log parameter authorize true commands show run interface include interface access group host interval match all password value specified in no log parameter port provider auth pass value specified in no log parameter authorize true host password value specified in no log parameter port ssh keyfile ssh id rsa timeout transport cli username admin version retries ssh keyfile root ssh id rsa timeout transport cli use ssl true username admin validate certs true wait for null module name ios command net item key value bandwidth description connected to on its port duplex full null lineprotocol up connected macaddress mediatype unknown media type mtu operstatus up type igbe stdout show run interface include interface access terface include interface access g roup ninterface stdout lines show run interface include interface access terface include interface access g roup interface warnings task fatal failed failed true msg the field args has an invalid value which appears to include a variable that is undefined the error was dict object has no attribute cli command n nthe error appears to have been in home actionmystique program files ubuntu ansible roles roles save table tasks main yml line column but may nbe elsewhere in the file depending on the exact syntax problem n nthe offending line appears to be n n n name saving item into local file n here nwe could be wrong but this one looks like it might be an issue with nmissing quotes always quote template expression brackets when they nstart a value for instance n n with items n foo n nshould be written as n n with items n foo n we can see that results stdout is very strange it contains the command which is repeated twisted several times without any result dict object has no attribute cli command despite a correct stdout on the device cli iosv show run interface include interface access group interface ip access group ip acl in mac access group mac acl in
1
5,200
26,437,506,604
IssuesEvent
2023-01-15 15:20:15
python-restx/flask-restx
https://api.github.com/repos/python-restx/flask-restx
opened
Flask-RestX v1.0.5 release & breathing some new life into this project!
maintainers
Afternoon `flask-restx` community! With thanks to @jdieter and @ziirish all the compatibility warnings with `flask` 2.3 should be gone in `flask-restx` v1.0.5 which I have just released (hopefully :crossed_fingers:)! If not, please open an issue and let me know! Aside from keeping up to date with upstream changes in `flask` and general maintenance, I would like to breathe some new life into this project. With that in mind, I would like to ask you, the community, who use this project, what your top priorities for bug fixes and new features or changes are. Big or small, things that don't work or things that you just don't like. I can't promise I will have time to fix or implement all of them, but I do promise to read every suggestion and engage with you. I will also make an effort to review and integrate PRs, so if anyone wants to become a contributor, I would encourage you to do so! Anything related to model redesign can go over on #59, but for anything else, please comment below.
True
Flask-RestX v1.0.5 release & breathing some new life into this project! - Afternoon `flask-restx` community! With thanks to @jdieter and @ziirish all the compatibility warnings with `flask` 2.3 should be gone in `flask-restx` v1.0.5 which I have just released (hopefully :crossed_fingers:)! If not, please open an issue and let me know! Aside from keeping up to date with upstream changes in `flask` and general maintenance, I would like to breathe some new life into this project. With that in mind, I would like to ask you, the community, who use this project, what your top priorities for bug fixes and new features or changes are. Big or small, things that don't work or things that you just don't like. I can't promise I will have time to fix or implement all of them, but I do promise to read every suggestion and engage with you. I will also make an effort to review and integrate PRs, so if anyone wants to become a contributor, I would encourage you to do so! Anything related to model redesign can go over on #59, but for anything else, please comment below.
main
flask restx release breathing some new life into this project afternoon flask restx community with thanks to jdieter and ziirish all the compatibility warnings with flask should be gone in flask restx which i have just released hopefully crossed fingers if not please open an issue and let me know aside from keeping up to date with upstream changes in flask and general maintenance i would like to breathe some new life into this project with that in mind i would like to ask you the community who use this project what your top priorities for bug fixes and new features or changes are big or small things that don t work or things that you just don t like i can t promise i will have time to fix or implement all of them but i do promise to read every suggestion and engage with you i will also make an effort to review and integrate prs so if anyone wants to become a contributor i would encourage you to do so anything related to model redesign can go over on but for anything else please comment below
1
41,134
16,624,761,774
IssuesEvent
2021-06-03 08:10:41
EBISPOT/goci
https://api.github.com/repos/EBISPOT/goci
closed
Scripts to convert VCF to TSV format (GWAS and OTAR)
SumStats Service Type: Task
the openGWAS/MRBase teams have shared >20 cancer GWAS summary statistics in VCF format. Scripts to convert VCFs to TSVs format are needed to process these datasets for integration into the GWAS Catalog and Open Targets Genetics databases. Example VCF file preanalysis/buniello/ieu-b-94.vcf
1.0
Scripts to convert VCF to TSV format (GWAS and OTAR) - the openGWAS/MRBase teams have shared >20 cancer GWAS summary statistics in VCF format. Scripts to convert VCFs to TSVs format are needed to process these datasets for integration into the GWAS Catalog and Open Targets Genetics databases. Example VCF file preanalysis/buniello/ieu-b-94.vcf
non_main
scripts to convert vcf to tsv format gwas and otar the opengwas mrbase teams have shared cancer gwas summary statistics in vcf format scripts to convert vcfs to tsvs format are needed to process these datasets for integration into the gwas catalog and open targets genetics databases example vcf file preanalysis buniello ieu b vcf
0
5,435
27,245,567,209
IssuesEvent
2023-02-22 01:31:37
NIAEFEUP/website-niaefeup-backend
https://api.github.com/repos/NIAEFEUP/website-niaefeup-backend
closed
lint: Create configuration file
maintainability
Currently, we're using the default configuration of ktlint in Github's action and most of the team members are also using the ktlint plugin in IntelliJ. However, the default of the plugin is different from what is running in the actions. We should have a consistent configuration in the repo that's used by the plugin and in the actions.
True
lint: Create configuration file - Currently, we're using the default configuration of ktlint in Github's action and most of the team members are also using the ktlint plugin in IntelliJ. However, the default of the plugin is different from what is running in the actions. We should have a consistent configuration in the repo that's used by the plugin and in the actions.
main
lint create configuration file currently we re using the default configuration of ktlint in github s action and most of the team members are also using the ktlint plugin in intellij however the default of the plugin is different from what is running in the actions we should have a consistent configuration in the repo that s used by the plugin and in the actions
1
58,926
11,911,642,753
IssuesEvent
2020-03-31 08:58:04
ModellingWebLab/weblab-fc
https://api.github.com/repos/ModellingWebLab/weblab-fc
closed
Cut dependency on pycml
code-and-design install
At present running an experiment requires pycml to generate the manipulated model. This will change to use cellmlmanip & fccodegen. - [x] As a first step, put a manually generated (using fccodegen) model in the tests and see if we can just simulate it. - [x] Then call this automatically rather than calling pycml. - [x] Then iteratively implement model manipulations. See #23.
1.0
Cut dependency on pycml - At present running an experiment requires pycml to generate the manipulated model. This will change to use cellmlmanip & fccodegen. - [x] As a first step, put a manually generated (using fccodegen) model in the tests and see if we can just simulate it. - [x] Then call this automatically rather than calling pycml. - [x] Then iteratively implement model manipulations. See #23.
non_main
cut dependency on pycml at present running an experiment requires pycml to generate the manipulated model this will change to use cellmlmanip fccodegen as a first step put a manually generated using fccodegen model in the tests and see if we can just simulate it then call this automatically rather than calling pycml then iteratively implement model manipulations see
0
229,628
17,574,027,231
IssuesEvent
2021-08-15 08:44:07
Pressio/pressio
https://api.github.com/repos/Pressio/pressio
closed
docs: make the html page wider
documentation
@kennychowdhary can you please look at the css template and figure out how to make the content in the html doc pages a bit wider? the view is very small horizontally now. it would help to make the pages wider, but not too much
1.0
docs: make the html page wider - @kennychowdhary can you please look at the css template and figure out how to make the content in the html doc pages a bit wider? the view is very small horizontally now. it would help to make the pages wider, but not too much
non_main
docs make the html page wider kennychowdhary can you please look at the css template and figure out how to make the content in the html doc pages a bit wider the view is very small horizontally now it would help to make the pages wider but not too much
0
1,706
6,574,416,453
IssuesEvent
2017-09-11 12:49:13
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
mysql_user support for FUNCTION and PROCEDURE privileges
affects_2.2 feature_idea waiting_on_maintainer
##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME mysql_user ##### ANSIBLE VERSION ``` ansible 2.2.0.0 ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY Currently only `TABLE` privileges can be manipulated with `mysql_user` Granting execute privileges on a mysql `FUNCTION` requires an SQL statement of the form ``` GRANT EXECUTE ON FUNCTION dbname.function_name TO 'user'; ``` Unfortunately if the `FUNCTION` keyword is included in `mysql_user` modules's `priv` parameter it is not recognized as a valid privilege level. Object types of `FUNCTION` and `PROCEDURE` are supported by `mysql` (http://dev.mysql.com/doc/refman/5.7/en/grant.html) and it would be nice if the `priv` parameter supported specifying 'object_type', so that task like the following could be executed ``` - mysql_user: user: db_user priv: FUNCTION dbname.function_name:EXECUTE state: present ```
True
mysql_user support for FUNCTION and PROCEDURE privileges - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME mysql_user ##### ANSIBLE VERSION ``` ansible 2.2.0.0 ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### SUMMARY Currently only `TABLE` privileges can be manipulated with `mysql_user` Granting execute privileges on a mysql `FUNCTION` requires an SQL statement of the form ``` GRANT EXECUTE ON FUNCTION dbname.function_name TO 'user'; ``` Unfortunately if the `FUNCTION` keyword is included in `mysql_user` modules's `priv` parameter it is not recognized as a valid privilege level. Object types of `FUNCTION` and `PROCEDURE` are supported by `mysql` (http://dev.mysql.com/doc/refman/5.7/en/grant.html) and it would be nice if the `priv` parameter supported specifying 'object_type', so that task like the following could be executed ``` - mysql_user: user: db_user priv: FUNCTION dbname.function_name:EXECUTE state: present ```
main
mysql user support for function and procedure privileges issue type feature idea component name mysql user ansible version ansible configuration n a os environment n a summary currently only table privileges can be manipulated with mysql user granting execute privileges on a mysql function requires an sql statement of the form grant execute on function dbname function name to user unfortunately if the function keyword is included in mysql user modules s priv parameter it is not recognized as a valid privilege level object types of function and procedure are supported by mysql and it would be nice if the priv parameter supported specifying object type so that task like the following could be executed mysql user user db user priv function dbname function name execute state present
1
3,107
11,868,505,198
IssuesEvent
2020-03-26 09:19:04
chocolatey-community/chocolatey-package-requests
https://api.github.com/repos/chocolatey-community/chocolatey-package-requests
closed
RFM - Enterprise Architect Viewer
Status: Available For Maintainer(s)
## Current Maintainer - [x] I am the maintainer of the package and wish to pass it to someone else; ## Checklist - [x] Issue title starts with 'RFM - ' ## Existing Package Details Package URL: https://chocolatey.org/packages/ealite Package source URL: https://github.com/abejenaru/chocolatey-packages/tree/master/automatic/ealite
True
RFM - Enterprise Architect Viewer - ## Current Maintainer - [x] I am the maintainer of the package and wish to pass it to someone else; ## Checklist - [x] Issue title starts with 'RFM - ' ## Existing Package Details Package URL: https://chocolatey.org/packages/ealite Package source URL: https://github.com/abejenaru/chocolatey-packages/tree/master/automatic/ealite
main
rfm enterprise architect viewer current maintainer i am the maintainer of the package and wish to pass it to someone else checklist issue title starts with rfm existing package details package url package source url
1
3,698
15,098,528,711
IssuesEvent
2021-02-07 23:06:58
afgalvan/create-app
https://api.github.com/repos/afgalvan/create-app
closed
Fix 6 Maintainability, 1 Style issues in multiple files
maintainability style
[CodeFactor](https://www.codefactor.io/repository/github/afgalvan/create-app) found multiple issues last seen at 6a288e9a59a31ec0b1952f0ee969f2331bb14564: #### Use $(...) notation instead of legacy backticked `...`. [test\test.sh:94 ](https://www.codefactor.io/repository/github/afgalvan/create-app/source/main/test/test.sh#L94)[test\test.sh:93 ](https://www.codefactor.io/repository/github/afgalvan/create-app/source/main/test/test.sh#L93)[test\test.sh:85 ](https://www.codefactor.io/repository/github/afgalvan/create-app/source/main/test/test.sh#L85)[test\test.sh:84 ](https://www.codefactor.io/repository/github/afgalvan/create-app/source/main/test/test.sh#L84)[test\test.sh:75 ](https://www.codefactor.io/repository/github/afgalvan/create-app/source/main/test/test.sh#L75)[test\test.sh:74 ](https://www.codefactor.io/repository/github/afgalvan/create-app/source/main/test/test.sh#L74) #### Check exit code directly with e.g. &#39;if mycmd;&#39;, not indirectly with $?. [scripts\pre-commit.sh:6 ](https://www.codefactor.io/repository/github/afgalvan/create-app/source/main/scripts/pre-commit.sh#L6)
True
Fix 6 Maintainability, 1 Style issues in multiple files - [CodeFactor](https://www.codefactor.io/repository/github/afgalvan/create-app) found multiple issues last seen at 6a288e9a59a31ec0b1952f0ee969f2331bb14564: #### Use $(...) notation instead of legacy backticked `...`. [test\test.sh:94 ](https://www.codefactor.io/repository/github/afgalvan/create-app/source/main/test/test.sh#L94)[test\test.sh:93 ](https://www.codefactor.io/repository/github/afgalvan/create-app/source/main/test/test.sh#L93)[test\test.sh:85 ](https://www.codefactor.io/repository/github/afgalvan/create-app/source/main/test/test.sh#L85)[test\test.sh:84 ](https://www.codefactor.io/repository/github/afgalvan/create-app/source/main/test/test.sh#L84)[test\test.sh:75 ](https://www.codefactor.io/repository/github/afgalvan/create-app/source/main/test/test.sh#L75)[test\test.sh:74 ](https://www.codefactor.io/repository/github/afgalvan/create-app/source/main/test/test.sh#L74) #### Check exit code directly with e.g. 'if mycmd;', not indirectly with $?. [scripts\pre-commit.sh:6 ](https://www.codefactor.io/repository/github/afgalvan/create-app/source/main/scripts/pre-commit.sh#L6)
main
fix maintainability style issues in multiple files found multiple issues last seen at use notation instead of legacy backticked test test sh check exit code directly with e g if mycmd not indirectly with scripts pre commit sh
1
626
4,146,917,123
IssuesEvent
2016-06-15 03:14:51
Microsoft/DirectXTK
https://api.github.com/repos/Microsoft/DirectXTK
closed
Remove VS 2012 adapter code
maintainence
As part of dropping VS 2012 projects and Windows phone 8.0 support, can clean up the following code * Remove C4005 disable for ``stdint.h`` (workaround for bug with VS 2010 + Windows 7 SDK) * Remove C4481 disable for "override is an extension" (workaround for VS 2010 bug) * Remove ``DIRECTX_STD_CALLCONV`` std::function workaround for VS 2012 * Remove ``DIRECTX_CTOR_DEFAULT`` / ``DIRECTX_CTOR_DELETE`` macros and just use =default, =delete directly (VS 2013 or later supports this) * Remove DirectXMath 3.03 adapters for 3.06 constructs (workaround for Windows 8.0 SDK) * Make use of ``std::make_unique<>`` (C++14 draft feature supported in VS 2013) * Remove some guarded code patterns for Windows XP (i.e. functions that were added to Windows Vista) * Make consistent use of ``= {}`` to initialize memory to zero (C++11 brace init behavior fixed in VS 2013) * Remove legacy ``WCHAR`` Win32 type and use ``wchar_t`` * Remove guards around use of WIC (Windows phone 8.0 lacked WIC support)
True
Remove VS 2012 adapter code - As part of dropping VS 2012 projects and Windows phone 8.0 support, can clean up the following code * Remove C4005 disable for ``stdint.h`` (workaround for bug with VS 2010 + Windows 7 SDK) * Remove C4481 disable for "override is an extension" (workaround for VS 2010 bug) * Remove ``DIRECTX_STD_CALLCONV`` std::function workaround for VS 2012 * Remove ``DIRECTX_CTOR_DEFAULT`` / ``DIRECTX_CTOR_DELETE`` macros and just use =default, =delete directly (VS 2013 or later supports this) * Remove DirectXMath 3.03 adapters for 3.06 constructs (workaround for Windows 8.0 SDK) * Make use of ``std::make_unique<>`` (C++14 draft feature supported in VS 2013) * Remove some guarded code patterns for Windows XP (i.e. functions that were added to Windows Vista) * Make consistent use of ``= {}`` to initialize memory to zero (C++11 brace init behavior fixed in VS 2013) * Remove legacy ``WCHAR`` Win32 type and use ``wchar_t`` * Remove guards around use of WIC (Windows phone 8.0 lacked WIC support)
main
remove vs adapter code as part of dropping vs projects and windows phone support can clean up the following code remove disable for stdint h workaround for bug with vs windows sdk remove disable for override is an extension workaround for vs bug remove directx std callconv std function workaround for vs remove directx ctor default directx ctor delete macros and just use default delete directly vs or later supports this remove directxmath adapters for constructs workaround for windows sdk make use of std make unique c draft feature supported in vs remove some guarded code patterns for windows xp i e functions that were added to windows vista make consistent use of to initialize memory to zero c brace init behavior fixed in vs remove legacy wchar type and use wchar t remove guards around use of wic windows phone lacked wic support
1
302,867
22,909,298,108
IssuesEvent
2022-07-16 03:17:22
tunedin-ctrl/Hackathon22
https://api.github.com/repos/tunedin-ctrl/Hackathon22
opened
Kafka Admin client for making topics
documentation enhancement
# Kafka Admin - Makes topics that you can produce to and consume from - Wrapper for KafkaAdminClient class
1.0
Kafka Admin client for making topics - # Kafka Admin - Makes topics that you can produce to and consume from - Wrapper for KafkaAdminClient class
non_main
kafka admin client for making topics kafka admin makes topics that you can produce to and consume from wrapper for kafkaadminclient class
0
1,496
6,478,927,151
IssuesEvent
2017-08-18 09:15:26
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
azure_rm_virualmashine issue
affects_2.1 azure bug_report cloud waiting_on_maintainer
<!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report - Feature Idea - Documentation Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> azure_rm_virtualmachine module ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible-2.1.0.0-1.fc23.noarch ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> Python 2.7.11 Modules: azure (2.0.0rc5) ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> fedora 23 ##### SUMMARY <!--- Explain the problem briefly --> ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> <!--- Paste example playbooks or commands between quotes below --> ``` --- - hosts: localhost connection: local gather_facts: false become: false vars_files: # - environments/Azure/azure_credentials_encrypted.yml - ../../inventory/environments/Azure/azure_credentials_encrypted_temp_passwd.yml vars: roles: - create_azure_vm And roles/create_azure_vm/main.yml - name: Create VM with defaults azure_rm_virtualmachine: resource_group: Testing name: testvm10 admin_username: test_user admin_password: test_vm image: offer: CentOS publisher: OpenLogic sku: '7.1' version: latest ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> creation of VM. ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> PLAYBOOK: provision_azure_playbook.yml ***************************************** 1 plays in provision_azure_playbook.yml PLAY [localhost] *************************************************************** TASK [create_azure_vm : Create VM with defaults] ******************************* task path: /ansible/ansible_home/roles/create_azure_vm/tasks/main.yml:3 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: snemirovsky <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "`echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045`" && echo ansible-tmp-1470326423.51-208881287834045="`echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045`" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpiYFkuQ TO /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine; rm -rf "/home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File "/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py", line 1284, in <module> main() File "/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py", line 1281, in main AzureRMVirtualMachine() File "/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py", line 487, in **init** for key in VirtualMachineSizeTypes: NameError: global name 'VirtualMachineSizeTypes' is not defined fatal: [localhost]: FAILED! 
=> {"changed": false, "failed": true, "invocation": {"module_name": "azure_rm_virtualmachine"}, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\", line 1284, in <module>\n main()\n File \"/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\", line 1281, in main\n AzureRMVirtualMachine()\n File \"/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\", line 487, in **init**\n for key in VirtualMachineSizeTypes:\nNameError: global name 'VirtualMachineSizeTypes' is not defined\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @provision_azure_playbook.retry PLAY RECAP ********************************************************************* localhost : ok=0 changed=0 unreachable=0 failed=1 <!--- Paste verbatim command output between quotes below --> ``` PLAYBOOK: provision_azure_playbook.yml ***************************************** 1 plays in provision_azure_playbook.yml PLAY [localhost] *************************************************************** TASK [create_azure_vm : Create VM with defaults] ******************************* task path: /ansible/ansible_home/roles/create_azure_vm/tasks/main.yml:3 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: snemirovsky <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045 `" && echo ansible-tmp-1470326423.51-208881287834045="` echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045 `" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpiYFkuQ TO /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine; rm -rf "/home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File "/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py", line 1284, in <module> main() File "/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py", line 1281, in main AzureRMVirtualMachine() File "/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py", line 487, in __init__ for key in VirtualMachineSizeTypes: NameError: global name 'VirtualMachineSizeTypes' is not defined fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "azure_rm_virtualmachine"}, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\", line 1284, in <module>\n main()\n File \"/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\", line 1281, in main\n AzureRMVirtualMachine()\n File \"/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\", line 487, in __init__\n for key in VirtualMachineSizeTypes:\nNameError: global name 'VirtualMachineSizeTypes' is not defined\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @provision_azure_playbook.retry PLAY RECAP ********************************************************************* localhost : ok=0 changed=0 unreachable=0 failed=1 ```
True
azure_rm_virualmashine issue - <!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report - Feature Idea - Documentation Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> azure_rm_virtualmachine module ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible-2.1.0.0-1.fc23.noarch ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> Python 2.7.11 Modules: azure (2.0.0rc5) ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> fedora 23 ##### SUMMARY <!--- Explain the problem briefly --> ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> <!--- Paste example playbooks or commands between quotes below --> ``` --- - hosts: localhost connection: local gather_facts: false become: false vars_files: # - environments/Azure/azure_credentials_encrypted.yml - ../../inventory/environments/Azure/azure_credentials_encrypted_temp_passwd.yml vars: roles: - create_azure_vm And roles/create_azure_vm/main.yml - name: Create VM with defaults azure_rm_virtualmachine: resource_group: Testing name: testvm10 admin_username: test_user admin_password: test_vm image: offer: CentOS publisher: OpenLogic sku: '7.1' version: latest ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> creation of VM. ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> PLAYBOOK: provision_azure_playbook.yml ***************************************** 1 plays in provision_azure_playbook.yml PLAY [localhost] *************************************************************** TASK [create_azure_vm : Create VM with defaults] ******************************* task path: /ansible/ansible_home/roles/create_azure_vm/tasks/main.yml:3 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: snemirovsky <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "`echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045`" && echo ansible-tmp-1470326423.51-208881287834045="`echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045`" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpiYFkuQ TO /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine; rm -rf "/home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File "/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py", line 1284, in <module> main() File "/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py", line 1281, in main AzureRMVirtualMachine() File "/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py", line 487, in **init** for key in VirtualMachineSizeTypes: NameError: global name 'VirtualMachineSizeTypes' is not defined fatal: [localhost]: FAILED! 
=> {"changed": false, "failed": true, "invocation": {"module_name": "azure_rm_virtualmachine"}, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\", line 1284, in <module>\n main()\n File \"/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\", line 1281, in main\n AzureRMVirtualMachine()\n File \"/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\", line 487, in **init**\n for key in VirtualMachineSizeTypes:\nNameError: global name 'VirtualMachineSizeTypes' is not defined\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @provision_azure_playbook.retry PLAY RECAP ********************************************************************* localhost : ok=0 changed=0 unreachable=0 failed=1 <!--- Paste verbatim command output between quotes below --> ``` PLAYBOOK: provision_azure_playbook.yml ***************************************** 1 plays in provision_azure_playbook.yml PLAY [localhost] *************************************************************** TASK [create_azure_vm : Create VM with defaults] ******************************* task path: /ansible/ansible_home/roles/create_azure_vm/tasks/main.yml:3 <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: snemirovsky <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045 `" && echo ansible-tmp-1470326423.51-208881287834045="` echo $HOME/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045 `" ) && sleep 0' <127.0.0.1> PUT /tmp/tmpiYFkuQ TO /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine <127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/azure_rm_virtualmachine; rm -rf "/home/snemirovsky/.ansible/tmp/ansible-tmp-1470326423.51-208881287834045/" > /dev/null 2>&1 && sleep 0' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File "/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py", line 1284, in <module> main() File "/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py", line 1281, in main AzureRMVirtualMachine() File "/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py", line 487, in __init__ for key in VirtualMachineSizeTypes: NameError: global name 'VirtualMachineSizeTypes' is not defined fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "azure_rm_virtualmachine"}, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\", line 1284, in <module>\n main()\n File \"/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\", line 1281, in main\n AzureRMVirtualMachine()\n File \"/tmp/ansible_Xf2rTh/ansible_module_azure_rm_virtualmachine.py\", line 487, in __init__\n for key in VirtualMachineSizeTypes:\nNameError: global name 'VirtualMachineSizeTypes' is not defined\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @provision_azure_playbook.retry PLAY RECAP ********************************************************************* localhost : ok=0 changed=0 unreachable=0 failed=1 ```
main
azure rm virualmashine issue issue type bug report feature idea documentation report component name azure rm virtualmachine module ansible version ansible noarch configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables python modules azure os environment mention the os you are running ansible from and the os you are managing or say โ€œn aโ€ for anything that is not platform specific fedora summary steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used hosts localhost connection local gather facts false become false vars files environments azure azure credentials encrypted yml inventory environments azure azure credentials encrypted temp passwd yml vars roles create azure vm and roles create azure vm main yml name create vm with defaults azure rm virtualmachine resource group testing name admin username test user admin password test vm image offer centos publisher openlogic sku version latest expected results creatiion of vm actual results playbook provision azure playbook yml plays in provision azure playbook yml play task task path ansible ansible home roles create azure vm tasks main yml establish local connection for user snemirovsky exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpiyfkuq to home snemirovsky ansible tmp ansible tmp azure rm virtualmachine exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python home snemirovsky ansible tmp ansible tmp azure rm virtualmachine rm rf home snemirovsky ansible tmp ansible tmp dev null sleep an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible ansible module azure rm virtualmachine py line in main file tmp ansible ansible module azure rm virtualmachine py line in main azurermvirtualmachine file tmp ansible ansible module azure rm virtualmachine py line in init for key in virtualmachinesizetypes nameerror global name virtualmachinesizetypes is not defined fatal failed changed false failed true invocation module name azure rm virtualmachine module stderr traceback most recent call last n file tmp ansible ansible module azure rm virtualmachine py line in n main n file tmp ansible ansible module azure rm virtualmachine py line in main n azurermvirtualmachine n file tmp ansible ansible module azure rm virtualmachine py line in init n for key in virtualmachinesizetypes nnameerror global name virtualmachinesizetypes is not defined n module stdout msg module failure parsed false no more hosts left to retry use limit provision azure playbook retry play recap localhost ok changed unreachable failed playbook provision azure playbook yml plays in provision azure playbook yml play task task path ansible ansible home roles create azure vm tasks main yml establish local connection for user snemirovsky exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpiyfkuq to home snemirovsky ansible tmp ansible tmp azure rm virtualmachine exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python home snemirovsky ansible tmp ansible tmp azure rm virtualmachine rm rf home snemirovsky ansible tmp ansible tmp dev null sleep an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible ansible module azure rm virtualmachine 
py line in main file tmp ansible ansible module azure rm virtualmachine py line in main azurermvirtualmachine file tmp ansible ansible module azure rm virtualmachine py line in init for key in virtualmachinesizetypes nameerror global name virtualmachinesizetypes is not defined fatal failed changed false failed true invocation module name azure rm virtualmachine module stderr traceback most recent call last n file tmp ansible ansible module azure rm virtualmachine py line in n main n file tmp ansible ansible module azure rm virtualmachine py line in main n azurermvirtualmachine n file tmp ansible ansible module azure rm virtualmachine py line in init n for key in virtualmachinesizetypes nnameerror global name virtualmachinesizetypes is not defined n module stdout msg module failure parsed false no more hosts left to retry use limit provision azure playbook retry play recap localhost ok changed unreachable failed
1
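For the azure_rm_virtualmachine record above, the NameError suggests the module's guarded import of the SDK models failed, leaving VirtualMachineSizeTypes undefined at runtime. A direct import probe on the control machine can confirm whether the installed azure package exposes the enum at all; the import path below is an assumption based on the 2.0.0rcX SDK layout, not something stated in the issue.

```python
# Diagnostic sketch: verify that the installed azure SDK exposes the enum the
# Ansible module expects. The import path is an assumption for the 2.0.0rcX SDKs.
try:
    from azure.mgmt.compute.models import VirtualMachineSizeTypes
except ImportError as exc:
    print("VirtualMachineSizeTypes is not importable - SDK/module mismatch likely:", exc)
else:
    print("VirtualMachineSizeTypes imported fine; the failure is probably elsewhere.")
```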
1,696
6,574,217,671
IssuesEvent
2017-09-11 12:00:58
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
nxos_command fails with CLI Error when using the src option
affects_2.3 bug_report networking waiting_on_maintainer
<!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> nxos_config ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.3.0 config file = /etc/ansible/ansible.cfg configured module search path = ['/usr/share/my_modules/'] ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> Using defaults ##### OS / ENVIRONMENT <!--- --> Red Hat Enterprise Linux Server release 7.3 (Maipo) ##### SUMMARY <!--- Explain the problem briefly --> Sending configuration to Nexus 9K fails when using the 'src' option and NXAPI as transport. The same configuration works fine when using CLI as the transport. The command 'feature nxapi' has already been turned on manually on the target device. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> <!--- Paste example playbooks or commands between quotes below --> ``` [admin@localhost testing]$ cat nxos_config_test.yml --- - hosts: evpn_leaf vars: nxapi: host: "{{ inventory_hostname }}" username: admin password: cisco transport: nxapi tasks: - name: Send configuration commands from file to switch nxos_config: provider: "{{ nxapi }}" src: config2.txt register: result Contents of config2.txt: hostname EVPN-SPINE1 ! feature ospf feature pim feature lldp feature bgp feature nv overlay nv overlay evpn ! interface loopback0 ip address 10.100.100.1/32 ip router ospf 1 area 0.0.0.0 ip pim sparse-mode ! router ospf 1 router-id 10.100.100.1 area 0.0.0.0 authentication message-digest log-adjacency-changes auto-cost reference-bandwidth 1000 Gbps ! ip pim rp-address 10.100.100.254 group-list 224.0.0.0/4 ip pim ssm range 232.0.0.0/8 ! router bgp 65000 router-id 10.100.100.1 address-family ipv4 unicast address-family l2vpn evpn retain route-target all template peer vtep-peer remote-as 65000 update-source loopback0 address-family ipv4 unicast send-community both route-reflector-client address-family l2vpn evpn send-community both route-reflector-client ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> Configuration should have been applied to the target device. ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> No changes are made to the target device. A "CLI execution error" is reported. 
<!--- Paste verbatim command output between quotes below --> ``` [admin@localhost testing]$ ansible-playbook nxos_config_test.yml -vvvv Using /home/admin/Ansible/testing/ansible.cfg as config file Loading callback plugin default of type stdout, v2.0 from /usr/lib/python2.7/site-packages/ansible/plugins/callback/__init__.pyc PLAYBOOK: nxos_config_test.yml ************************************************************* 1 plays in nxos_config_test.yml PLAY [evpn_leaf] *************************************************************************** TASK [setup] ******************************************************************************* Using module file /usr/lib/python2.7/site-packages/ansible/modules/core/system/setup.py <10.255.138.13> ESTABLISH LOCAL CONNECTION FOR USER: admin <10.255.138.13> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478822435.86-88529598187044 `" && echo ansible-tmp-1478822435.86-88529598187044="` echo $HOME/.ansible/tmp/ansible-tmp-1478822435.86-88529598187044 `" ) && sleep 0' <10.255.138.13> PUT /tmp/tmp3r9fGa TO /home/admin/.ansible/tmp/ansible-tmp-1478822435.86-88529598187044/setup.py <10.255.138.13> EXEC /bin/sh -c 'chmod u+x /home/admin/.ansible/tmp/ansible-tmp-1478822435.86-88529598187044/ /home/admin/.ansible/tmp/ansible-tmp-1478822435.86-88529598187044/setup.py && sleep 0' <10.255.138.13> EXEC /bin/sh -c '/usr/bin/python /home/admin/.ansible/tmp/ansible-tmp-1478822435.86-88529598187044/setup.py; rm -rf "/home/admin/.ansible/tmp/ansible-tmp-1478822435.86-88529598187044/" > /dev/null 2>&1 && sleep 0' ok: [10.255.138.13] TASK [Send configuration commands from file to switch] ************************************* task path: /home/admin/Ansible/testing/nxos_config_test.yml:12 Using module file /usr/lib/python2.7/site-packages/ansible/modules/core/network/nxos/nxos_config.py <10.255.138.13> ESTABLISH LOCAL CONNECTION FOR USER: admin <10.255.138.13> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478822436.45-187297994686173 `" && echo ansible-tmp-1478822436.45-187297994686173="` echo $HOME/.ansible/tmp/ansible-tmp-1478822436.45-187297994686173 `" ) && sleep 0' <10.255.138.13> PUT /tmp/tmpsGsDIk TO /home/admin/.ansible/tmp/ansible-tmp-1478822436.45-187297994686173/nxos_config.py <10.255.138.13> EXEC /bin/sh -c 'chmod u+x /home/admin/.ansible/tmp/ansible-tmp-1478822436.45-187297994686173/ /home/admin/.ansible/tmp/ansible-tmp-1478822436.45-187297994686173/nxos_config.py && sleep 0' <10.255.138.13> EXEC /bin/sh -c '/usr/bin/python /home/admin/.ansible/tmp/ansible-tmp-1478822436.45-187297994686173/nxos_config.py; rm -rf "/home/admin/.ansible/tmp/ansible-tmp-1478822436.45-187297994686173/" > /dev/null 2>&1 && sleep 0' fatal: [10.255.138.13]: FAILED! 
=> { "changed": false, "clierror": "% Invalid command\n", "code": "400", "failed": true, "invocation": { "module_args": { "after": null, "auth_pass": null, "authorize": false, "backup": false, "before": null, "config": null, "defaults": false, "force": false, "host": "10.255.138.13", "lines": null, "match": "line", "parents": null, "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "port": null, "provider": { "host": "10.255.138.13", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "transport": "nxapi", "username": "admin" }, "replace": "line", "save": false, "src": "hostname EVPN-SPINE1\n!\nfeature ospf\nfeature pim\nfeature lldp\nfeature bgp\nfeature nv overlay\nnv overlay evpn\n!\ninterface loopback0\n ip address 10.100.100.1/32\n ip router ospf 1 area 0.0.0.0\n ip pim sparse-mode\n!\nrouter ospf 1\n router-id 10.100.100.1\n area 0.0.0.0 authentication message-digest\n log-adjacency-changes\n auto-cost reference-bandwidth 1000 Gbps\n!\nip pim rp-address 10.100.100.254 group-list 224.0.0.0/4\nip pim ssm range 232.0.0.0/8\n!\nrouter bgp 65000\n router-id 10.100.100.1\n address-family ipv4 unicast\n address-family l2vpn evpn\n retain route-target all\n template peer vtep-peer\n remote-as 65000\n update-source loopback0\n address-family ipv4 unicast\n send-community both\n route-reflector-client\n address-family l2vpn evpn\n send-community both\n route-reflector-client\n\n", "ssh_keyfile": null, "timeout": 10, "transport": "nxapi", "use_ssl": false, "username": "admin", "validate_certs": true } }, "msg": "CLI execution error", "output": { "clierror": "% Invalid command\n", "code": "400", "msg": "CLI execution error" }, "url": "http://10.255.138.13:80/ins" } to retry, use: --limit @/home/admin/Ansible/testing/nxos_config_test.retry PLAY RECAP ********************************************************************************* 10.255.138.13 : ok=1 changed=0 unreachable=0 failed=1 ```
True
nxos_command fails with CLI Error when using the src option - <!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> nxos_config ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.3.0 config file = /etc/ansible/ansible.cfg configured module search path = ['/usr/share/my_modules/'] ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> Using defaults ##### OS / ENVIRONMENT <!--- --> Red Hat Enterprise Linux Server release 7.3 (Maipo) ##### SUMMARY <!--- Explain the problem briefly --> Sending configuration to Nexus 9K fails when using the 'src' option and NXAPI as transport. The same configuration works fine when using CLI as the transport. The command 'feature nxapi' has already been turned on manually on the target device. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> <!--- Paste example playbooks or commands between quotes below --> ``` [admin@localhost testing]$ cat nxos_config_test.yml --- - hosts: evpn_leaf vars: nxapi: host: "{{ inventory_hostname }}" username: admin password: cisco transport: nxapi tasks: - name: Send configuration commands from file to switch nxos_config: provider: "{{ nxapi }}" src: config2.txt register: result Contents of config2.txt: hostname EVPN-SPINE1 ! feature ospf feature pim feature lldp feature bgp feature nv overlay nv overlay evpn ! interface loopback0 ip address 10.100.100.1/32 ip router ospf 1 area 0.0.0.0 ip pim sparse-mode ! router ospf 1 router-id 10.100.100.1 area 0.0.0.0 authentication message-digest log-adjacency-changes auto-cost reference-bandwidth 1000 Gbps ! ip pim rp-address 10.100.100.254 group-list 224.0.0.0/4 ip pim ssm range 232.0.0.0/8 ! router bgp 65000 router-id 10.100.100.1 address-family ipv4 unicast address-family l2vpn evpn retain route-target all template peer vtep-peer remote-as 65000 update-source loopback0 address-family ipv4 unicast send-community both route-reflector-client address-family l2vpn evpn send-community both route-reflector-client ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> Configuration should have been applied to the target device. ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> No changes are made to the target device. A "CLI execution error" is reported. 
<!--- Paste verbatim command output between quotes below --> ``` [admin@localhost testing]$ ansible-playbook nxos_config_test.yml -vvvv Using /home/admin/Ansible/testing/ansible.cfg as config file Loading callback plugin default of type stdout, v2.0 from /usr/lib/python2.7/site-packages/ansible/plugins/callback/__init__.pyc PLAYBOOK: nxos_config_test.yml ************************************************************* 1 plays in nxos_config_test.yml PLAY [evpn_leaf] *************************************************************************** TASK [setup] ******************************************************************************* Using module file /usr/lib/python2.7/site-packages/ansible/modules/core/system/setup.py <10.255.138.13> ESTABLISH LOCAL CONNECTION FOR USER: admin <10.255.138.13> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478822435.86-88529598187044 `" && echo ansible-tmp-1478822435.86-88529598187044="` echo $HOME/.ansible/tmp/ansible-tmp-1478822435.86-88529598187044 `" ) && sleep 0' <10.255.138.13> PUT /tmp/tmp3r9fGa TO /home/admin/.ansible/tmp/ansible-tmp-1478822435.86-88529598187044/setup.py <10.255.138.13> EXEC /bin/sh -c 'chmod u+x /home/admin/.ansible/tmp/ansible-tmp-1478822435.86-88529598187044/ /home/admin/.ansible/tmp/ansible-tmp-1478822435.86-88529598187044/setup.py && sleep 0' <10.255.138.13> EXEC /bin/sh -c '/usr/bin/python /home/admin/.ansible/tmp/ansible-tmp-1478822435.86-88529598187044/setup.py; rm -rf "/home/admin/.ansible/tmp/ansible-tmp-1478822435.86-88529598187044/" > /dev/null 2>&1 && sleep 0' ok: [10.255.138.13] TASK [Send configuration commands from file to switch] ************************************* task path: /home/admin/Ansible/testing/nxos_config_test.yml:12 Using module file /usr/lib/python2.7/site-packages/ansible/modules/core/network/nxos/nxos_config.py <10.255.138.13> ESTABLISH LOCAL CONNECTION FOR USER: admin <10.255.138.13> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1478822436.45-187297994686173 `" && echo ansible-tmp-1478822436.45-187297994686173="` echo $HOME/.ansible/tmp/ansible-tmp-1478822436.45-187297994686173 `" ) && sleep 0' <10.255.138.13> PUT /tmp/tmpsGsDIk TO /home/admin/.ansible/tmp/ansible-tmp-1478822436.45-187297994686173/nxos_config.py <10.255.138.13> EXEC /bin/sh -c 'chmod u+x /home/admin/.ansible/tmp/ansible-tmp-1478822436.45-187297994686173/ /home/admin/.ansible/tmp/ansible-tmp-1478822436.45-187297994686173/nxos_config.py && sleep 0' <10.255.138.13> EXEC /bin/sh -c '/usr/bin/python /home/admin/.ansible/tmp/ansible-tmp-1478822436.45-187297994686173/nxos_config.py; rm -rf "/home/admin/.ansible/tmp/ansible-tmp-1478822436.45-187297994686173/" > /dev/null 2>&1 && sleep 0' fatal: [10.255.138.13]: FAILED! 
=> { "changed": false, "clierror": "% Invalid command\n", "code": "400", "failed": true, "invocation": { "module_args": { "after": null, "auth_pass": null, "authorize": false, "backup": false, "before": null, "config": null, "defaults": false, "force": false, "host": "10.255.138.13", "lines": null, "match": "line", "parents": null, "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "port": null, "provider": { "host": "10.255.138.13", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "transport": "nxapi", "username": "admin" }, "replace": "line", "save": false, "src": "hostname EVPN-SPINE1\n!\nfeature ospf\nfeature pim\nfeature lldp\nfeature bgp\nfeature nv overlay\nnv overlay evpn\n!\ninterface loopback0\n ip address 10.100.100.1/32\n ip router ospf 1 area 0.0.0.0\n ip pim sparse-mode\n!\nrouter ospf 1\n router-id 10.100.100.1\n area 0.0.0.0 authentication message-digest\n log-adjacency-changes\n auto-cost reference-bandwidth 1000 Gbps\n!\nip pim rp-address 10.100.100.254 group-list 224.0.0.0/4\nip pim ssm range 232.0.0.0/8\n!\nrouter bgp 65000\n router-id 10.100.100.1\n address-family ipv4 unicast\n address-family l2vpn evpn\n retain route-target all\n template peer vtep-peer\n remote-as 65000\n update-source loopback0\n address-family ipv4 unicast\n send-community both\n route-reflector-client\n address-family l2vpn evpn\n send-community both\n route-reflector-client\n\n", "ssh_keyfile": null, "timeout": 10, "transport": "nxapi", "use_ssl": false, "username": "admin", "validate_certs": true } }, "msg": "CLI execution error", "output": { "clierror": "% Invalid command\n", "code": "400", "msg": "CLI execution error" }, "url": "http://10.255.138.13:80/ins" } to retry, use: --limit @/home/admin/Ansible/testing/nxos_config_test.retry PLAY RECAP ********************************************************************************* 10.255.138.13 : ok=1 changed=0 unreachable=0 failed=1 ```
main
nxos command fails with cli error when using the src option issue type bug report component name nxos config ansible version ansible config file etc ansible ansible cfg configured module search path configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables using defaults os environment red hat enterprise linux server release maipo summary sending configuration to nexus fails when using the src option and nxapi as transport the same configuration works fine when using cli as the transport the command feature nxapi has already been turned on manually on the target device steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used cat nxos config test yml hosts evpn leaf vars nxapi host inventory hostname username admin password cisco transport nxapi tasks name send configuration commands from file to switch nxos config provider nxapi src txt register result contents of txt hostname evpn feature ospf feature pim feature lldp feature bgp feature nv overlay nv overlay evpn interface ip address ip router ospf area ip pim sparse mode router ospf router id area authentication message digest log adjacency changes auto cost reference bandwidth gbps ip pim rp address group list ip pim ssm range router bgp router id address family unicast address family evpn retain route target all template peer vtep peer remote as update source address family unicast send community both route reflector client address family evpn send community both route reflector client expected results configuration should have been applied to the target device actual results no changes are made to the target device a cli execution error is reported ansible playbook nxos config test yml vvvv using home admin ansible testing ansible cfg as config file loading callback plugin default of type stdout from usr lib site packages ansible plugins callback init pyc playbook nxos config test yml plays in nxos config test yml play task using module file usr lib site packages ansible modules core system setup py establish local connection for user admin exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home admin ansible tmp ansible tmp setup py exec bin sh c chmod u x home admin ansible tmp ansible tmp home admin ansible tmp ansible tmp setup py sleep exec bin sh c usr bin python home admin ansible tmp ansible tmp setup py rm rf home admin ansible tmp ansible tmp dev null sleep ok task task path home admin ansible testing nxos config test yml using module file usr lib site packages ansible modules core network nxos nxos config py establish local connection for user admin exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpsgsdik to home admin ansible tmp ansible tmp nxos config py exec bin sh c chmod u x home admin ansible tmp ansible tmp home admin ansible tmp ansible tmp nxos config py sleep exec bin sh c usr bin python home admin ansible tmp ansible tmp nxos config py rm rf home admin ansible tmp ansible tmp dev null sleep fatal failed changed false clierror invalid command n code failed true invocation module args after null auth pass null authorize false backup false before null config null defaults false force false host lines null match line parents null password value specified in no log parameter port null provider host password value specified in no 
log parameter transport nxapi username admin replace line save false src hostname evpn n nfeature ospf nfeature pim nfeature lldp nfeature bgp nfeature nv overlay nnv overlay evpn n ninterface n ip address n ip router ospf area n ip pim sparse mode n nrouter ospf n router id n area authentication message digest n log adjacency changes n auto cost reference bandwidth gbps n nip pim rp address group list nip pim ssm range n nrouter bgp n router id n address family unicast n address family evpn n retain route target all n template peer vtep peer n remote as n update source n address family unicast n send community both n route reflector client n address family evpn n send community both n route reflector client n n ssh keyfile null timeout transport nxapi use ssl false username admin validate certs true msg cli execution error output clierror invalid command n code msg cli execution error url to retry use limit home admin ansible testing nxos config test retry play recap ok changed unreachable failed
1
1,467
6,367,850,037
IssuesEvent
2017-08-01 07:40:37
daisy/pipeline-tasks
https://api.github.com/repos/daisy/pipeline-tasks
closed
Improve build speed
4 - Done maintainability XS
- [x] [ready] XProcSpec optimization ![XS][] [XS]: http://daisy.github.io/pipeline-tasks/XS.svg "Extra small (1 hour - 2 hours)" [S]: http://daisy.github.io/pipeline-tasks/S.svg "Small (2 hours - 1 day)" [M]: http://daisy.github.io/pipeline-tasks/M.svg "Medium (1 day - 2 days)" [L]: http://daisy.github.io/pipeline-tasks/L.svg "Large (2 days - 1 week)" [XL]: http://daisy.github.io/pipeline-tasks/XL.svg "Extra large (1 week - 2 weeks)" [XXL]: http://daisy.github.io/pipeline-tasks/XXL.svg "Extra extra large (2 weeks - 1 month)"
True
Improve build speed - - [x] [ready] XProcSpec optimization ![XS][] [XS]: http://daisy.github.io/pipeline-tasks/XS.svg "Extra small (1 hour - 2 hours)" [S]: http://daisy.github.io/pipeline-tasks/S.svg "Small (2 hours - 1 day)" [M]: http://daisy.github.io/pipeline-tasks/M.svg "Medium (1 day - 2 days)" [L]: http://daisy.github.io/pipeline-tasks/L.svg "Large (2 days - 1 week)" [XL]: http://daisy.github.io/pipeline-tasks/XL.svg "Extra large (1 week - 2 weeks)" [XXL]: http://daisy.github.io/pipeline-tasks/XXL.svg "Extra extra large (2 weeks - 1 month)"
main
improve build speed xprocspec optimization extra small hour hours small hours day medium day days large days week extra large week weeks extra extra large weeks month
1
25,759
4,440,865,261
IssuesEvent
2016-08-19 06:40:18
pcolby/bipolar
https://api.github.com/repos/pcolby/bipolar
closed
windows Bipolar-0.5.2.297.exe fails to install / start - MSVCP140.dll missing
defect
Hello, just tried to install version Bipolar-0.5.2.297.exe on Win7 64Bit, Polar Flow Sync 2.6.2. During the install process a message appeared that the hook could not be installed. When trying to do that step manually ("bipolar.exe -install-hook"), a message box stated that "MSVCP140.dll" is missing. MSVCP120.dll is present in the Bipolar path, but no MSVCP140.dll. Regards Andreas
1.0
windows Bipolar-0.5.2.297.exe fails to install / start - MSVCP140.dll missing - Hello, just tried to install version Bipolar-0.5.2.297.exe on Win7 64Bit, Polar Flow Sync 2.6.2. During the install process a message appeared that the hook could not be installed. When trying to do that step manually ("bipolar.exe -install-hook"), a message box stated that "MSVCP140.dll" is missing. MSVCP120.dll is present in the Bipolar path, but no MSVCP140.dll. Regards Andreas
non_main
windows bipolar exe fails to install start dll missing hello just tried to install version bipolar exe on polar flow sync during the install provess a message appeared that the hook could not be installed by trying to do that step manually bipolar exe install hook am message box stated that dll is missing dll is present in the bipolar path but no dll regards andreas
0
194,946
15,444,663,910
IssuesEvent
2021-03-08 10:42:14
BlackbirdHQ/ublox-short-range-rs
https://api.github.com/repos/BlackbirdHQ/ublox-short-range-rs
closed
Release to crates.io
deployment ๐Ÿคž documentation โœ๏ธ enhancement ๐Ÿ‘
There are a few chores to be fixed before we can look into releasing a 0.1.0 version of this to crates.io - [x] Readme - [x] Github Actions - [x] Repository description - [x] Repository topics - [x] Rid git dependencies - [x] Bump embedded-nal to release version - [x] Merge feature/extended-data into master
1.0
Release to crates.io - There are a few chores to be fixed before we can look into releasing a 0.1.0 version of this to crates.io - [x] Readme - [x] Github Actions - [x] Repository description - [x] Repository topics - [x] Rid git dependencies - [x] Bump embedded-nal to release version - [x] Merge feature/extended-data into master
non_main
release to crates io there are a few chores to be fixed before we can look into releasing a version of this to crates io readme github actions repository description repository topics rid git dependencies bump embedded nal to release version merge feature extended data into master
0
5,145
26,227,929,572
IssuesEvent
2023-01-04 20:36:28
bazelbuild/intellij
https://api.github.com/repos/bazelbuild/intellij
closed
Unable to load targets in CLion 2022.3
type: bug P1 product: CLion type: user support topic: sync awaiting-maintainer
#### Description of the issue. Please be specific. Unable to load any target correctly. Upon opening a synced target, I'm met with "This file does not belong to any project target; code insight features might not work properly", with a number of issues in C/C++ code resolution, highlighting, and others. All works fine when compiling the project. #### What's the simplest set of steps to reproduce this issue? Please provide an example project, if possible. 1. Install fresh CLion, version 2022.3 2. Install Bazel plugin from Plugins menu 3. Open fresh project- I used the [example repo](https://github.com/bazelbuild/bazel/tree/master/examples/cpp-tutorial/stage1) as a sanity check. 4. Sync 5. Load any file- in the example I listed, it was hello-world.cc It seems the master branch does not support 2022.3 yet, or I would try that as well. #### Version information CLion: 2022.3 Platform: Linux 6.0.0-5-amd64 Bazel plugin: 2022.11.07.0.1-api-version-223 Bazel: 5.3.2
True
Unable to load targets in CLion 2022.3 - #### Description of the issue. Please be specific. Unable to load any target correctly. Upon opening a synced target, I'm met with "This file does not belong to any project target; code insight features might not work properly", with a number of issues in C/C++ code resolution, highlighting, and others. All works fine when compiling the project. #### What's the simplest set of steps to reproduce this issue? Please provide an example project, if possible. 1. Install fresh CLion, version 2022.3 2. Install Bazel plugin from Plugins menu 3. Open fresh project- I used the [example repo](https://github.com/bazelbuild/bazel/tree/master/examples/cpp-tutorial/stage1) as a sanity check. 4. Sync 5. Load any file- in the example I listed, it was hello-world.cc It seems the master branch does not support 2022.3 yet, or I would try that as well. #### Version information CLion: 2022.3 Platform: Linux 6.0.0-5-amd64 Bazel plugin: 2022.11.07.0.1-api-version-223 Bazel: 5.3.2
main
unable to load targets in clion description of the issue please be specific unable to load any target correctly upon opening a synced target i m met with this file does not belong to any project target code insight features might not work properly with a number of issues in c c code resolution highlighting and others all works fine when compiling the project what s the simplest set of steps to reproduce this issue please provide an example project if possible install fresh clion version install bazel plugin from plugins menu open fresh project i used the as a sanity check sync load any file in the example i listed it was hello world cc it seems the master branch does not support yet or i would try that as well version information clion platform linux bazel plugin api version bazel
1
3,434
13,210,292,924
IssuesEvent
2020-08-15 16:09:34
ansible/ansible
https://api.github.com/repos/ansible/ansible
closed
Terraform Module Add "required_providers" parameter
affects_2.10 bot_closed cloud collection collection:community.general feature module needs_collection_redirect needs_maintainer needs_triage support:community
<!--- Verify first that your feature was not already discussed on GitHub --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Describe the new feature/improvement briefly below --> Terraform docs suggest that users define a provider version using the required_providers block; see > https://www.terraform.io/docs/configuration/providers.html#provider-versions. But this is also a part of TF that does not allow variable interpolation (just like the backend_config parameter you already include) ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> lib/ansible/modules/cloud/misc/terraform.py ##### ADDITIONAL INFORMATION <!--- Describe how the feature would be used, why it is needed and what it would solve --> I *believe* that this would be a very small code change and essentially just reproduce what is being done in backend_config, as that section goes inside of the same setup section of TF (example code): `terraform {` ` required_version = ">= 0.12.0"` ` required_providers {}` ` backend "azurerm" {// this block gets filled via ansible}` `}` it would just need to be a dictionary, so in the Ansible task it would look like: ``` - name: Run Terraform Apply terraform: project_path: 'terraform/' state: present force_init: true lock: true backend_config: access_key: "{{ tf_access_key }}" storage_account_name: "{{ tf_state_storage_account_name }}" container_name: "{{ container_name }}" key: "{{ file_name }}.tfstate" required_providers: azurerm: "{{ provider_version }}" aws: "{{ provider_version }}" register: terraform
True
Terraform Module Add "required_providers" parameter - <!--- Verify first that your feature was not already discussed on GitHub --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Describe the new feature/improvement briefly below --> Terraform docs suggest users to define a provider version using the required_providers block see > https://www.terraform.io/docs/configuration/providers.html#provider-versions. But this is also a part of TF that they do not allow variable interpolation (just like the backend_config parameter you already include) ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> lib/ansible/modules/cloud/misc/terraform.py ##### ADDITIONAL INFORMATION <!--- Describe how the feature would be used, why it is needed and what it would solve --> I *believe* that this would be very small code change and essentially just reproduce what is being done in backend_config as that section goes inside of the same setup section of TF (example code): `terraform {` ` required_version = ">= 0.12.0"` ` required_providers {}` ` backend "azurerm" {// this block gets filled via ansible}` `}` it would just need to be a dictionary so inthe Ansible task it would look like: ``` - name: Run Terraform Apply terraform: project_path: 'terraform/' state: present force_init: true lock: true backend_config: access_key: "{{ tf_access_key }}" storage_account_name: "{{ tf_state_storage_account_name }}" container_name: "{{ container_name }}" key: "{{ file_name }}.tfstate" required_providers: azurerm: "{{ provider_version }}" aws: "{{ provider_version }}" register: terraform
main
terraform module add required providers parameter summary terraform docs suggest users to define a provider version using the required providers block see but this is also a part of tf that they do not allow variable interpolation just like the backend config parameter you already include issue type feature idea component name lib ansible modules cloud misc terraform py additional information i believe that this would be very small code change and essentially just reproduce what is being done in backend config as that section goes inside of the same setup section of tf example code terraform required version required providers backend azurerm this block gets filled via ansible it would just need to be a dictionary so inthe ansible task it would look like name run terraform apply terraform project path terraform state present force init true lock true backend config access key tf access key storage account name tf state storage account name container name container name key file name tfstate required providers azurerm provider version aws provider version register terraform
1
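To make the required_providers request above concrete, here is a small illustrative sketch (not the module's actual implementation) of how such a dictionary could be rendered into the terraform override block, in the same spirit as the existing backend_config handling; the provider names and version constraints shown are made up, standing in for the issue's "{{ provider_version }}" values.

```python
# Illustrative only: render an Ansible-style dict into a terraform {} override
# block containing required_providers, alongside a 0.12-style version pin.
def render_required_providers(providers, required_version=">= 0.12.0"):
    lines = ["terraform {",
             '  required_version = "{}"'.format(required_version),
             "  required_providers {"]
    for name, constraint in providers.items():
        lines.append('    {} = "{}"'.format(name, constraint))
    lines += ["  }", "}"]
    return "\n".join(lines)

# Hypothetical constraints for demonstration.
print(render_required_providers({"azurerm": "~> 1.44", "aws": "~> 2.70"}))
```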
5,741
30,348,878,398
IssuesEvent
2023-07-11 17:21:55
cncf/tag-contributor-strategy
https://api.github.com/repos/cncf/tag-contributor-strategy
closed
Involve the link to the Open Source Community
wg/maintainers-circle
Although the documentation is well written and very helpful for me to get involved in the field of open source information, I do feel it would have been very good if it had given me some links, or some project's community link, to join.
True
Involve the link to the Open Source Community - Although the documentation is well written and very helpful for me to get involved in the field of open source information, I do feel it would have been very good if it had given me some links, or some project's community link, to join.
main
involve the link to the open source community although the documentation is well written and very helpful for me to get involved in the open source field of information but i do feel it would have been very good if it had given me some links or some project s community link to join
1
1,525
6,572,215,841
IssuesEvent
2017-09-11 00:09:31
ansible/ansible-modules-extras
https://api.github.com/repos/ansible/ansible-modules-extras
closed
Wrong parameter in example "deployment" should "be deployment_mode"
affects_2.0 docs_report easyfix waiting_on_maintainer
##### Issue Type: - Documentation Report ##### Plugin Name: bundler plugin ##### Ansible Version: ``` $ ansible --version ansible 2.0.1.0 config file = /data/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### Ansible Configuration: n.a ##### Environment: n.a ##### Summary: Wrong parameter in example: "deployment" should be "deployment_mode" ##### Steps To Reproduce: ``` ctrl+f "deployment" on http://docs.ansible.com/ansible/bundler_module.html ``` ##### Expected Results: deployment mode set successfully ##### Actual Results: When using deployment in script: ``` fatal: [172.28.128.15]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"chdir": "/opt/openproject", "deployment": "yes", "state": "present"}, "module_name": "bundler"}, "msg": "unsupported parameter for module: deployment"} ```
True
Wrong parameter in example "deployment" should "be deployment_mode" - ##### Issue Type: - Documentation Report ##### Plugin Name: bundler plugin ##### Ansible Version: ``` $ ansible --version ansible 2.0.1.0 config file = /data/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### Ansible Configuration: n.a ##### Environment: n.a ##### Summary: Wrong parameter in example "deployment" should "be deployment_mode" ##### Steps To Reproduce: ``` ctrl+f "deployment" on http://docs.ansible.com/ansible/bundler_module.html ``` ##### Expected Results: deployment mode set successfully ##### Actual Results: When using deployment in script: ``` fatal: [172.28.128.15]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"chdir": "/opt/openproject", "deployment": "yes", "state": "present"}, "module_name": "bundler"}, "msg": "unsupported parameter for module: deployment"} ```
main
wrong parameter in example deployment should be deployment mode issue type documentation report plugin name bundler plugin ansible version ansible version ansible config file data ansible ansible cfg configured module search path default w o overrides ansible configuration n a environment n a summary wrong parameter in example deployment should be deployment mode steps to reproduce ctrl f deployment on expected results deployment mode set successfully actual results when using deployment in script fatal failed changed false failed true invocation module args chdir opt openproject deployment yes state present module name bundler msg unsupported parameter for module deployment
1
643,977
20,961,708,107
IssuesEvent
2022-03-27 22:08:28
NerdyNomads/Text-Savvy
https://api.github.com/repos/NerdyNomads/Text-Savvy
closed
Create new endpoints to get user's workspaces and texts
high priority back-end
Create new endpoints: - In `persistence/accounts.js`: - get the list of workspaces for the specified user ID - In `persistence/workspaces.js`: - get the list of texts for the specified workspace ID
1.0
Create new endpoints to get user's workspaces and texts - Create new endpoints: - In `persistence/accounts.js`: - get the list of workspaces for the specified user ID - In `persistence/workspaces.js`: - get the list of texts for the specified workspace ID
non_main
create new endpoints to get user s workspaces and texts create new endpoints in persistence accounts js get the list of workspaces for the specified user id in persistence workspaces js get the list of texts for the specified workspace id
0
542,182
15,856,598,815
IssuesEvent
2021-04-08 02:43:50
mobigen/IRIS-BigData-Platform
https://api.github.com/repos/mobigen/IRIS-BigData-Platform
closed
Discovery / Please make it possible to integrate Druid as a Data Source.
#Discovery Priority: P1 Status: Waiting
Discovery / Please make it possible to integrate Druid as a Data Source. by 나상희
1.0
Discovery / Please make it possible to integrate Druid as a Data Source. - Discovery / Please make it possible to integrate Druid as a Data Source. by 나상희
non_main
discovery please make it possible to integrate druid as a data source discovery please make it possible to integrate druid as a data source by 나상희
0
632
4,148,463,750
IssuesEvent
2016-06-15 11:06:18
Particular/NServiceBus.CastleWindsor
https://api.github.com/repos/Particular/NServiceBus.CastleWindsor
closed
Using an existing container could fail with TypedFactoryFacility
Impact: S Project: V6 Launch Size: S State: In Progress - Maintainer Prio Tag: Maintainer Prio Type: Bug
## Repro: 1. Add NServiceBus.Newtonsoft.Json and configure the endpoint to use it 2. Create a new instance of `WindsorContainer` and configure it to use `TypedFactoryFacility` 3. Send a message ``` var container = new WindsorContainer(); container.AddFacility<TypedFactoryFacility>(); BusConfiguration busConfiguration = new BusConfiguration(); busConfiguration.UseSerialization<NewtonsoftSerializer>(); busConfiguration.UseContainer<WindsorBuilder>(b => b.ExistingContainer(container)); busConfiguration.UsePersistence<InMemoryPersistence>(); busConfiguration.EnableInstallers(); using (IBus bus = Bus.Create(busConfiguration).Start()) { bus.SendLocal(new CreateOrder {}); } ``` ## Behavior: Application throws an exception that `Newtonsoft.Json.JsonWriter` is not registered in the container. ## Cause The cause is the `TypedFactoryFacility` which uses implicit registration to replace all `Func<T>`'s with a `CastleProxy` that wraps the `Func<T>`, causing the code to try and resolve the types via the container instead of using the code provided in the `Func<T>`. The implicit registration looks for any publicly accessible `Func<T>`'s and wraps them. ## Potential fixes: We could ask our users to not use the `TypedFactoryFacility` Another approach is to change from using a `Func<Stream, JsonReader>`/`Func<String, JsonWriter>` to using an interface that is equivalent to the `Func<>`s. @Particular/container-maintainers as per the triage process I need another set of eyes on this to confirm the scheduling
True
Using an existing container could fail with TypedFactoryFacility - ## Repro: 1. Add NServiceBus.Newtonsoft.Json and configure the endpoint to use it 2. Create a new instance of `WindsorContainer` and configure it to use `TypedFactoryFacility` 3. Send a message ``` var container = new WindsorContainer(); container.AddFacility<TypedFactoryFacility>(); BusConfiguration busConfiguration = new BusConfiguration(); busConfiguration.UseSerialization<NewtonsoftSerializer>(); busConfiguration.UseContainer<WindsorBuilder>(b => b.ExistingContainer(container)); busConfiguration.UsePersistence<InMemoryPersistence>(); busConfiguration.EnableInstallers(); using (IBus bus = Bus.Create(busConfiguration).Start()) { bus.SendLocal(new CreateOrder {}); } ``` ## Behavior: Application throws an exception that `Newtonsoft.Json.JsonWriter` is not registered in the container. ## Cause The cause is the `TypedFactoryFacility` which uses implicit registration to replace all `Func<T>`'s with a `CastleProxy` that wraps the `Func<T>`, causing the code to try and resolve the types via the container instead of using the code provided in the `Func<T>`. The implicit registration looks for any publicly accessible `Func<T>`'s and wraps them. ## Potential fixes: We could ask our users to not use the `TypedFactoryFacility` Another approach is to change from using a `Func<Stream, JsonReader>`/`Func<String, JsonWriter>` to using an interface that is equivalent to the `Func<>`s. @Particular/container-maintainers as per the triage process I need another set of eyes on this to confirm the scheduling
main
using an existing container could fail with typedfactoryfacility repro add nservicebus newtonsoft json and configure the endpoint to use it create a new instance of windsorcontainer and configure it to use typedfactoryfacility send a message var container new windsorcontainer container addfacility busconfiguration busconfiguration new busconfiguration busconfiguration useserialization busconfiguration usecontainer b b existingcontainer container busconfiguration usepersistence busconfiguration enableinstallers using ibus bus bus create busconfiguration start bus sendlocal new createorder behavior application throws an exception that newtonsoft json jsonwriter is not registered in the container cause the cause is the typedfactoryfacility which uses implicit registration to replace all func s with a castleproxy that wraps the func causing the code to try and resolve the types via the container instead of using the code provided in the func the implicit registration looks for any publicly accessible func s and wraps them potential fixes we could ask our users to not use the typedfactoryfacility another approach is to change from using a func func to using an interface that is equivalent to the func s particular container maintainers as per the triage process i need another set of eyes on this to confirm the scheduling
1
1,507
6,520,099,669
IssuesEvent
2017-08-28 15:14:23
duckduckgo/zeroclickinfo-goodies
https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies
closed
JS Minifier - add option to beautify JS
Improvement Maintainer Input Requested Programming Mission Skill: JavaScript Status: Work In Progress Topic: JavaScript
## Background The library used for JS Minifier IA, [prettydiff](https://github.com/prettydiff/prettydiff) can perform JS beautifying too; it would be a nice feature to heave. ## Forum See the topic [Improve JavaScript Minifier Goodie](https://forum.duckduckhack.com/t/improve-javascript-minifier-goodie/175) This issue is part of the [Programming Mission](https://forum.duckduckhack.com/t/duckduckhack-programming-mission-overview/53): help us improve the results for [JavaScript related searches](https://forum.duckduckhack.com/t/javascript-search-overview/94)! --- IA Page: https://duck.co/ia/view/js_minify Maintainer: @sahildua2305
True
JS Minifier - add option to beautify JS - ## Background The library used for JS Minifier IA, [prettydiff](https://github.com/prettydiff/prettydiff) can perform JS beautifying too; it would be a nice feature to heave. ## Forum See the topic [Improve JavaScript Minifier Goodie](https://forum.duckduckhack.com/t/improve-javascript-minifier-goodie/175) This issue is part of the [Programming Mission](https://forum.duckduckhack.com/t/duckduckhack-programming-mission-overview/53): help us improve the results for [JavaScript related searches](https://forum.duckduckhack.com/t/javascript-search-overview/94)! --- IA Page: https://duck.co/ia/view/js_minify Maintainer: @sahildua2305
main
js minifier add option to beautify js background the library used for js minifier ia can perform js beautifying too it would be a nice feature to heave forum see the topic this issue is part of the help us improve the results for ia page maintainer
1
1,783
6,575,840,517
IssuesEvent
2017-09-11 17:31:56
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
Error with ec2_group module - Invalid rule parameter '-'
affects_2.1 aws bug_report cloud waiting_on_maintainer
<!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME ec2_group ##### ANSIBLE VERSION <!--- Paste verbatim output from โ€œansible --versionโ€ between quotes below --> ``` ansible 2.1.2.0 config file = /Users/user/git/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ``` [defaults] retry_files_enabled = False gathering = smart nocows = 1 roles_path = /etc/ansible/roles [privilege_escalation] [paramiko_connection] [ssh_connection] [accelerate] [selinux] [colors] ``` ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say โ€œN/Aโ€ for anything that is not platform-specific. --> `macOS Sierraโ€Ž version 10.12` ##### SUMMARY Trying to generate rules using Jinja2 throws an error about invalid parameter. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> <!--- Paste example playbooks or commands between quotes below --> createSG.yml ``` --- - hosts: localhost connection: local gather_facts: no vars: aws_profile_name: local_aws_profile aws_vpc_id: vpc-abcd1234 ip_whitelist: - 8.8.8.8/32 - 8.8.4.4/32 tasks: - name: Create security group with IP blocks ec2_group: profile: "{{ aws_profile_name }}" region: us-east-1 description: "Whitelist" name: sg-whitelist purge_rules: true rules: | {% for host in ip_whitelist %} - proto: tcp from_port: 443 to_port: 443 cidr_ip: {{ host }} {% endfor %} vpc_id: "{{ aws_vpc_id }}" state: present ``` ``` ansible-playbook playbooks/createSG.yml ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> A security group created with two ingress rules. Any rules not part of the play to be purged. ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes below --> ``` PLAY [localhost] *************************************************************** TASK [Create security group with IP blocks] ******************* fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Invalid rule parameter '-'"} NO MORE HOSTS LEFT ************************************************************* PLAY RECAP ********************************************************************* localhost : ok=0 changed=0 unreachable=0 failed=1 ```
True
Error with ec2_group module - Invalid rule parameter '-' - <!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME ec2_group ##### ANSIBLE VERSION <!--- Paste verbatim output from โ€œansible --versionโ€ between quotes below --> ``` ansible 2.1.2.0 config file = /Users/user/git/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ``` [defaults] retry_files_enabled = False gathering = smart nocows = 1 roles_path = /etc/ansible/roles [privilege_escalation] [paramiko_connection] [ssh_connection] [accelerate] [selinux] [colors] ``` ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say โ€œN/Aโ€ for anything that is not platform-specific. --> `macOS Sierraโ€Ž version 10.12` ##### SUMMARY Trying to generate rules using Jinja2 throws an error about invalid parameter. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> <!--- Paste example playbooks or commands between quotes below --> createSG.yml ``` --- - hosts: localhost connection: local gather_facts: no vars: aws_profile_name: local_aws_profile aws_vpc_id: vpc-abcd1234 ip_whitelist: - 8.8.8.8/32 - 8.8.4.4/32 tasks: - name: Create security group with IP blocks ec2_group: profile: "{{ aws_profile_name }}" region: us-east-1 description: "Whitelist" name: sg-whitelist purge_rules: true rules: | {% for host in ip_whitelist %} - proto: tcp from_port: 443 to_port: 443 cidr_ip: {{ host }} {% endfor %} vpc_id: "{{ aws_vpc_id }}" state: present ``` ``` ansible-playbook playbooks/createSG.yml ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> A security group created with two ingress rules. Any rules not part of the play to be purged. ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes below --> ``` PLAY [localhost] *************************************************************** TASK [Create security group with IP blocks] ******************* fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Invalid rule parameter '-'"} NO MORE HOSTS LEFT ************************************************************* PLAY RECAP ********************************************************************* localhost : ok=0 changed=0 unreachable=0 failed=1 ```
main
error with group module invalid rule parameter issue type bug report component name group ansible version ansible config file users user git ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables retry files enabled false gathering smart nocows roles path etc ansible roles os environment mention the os you are running ansible from and the os you are managing or say โ€œn aโ€ for anything that is not platform specific macos sierraโ€Ž version summary trying to generate rules using throws an error about invalid parameter steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used createsg yml hosts localhost connection local gather facts no vars aws profile name local aws profile aws vpc id vpc ip whitelist tasks name create security group with ip blocks group profile aws profile name region us east description whitelist name sg whitelist purge rules true rules for host in ip whitelist proto tcp from port to port cidr ip host endfor vpc id aws vpc id state present ansible playbook playbooks createsg yml expected results a security group created with two ingress rules any rules not part of the play to be purged actual results play task fatal failed changed false failed true msg invalid rule parameter no more hosts left play recap localhost ok changed unreachable failed
1
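The report above appears to fail because the `rules:` value rendered from the `|` block reaches the module as one multi-line string rather than a list of rule dictionaries, which is consistent with the "Invalid rule parameter '-'" message. A minimal workaround sketch (not taken from the original report), assuming the same `ip_whitelist`, `aws_profile_name` and `aws_vpc_id` variables, is to accumulate the rules as a native list with `set_fact` and pass that list to `ec2_group`:

```yaml
# Hypothetical workaround sketch: build the rules as a real list of dicts
# instead of a templated string, then hand the list to ec2_group.
- name: Build whitelist rules as a list
  set_fact:
    whitelist_rules: "{{ whitelist_rules | default([]) + [ {'proto': 'tcp', 'from_port': 443, 'to_port': 443, 'cidr_ip': item} ] }}"
  with_items: "{{ ip_whitelist }}"

- name: Create security group with IP blocks
  ec2_group:
    profile: "{{ aws_profile_name }}"
    region: us-east-1
    name: sg-whitelist
    description: "Whitelist"
    vpc_id: "{{ aws_vpc_id }}"
    purge_rules: true
    rules: "{{ whitelist_rules }}"
```

Because `whitelist_rules` is assembled as a list of dictionaries, the module receives the structure it expects instead of a block of text beginning with `-`.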
269,686
23,459,138,019
IssuesEvent
2022-08-16 11:37:53
wazuh/wazuh
https://api.github.com/repos/wazuh/wazuh
opened
Release 4.3.7 - Release Candidate 1 - Demo use cases
team/cicd release test
### Demo use cases information | | | |---------------------------------|--------------------------------------------| | **Main release candidate issue** | #14562 | | **Version** | 4.3.7 | | **Release candidate #** | RC1 | | **Tag** | https://github.com/wazuh/wazuh/tree/v4.3.7-rc1 | | **Previous Demo use cases** | -- | ## Checks Status | Result | Use case | Issues :--: | :--: | -- | -- | ⚫ | ⚫ | Audit | ⚫ | ⚫ | AWS Wodle | ⚫ | ⚫ | Brute force | ⚫ | ⚫ | Docker | ⚫ | ⚫ | Emotet | ⚫ | ⚫ | FIM | ⚫ | ⚫ | IP Reputation | ⚫ | ⚫ | Netcat | ⚫ | ⚫ | Osquery | ⚫ | ⚫ | Shellshock | ⚫ | ⚫ | SQL Injection | ⚫ | ⚫ | Slack | ⚫ | ⚫ | Suricata | ⚫ | ⚫ | Trojan | ⚫ | ⚫ | Virustotal | ⚫ | ⚫ | Vulnerability Detector | ⚫ | ⚫ | Yara | ⚫ | ⚫ | Windows Defender | Result legend: ⚫ - Not started 🕐 - Pending/In progress ✔️ - Results Ready ⚠️ - Review required Status legend: ⚫ - None 🔴 - Rejected 🟢 - Approved ## Auditors validation In order to close and proceed with release or the next candidate version, the following auditors must give the green light to this RC. - [ ] @alberpilot - [ ] @teddytpc1
1.0
Release 4.3.7 - Release Candidate 1 - Demo use cases - ### Demo use cases information | | | |---------------------------------|--------------------------------------------| | **Main release candidate issue** | #14562 | | **Version** | 4.3.7 | | **Release candidate #** | RC1 | | **Tag** | https://github.com/wazuh/wazuh/tree/v4.3.7-rc1 | | **Previous Demo use cases** | -- | ## Checks Status | Result | Use case | Issues :--: | :--: | -- | -- | ⚫ | ⚫ | Audit | ⚫ | ⚫ | AWS Wodle | ⚫ | ⚫ | Brute force | ⚫ | ⚫ | Docker | ⚫ | ⚫ | Emotet | ⚫ | ⚫ | FIM | ⚫ | ⚫ | IP Reputation | ⚫ | ⚫ | Netcat | ⚫ | ⚫ | Osquery | ⚫ | ⚫ | Shellshock | ⚫ | ⚫ | SQL Injection | ⚫ | ⚫ | Slack | ⚫ | ⚫ | Suricata | ⚫ | ⚫ | Trojan | ⚫ | ⚫ | Virustotal | ⚫ | ⚫ | Vulnerability Detector | ⚫ | ⚫ | Yara | ⚫ | ⚫ | Windows Defender | Result legend: ⚫ - Not started 🕐 - Pending/In progress ✔️ - Results Ready ⚠️ - Review required Status legend: ⚫ - None 🔴 - Rejected 🟢 - Approved ## Auditors validation In order to close and proceed with release or the next candidate version, the following auditors must give the green light to this RC. - [ ] @alberpilot - [ ] @teddytpc1
non_main
release release candidate demo use cases demo use cases information main release candidate issue version release candidate tag previous demo use cases checks status result use case issues ⚫ ⚫ audit ⚫ ⚫ aws wodle ⚫ ⚫ brute force ⚫ ⚫ docker ⚫ ⚫ emotet ⚫ ⚫ fim ⚫ ⚫ ip reputation ⚫ ⚫ netcat ⚫ ⚫ osquery ⚫ ⚫ shellshock ⚫ ⚫ sql injection ⚫ ⚫ slack ⚫ ⚫ suricata ⚫ ⚫ trojan ⚫ ⚫ virustotal ⚫ ⚫ vulnerability detector ⚫ ⚫ yara ⚫ ⚫ windows defender result legend ⚫ not started 🕐 pending in progress ✔️ results ready ⚠️ review required status legend ⚫ none 🔴 rejected 🟢 approved auditors validation in order to close and proceed with release or the next candidate version the following auditors must give the green light to this rc alberpilot
0
1,415
6,155,316,839
IssuesEvent
2017-06-28 14:32:42
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
docker module always reload container if net parameter ommited
affects_1.9 bug_report cloud docker waiting_on_maintainer
##### Issue Type: Bug Report ##### Component Name: _docker module ##### Ansible Version: 1.9.1 ##### Ansible Configuration: [defaults] hash_behaviour=merge host_key_checking=false gathering=no pipelining=yes remote_user=centos become=yes sudo=yes hostfile=hosts forks=30 ##### Environment: CentOS 7 ##### Summary: docker module always reload container if net parameter ommited ##### Steps To Reproduce: [centos@dpt-bb0 ~]$ ansible localhost -c local -s -m docker -a "image=mongo state=reloaded" localhost | success >> { "ansible_facts": { "docker_containers": [ { "Id": "8a125c823d5de6d7fe8d4338f5b68c2324c30ad7a1bbe85023b30f0af242bb3f", "Warnings": null } ] }, "changed": true, "containers": [ { "Id": "8a125c823d5de6d7fe8d4338f5b68c2324c30ad7a1bbe85023b30f0af242bb3f", "Warnings": null } ], "msg": "started 1 container, created 1 container, pulled 1 container.", "reload_reasons": null, "summary": { "created": 1, "killed": 0, "pulled": 1, "removed": 0, "restarted": 0, "started": 1, "stopped": 0 } } [centos@dpt-bb0 ~]$ ansible localhost -c local -s -m docker -a "image=mongo state=reloaded" localhost | success >> { "ansible_facts": { "docker_containers": [ { "Id": "5130248d943ae9485e6287caa601d10a7bb93477df543f5e8b9db145db25da39", "Warnings": null } ] }, "changed": true, "containers": [ { "Id": "5130248d943ae9485e6287caa601d10a7bb93477df543f5e8b9db145db25da39", "Warnings": null } ], "msg": "removed 1 container, stopped 1 container, started 1 container, created 1 container.", "reload_reasons": "net (bridge => )", "summary": { "created": 1, "killed": 0, "pulled": 0, "removed": 1, "restarted": 0, "started": 1, "stopped": 1 } } ##### Expected Results: Second run should not restart container ##### Actual Results: Container always restarted
True
docker module always reload container if net parameter ommited - ##### Issue Type: Bug Report ##### Component Name: _docker module ##### Ansible Version: 1.9.1 ##### Ansible Configuration: [defaults] hash_behaviour=merge host_key_checking=false gathering=no pipelining=yes remote_user=centos become=yes sudo=yes hostfile=hosts forks=30 ##### Environment: CentOS 7 ##### Summary: docker module always reload container if net parameter ommited ##### Steps To Reproduce: [centos@dpt-bb0 ~]$ ansible localhost -c local -s -m docker -a "image=mongo state=reloaded" localhost | success >> { "ansible_facts": { "docker_containers": [ { "Id": "8a125c823d5de6d7fe8d4338f5b68c2324c30ad7a1bbe85023b30f0af242bb3f", "Warnings": null } ] }, "changed": true, "containers": [ { "Id": "8a125c823d5de6d7fe8d4338f5b68c2324c30ad7a1bbe85023b30f0af242bb3f", "Warnings": null } ], "msg": "started 1 container, created 1 container, pulled 1 container.", "reload_reasons": null, "summary": { "created": 1, "killed": 0, "pulled": 1, "removed": 0, "restarted": 0, "started": 1, "stopped": 0 } } [centos@dpt-bb0 ~]$ ansible localhost -c local -s -m docker -a "image=mongo state=reloaded" localhost | success >> { "ansible_facts": { "docker_containers": [ { "Id": "5130248d943ae9485e6287caa601d10a7bb93477df543f5e8b9db145db25da39", "Warnings": null } ] }, "changed": true, "containers": [ { "Id": "5130248d943ae9485e6287caa601d10a7bb93477df543f5e8b9db145db25da39", "Warnings": null } ], "msg": "removed 1 container, stopped 1 container, started 1 container, created 1 container.", "reload_reasons": "net (bridge => )", "summary": { "created": 1, "killed": 0, "pulled": 0, "removed": 1, "restarted": 0, "started": 1, "stopped": 1 } } ##### Expected Results: Second run should not restart container ##### Actual Results: Container always restarted
main
docker module always reload container if net parameter ommited issue type bug report component name docker module ansible version ansible configuration hash behaviour merge host key checking false gathering no pipelining yes remote user centos become yes sudo yes hostfile hosts forks environment centos summary docker module always reload container if net parameter ommited steps to reproduce ansible localhost c local s m docker a image mongo state reloaded localhost success ansible facts docker containers id warnings null changed true containers id warnings null msg started container created container pulled container reload reasons null summary created killed pulled removed restarted started stopped ansible localhost c local s m docker a image mongo state reloaded localhost success ansible facts docker containers id warnings null changed true containers id warnings null msg removed container stopped container started container created container reload reasons net bridge summary created killed pulled removed restarted started stopped expected results second run should not restart container actual results container always restarted
1
3,243
12,368,706,966
IssuesEvent
2020-05-18 14:13:32
Kashdeya/Tiny-Progressions
https://api.github.com/repos/Kashdeya/Tiny-Progressions
closed
Suggestion: Lamps Texture
Version not Maintainted
I love the lamps with the glass and torch, however it would be nice if the torch did not render. When building walls or ceilings out of it looks ugly. Could there be a config option to turn off the rendering of the torch?
True
Suggestion: Lamps Texture - I love the lamps with the glass and torch, however it would be nice if the torch did not render. When building walls or ceilings out of it looks ugly. Could there be a config option to turn off the rendering of the torch?
main
suggestion lamps texture i love the lamps with the glass and torch however it would be nice if the torch did not render when building walls or ceilings out of it looks ugly could there be a config option to turn off the rendering of the torch
1
154,713
13,565,103,725
IssuesEvent
2020-09-18 11:08:19
tbouffard/playground-release-drafter-and-gh-pages
https://api.github.com/repos/tbouffard/playground-release-drafter-and-gh-pages
closed
gh-pages: setup a tbouffard user site
documentation
This is to confirm that https://tbouffard.github.io/playground-release-drafter-and-gh-pages/ will still be available like today
1.0
gh-pages: setup a tbouffard user site - This is to confirm that https://tbouffard.github.io/playground-release-drafter-and-gh-pages/ will still be available like today
non_main
gh pages setup a tbouffard user site this is to confirm that will still be available like today
0
2,594
8,820,333,066
IssuesEvent
2019-01-01 11:09:01
dzavalishin/mqtt_udp
https://api.github.com/repos/dzavalishin/mqtt_udp
opened
Add realistic speed mode to seq_storm_send.py
Maintain good first issue help wanted
Add command line flags and some sleep between sending packets to have 100 and 1000 packets/sec traffic generation modes.
True
Add realistic speed mode to seq_storm_send.py - Add command line flags and some sleep between sending packets to have 100 and 1000 packets/sec traffic generation modes.
main
add realistic speed mode to seq storm send py add command line flags and some sleep between sending packets to have and packets sec traffic generation modes
1
658,042
21,876,435,907
IssuesEvent
2022-05-19 10:34:26
bounswe/bounswe2022group7
https://api.github.com/repos/bounswe/bounswe2022group7
closed
[Practice App] Art Item Related Features
Status: Pending Review Priority: High Difficulty: Hard Type: Implementation
In [Meeting #12](https://github.com/bounswe/bounswe2022group7/wiki/Meeting-Notes-%2312), we assigned features to team members. I was assigned to handle the art item related features. # Requirements - Users shall be able to view an art item in the platform - Users shall be able to view dominant colors in an art item - Artists shall be able to create an art item in the platform - The art item shall include a name, a description, and an image # Use Case Diagram <img width="728" alt="art_item_use_case" src="https://user-images.githubusercontent.com/56476673/168831181-62dca0f3-cbb5-4751-ae2c-9741da922b56.png"> # Class Diagram <img width="1032" alt="art_item_class" src="https://user-images.githubusercontent.com/56476673/168836003-99fb9341-77ff-4737-9085-53c93328afa4.png"> # Sequence Diagrams ## View Art Item ![art_item_seq_view](https://user-images.githubusercontent.com/56476673/168857026-68660908-f550-46c3-b1ba-498ef90d3a30.png) ## Create Art Item ![art_item_seq_create](https://user-images.githubusercontent.com/56476673/168857199-f3bb1a47-2013-4ff2-b2e8-e1c8baab1f40.png) # Feature Implementations - [x] **Create a new branch based on the [practice_app](https://github.com/bounswe/bounswe2022group7/tree/practice_app) branch** According to the naming convention that we discussed, the name of the branch shall be `practice-app/feature/art-items`. - [x] Create an endpoint for creating art items - [x] Create an endpoint for viewing a particular art item - [x] Connect the frontend templates to the api endpoints - [x] Update the sidebar in the [base.html](https://github.com/bounswe/bounswe2022group7/blob/practice_app/practice-app/website/templates/base.html) by adding a link to the art item creation page - [x] Create a PR and add @AlicanM as a reviewer. **Deadline: 18/05/2022 23:59** # Testing & Documentation Once all the work is completed, I need to add: - [x] #183 - [x] #200 - [x] Create .yml files for Swagger API documentation (See #192 for tracking) These subtasks mentioned above shall be converted to separate issues with their deadlines included. # Conclusion Once all the tasks are completed, a PR will be created and @AlicanM will be added as a reviewer.
1.0
[Practice App] Art Item Related Features - In [Meeting #12](https://github.com/bounswe/bounswe2022group7/wiki/Meeting-Notes-%2312), we assigned features to team members. I was assigned to handle the art item related features. # Requirements - Users shall be able to view an art item in the platform - Users shall be able to view dominant colors in an art item - Artists shall be able to create an art item in the platform - The art item shall include a name, a description, and an image # Use Case Diagram <img width="728" alt="art_item_use_case" src="https://user-images.githubusercontent.com/56476673/168831181-62dca0f3-cbb5-4751-ae2c-9741da922b56.png"> # Class Diagram <img width="1032" alt="art_item_class" src="https://user-images.githubusercontent.com/56476673/168836003-99fb9341-77ff-4737-9085-53c93328afa4.png"> # Sequence Diagrams ## View Art Item ![art_item_seq_view](https://user-images.githubusercontent.com/56476673/168857026-68660908-f550-46c3-b1ba-498ef90d3a30.png) ## Create Art Item ![art_item_seq_create](https://user-images.githubusercontent.com/56476673/168857199-f3bb1a47-2013-4ff2-b2e8-e1c8baab1f40.png) # Feature Implementations - [x] **Create a new branch based on the [practice_app](https://github.com/bounswe/bounswe2022group7/tree/practice_app) branch** According to the naming convention that we discussed, the name of the branch shall be `practice-app/feature/art-items`. - [x] Create an endpoint for creating art items - [x] Create an endpoint for viewing a particular art item - [x] Connect the frontend templates to the api endpoints - [x] Update the sidebar in the [base.html](https://github.com/bounswe/bounswe2022group7/blob/practice_app/practice-app/website/templates/base.html) by adding a link to the art item creation page - [x] Create a PR and add @AlicanM as a reviewer. **Deadline: 18/05/2022 23:59** # Testing & Documentation Once all the work is completed, I need to add: - [x] #183 - [x] #200 - [x] Create .yml files for Swagger API documentation (See #192 for tracking) These subtasks mentioned above shall be converted to separate issues with their deadlines included. # Conclusion Once all the tasks are completed, a PR will be created and @AlicanM will be added as a reviewer.
non_main
art item related features in we assigned features to team members i was assigned to handle the art item related features requirements users shall be able to view an art item in the platform users shall be able to view dominant colors in an art item artists shall be able to create an art item in the platform the art item shall include a name a description and an image use case diagram img width alt art item use case src class diagram img width alt art item class src sequence diagrams view art item create art item feature implementations create a new branch based on the branch according to the naming convention that we discussed the name of the branch shall be practice app feature art items create an endpoint for creating art items create an endpoint for viewing a particular art item connect the frontend templates to the api endpoints update the sidebar in the by adding a link to the art item creation page create a pr and add alicanm as a reviewer deadline testing documentation once all the work is completed i need to add create yml files for swagger api documentation see for tracking these subtasks mentioned above shall be converted to separate issues with their deadlines included conclusion once all the tasks are completed a pr will be created and alicanm will be added as a reviewer
0
1,741
6,574,889,313
IssuesEvent
2017-09-11 14:24:17
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
Check if service is installed just with conditionals
affects_2.2 feature_idea waiting_on_maintainer
##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Feature Idea ##### COMPONENT NAME service ##### ANSIBLE VERSION <!--- Paste verbatim output from โ€œansible --versionโ€ between quotes below --> ``` ansible 2.2.0.0 (detached HEAD bce9bfce51) last updated 2016/10/20 17:00:41 (GMT +000) ``` ##### SUMMARY When I want to ensure, that a certain program is NOT running, I use the `service` module and set the parameter `state: stopped`. This works perfectly fine, when the service is really installed. But in many cases, the service isn't installed and than this module just fails. This is really annoying, because a task must be added before the service task, which checks somehow if the service is even installed. It would be very helpful, to somehow extend the service-module to stop a service only if it is available. In my point of view the most generic solution would be to add some Utility usable in the when block: ``` - service name: topbeat state: stopped when: services.state.topbeat is present ```
True
Check if service is installed just with conditionals - ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Feature Idea ##### COMPONENT NAME service ##### ANSIBLE VERSION <!--- Paste verbatim output from โ€œansible --versionโ€ between quotes below --> ``` ansible 2.2.0.0 (detached HEAD bce9bfce51) last updated 2016/10/20 17:00:41 (GMT +000) ``` ##### SUMMARY When I want to ensure, that a certain program is NOT running, I use the `service` module and set the parameter `state: stopped`. This works perfectly fine, when the service is really installed. But in many cases, the service isn't installed and than this module just fails. This is really annoying, because a task must be added before the service task, which checks somehow if the service is even installed. It would be very helpful, to somehow extend the service-module to stop a service only if it is available. In my point of view the most generic solution would be to add some Utility usable in the when block: ``` - service name: topbeat state: stopped when: services.state.topbeat is present ```
main
check if service is installed just with conditionals issue type feature idea component name service ansible version ansible detached head last updated gmt summary when i want to ensure that a certain program is not running i use the service module and set the parameter state stopped this works perfectly fine when the service is really installed but in many cases the service isn t installed and than this module just fails this is really annoying because a task must be added before the service task which checks somehow if the service is even installed it would be very helpful to somehow extend the service module to stop a service only if it is available in my point of view the most generic solution would be to add some utility usable in the when block service name topbeat state stopped when services state topbeat is present
1
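The behaviour requested above — stop a service only when it is actually installed — can be approximated today with a registered probe plus a `when:` guard. A minimal sketch, assuming a systemd-based host and the `topbeat` service named in the request (neither assumption comes from the `service` module itself):

```yaml
# Sketch only: probe for the unit first, then guard the service task.
- name: Check whether the topbeat unit is installed
  command: systemctl list-unit-files topbeat.service
  register: topbeat_unit
  changed_when: false
  failed_when: false

- name: Stop topbeat only if the unit exists
  service:
    name: topbeat
    state: stopped
  when: "'topbeat.service' in topbeat_unit.stdout"
```

The probe never fails and never reports a change, so the play stays idempotent whether or not the service is present.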
1,015
4,794,398,005
IssuesEvent
2016-10-31 20:55:05
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
Synchronize module shouldn't use ssh if src and dest are on the same machine.
affects_2.1 bug_report waiting_on_maintainer
<!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> synchronize ##### ANSIBLE VERSION <!--- Paste verbatim output from โ€œansible --versionโ€ between quotes below --> ``` ansible 2.1.1.0 config file = /home/user/.ansible.cfg configured module search path = ['/usr/share/ansible'] ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say โ€œN/Aโ€ for anything that is not platform-specific. --> Fedora 24 ##### SUMMARY <!--- Explain the problem briefly --> When using synchronize module to sync two files on a single remote host, it shouldn't involve SSH, otherwise it may hang indefinitely. In my case, I didn't set up ssh-agent forward and I don't want to (because I heard there are security risk), so my remote host can't SSH into itself, which causes it hang indefinitely. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> <!--- Paste example playbooks or commands between quotes below --> ``` - name: Check if the file /etc/redhat-release changed. synchronize: src=/etc/redhat-release dest=/root/.ansible_redhat-release_of_last_reboot checksum=yes copy_links=yes delegate_to: "{{ inventory_hostname }}" register: redhat_release ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> It should just work without hang. It shouldn't involve any SSH. ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> It hangs indefinitely and I have to Ctrl-C to kill it. <!--- Paste verbatim command output between quotes below --> ``` ```
True
Synchronize module shouldn't use ssh if src and dest are on the same machine. - <!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> synchronize ##### ANSIBLE VERSION <!--- Paste verbatim output from โ€œansible --versionโ€ between quotes below --> ``` ansible 2.1.1.0 config file = /home/user/.ansible.cfg configured module search path = ['/usr/share/ansible'] ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say โ€œN/Aโ€ for anything that is not platform-specific. --> Fedora 24 ##### SUMMARY <!--- Explain the problem briefly --> When using synchronize module to sync two files on a single remote host, it shouldn't involve SSH, otherwise it may hang indefinitely. In my case, I didn't set up ssh-agent forward and I don't want to (because I heard there are security risk), so my remote host can't SSH into itself, which causes it hang indefinitely. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> <!--- Paste example playbooks or commands between quotes below --> ``` - name: Check if the file /etc/redhat-release changed. synchronize: src=/etc/redhat-release dest=/root/.ansible_redhat-release_of_last_reboot checksum=yes copy_links=yes delegate_to: "{{ inventory_hostname }}" register: redhat_release ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> It should just work without hang. It shouldn't involve any SSH. ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> It hangs indefinitely and I have to Ctrl-C to kill it. <!--- Paste verbatim command output between quotes below --> ``` ```
main
synchronize module shouldn t use ssh if src and dest are on the same machine issue type bug report component name synchronize ansible version ansible config file home user ansible cfg configured module search path configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say โ€œn aโ€ for anything that is not platform specific fedora summary when using synchronize module to sync two files on a single remote host it shouldn t involve ssh otherwise it may hang indefinitely in my case i didn t set up ssh agent forward and i don t want to because i heard there are security risk so my remote host can t ssh into itself which causes it hang indefinitely steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name check if the file etc redhat release changed synchronize src etc redhat release dest root ansible redhat release of last reboot checksum yes copy links yes delegate to inventory hostname register redhat release expected results it should just work without hang it shouldn t involve any ssh actual results it hangs indefinitely and i have to ctrl c to kill it
1
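For the same-host case described above, one way to avoid the hang is to not involve `synchronize` (and therefore SSH) at all and run rsync on the target through the `command` module. A minimal sketch, assuming rsync is installed on the managed host and reusing the paths from the report; detecting change via `--itemize-changes` output is an assumption of this sketch, not documented module behaviour:

```yaml
# Sketch only: compare/copy the file locally on the target, no SSH involved.
- name: Check if the file /etc/redhat-release changed
  command: >
    rsync --checksum --copy-links --itemize-changes
    /etc/redhat-release /root/.ansible_redhat-release_of_last_reboot
  register: redhat_release
  changed_when: redhat_release.stdout != ''
```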
71,518
13,671,350,239
IssuesEvent
2020-09-29 06:50:11
home-assistant/brands
https://api.github.com/repos/home-assistant/brands
closed
NZBGet is missing brand images
domain-missing has-codeowner
## The problem The NZBGet integration does not have brand images in this repository. We recently started this Brands repository, to create a centralized storage of all brand-related images. These images are used on our website and the Home Assistant frontend. The following images are missing and would ideally be added: - `src/nzbget/icon.png` - `src/nzbget/logo.png` - `src/nzbget/icon@2x.png` - `src/nzbget/logo@2x.png` For image specifications and requirements, please see [README.md](https://github.com/home-assistant/brands/blob/master/README.md). ## Updating the documentation repository Our documentation repository already has a logo for this integration, however, it does not meet the image requirements of this new Brands repository. If adding images to this repository, please open up a PR to the documentation repository as well, removing the `logo: nzbget.png` line from this file: <https://github.com/home-assistant/home-assistant.io/blob/current/source/_integrations/nzbget.markdown> **Note**: The documentation PR needs to be opened against the `current` branch. **Note2**: Please leave the actual logo file in the documentation repository. It will be cleaned up differently. ## Additional information For more information about this repository, read the [README.md](https://github.com/home-assistant/brands/blob/master/README.md) file of this repository. It contains information on how this repository works, and image specification and requirements. ## Codeowner mention Hi there, @chriscla! Mind taking a look at this issue as it is with an integration (nzbget) you are listed as a [codeowner](https://github.com/home-assistant/core/blob/dev/homeassistant/components/nzbget/manifest.json) for? Thanks! Resolving this issue is not limited to codeowners! If you want to help us out, feel free to resolve this issue! Thanks already!
1.0
NZBGet is missing brand images - ## The problem The NZBGet integration does not have brand images in this repository. We recently started this Brands repository, to create a centralized storage of all brand-related images. These images are used on our website and the Home Assistant frontend. The following images are missing and would ideally be added: - `src/nzbget/icon.png` - `src/nzbget/logo.png` - `src/nzbget/icon@2x.png` - `src/nzbget/logo@2x.png` For image specifications and requirements, please see [README.md](https://github.com/home-assistant/brands/blob/master/README.md). ## Updating the documentation repository Our documentation repository already has a logo for this integration, however, it does not meet the image requirements of this new Brands repository. If adding images to this repository, please open up a PR to the documentation repository as well, removing the `logo: nzbget.png` line from this file: <https://github.com/home-assistant/home-assistant.io/blob/current/source/_integrations/nzbget.markdown> **Note**: The documentation PR needs to be opened against the `current` branch. **Note2**: Please leave the actual logo file in the documentation repository. It will be cleaned up differently. ## Additional information For more information about this repository, read the [README.md](https://github.com/home-assistant/brands/blob/master/README.md) file of this repository. It contains information on how this repository works, and image specification and requirements. ## Codeowner mention Hi there, @chriscla! Mind taking a look at this issue as it is with an integration (nzbget) you are listed as a [codeowner](https://github.com/home-assistant/core/blob/dev/homeassistant/components/nzbget/manifest.json) for? Thanks! Resolving this issue is not limited to codeowners! If you want to help us out, feel free to resolve this issue! Thanks already!
non_main
nzbget is missing brand images the problem the nzbget integration does not have brand images in this repository we recently started this brands repository to create a centralized storage of all brand related images these images are used on our website and the home assistant frontend the following images are missing and would ideally be added src nzbget icon png src nzbget logo png src nzbget icon png src nzbget logo png for image specifications and requirements please see updating the documentation repository our documentation repository already has a logo for this integration however it does not meet the image requirements of this new brands repository if adding images to this repository please open up a pr to the documentation repository as well removing the logo nzbget png line from this file note the documentation pr needs to be opened against the current branch please leave the actual logo file in the documentation repository it will be cleaned up differently additional information for more information about this repository read the file of this repository it contains information on how this repository works and image specification and requirements codeowner mention hi there chriscla mind taking a look at this issue as it is with an integration nzbget you are listed as a for thanks resolving this issue is not limited to codeowners if you want to help us out feel free to resolve this issue thanks already
0
2,323
8,308,913,394
IssuesEvent
2018-09-24 01:55:48
invertase/react-native-firebase
https://api.github.com/repos/invertase/react-native-firebase
closed
Re: Firestore call causes folly::toJson error
add-test-case await-maintainer-feedback firestore ios ๐Ÿž bug ๐Ÿ‘‰ await-user-feedback
### Issue Hello. I'm trying to use this library in conjunction with react-native-maps. The later package (which is in the community react repos!) is insistent on using Cocoapods and in fact doesn't work using `react-native link` (reference: https://github.com/react-community/react-native-maps/blob/master/docs/installation.md )... which sods law is one of the unrecommended ways to use this library... Bit of a predicment i decided to follow the Cocoapods route as I've used this Firebase library on couple of projects this way and haven't noticed any issues... till now... So i can get the app to run, the maps to load and even initially get data from the Firebase Firestore database (even though it gives me red screen of death - see below). Trying to figure out where this error is coming from, it simply is this one line of code of me just getting reference of one of collection in the Firestore DB: ``` const { firestore: firestoreSingleton } = firebase const firestore = firestoreSingleton() const ref = firestore.collection('SomeCollection') export const getSomething = async () => { const data = await ref.get() // THIS LINE SHOWS THE RED SCREEN OF DEATH return data } ``` This is my pod file: ``` target 'App' do rn_path = '../node_modules/react-native' rn_maps_path = '../node_modules/react-native-maps' # See http://facebook.github.io/react-native/docs/integration-with-existing-apps.html#configuring-cocoapods-dependencies pod 'yoga', path: "#{rn_path}/ReactCommon/yoga/yoga.podspec" pod 'React', path: rn_path, subspecs: [ 'Core', 'CxxBridge', 'DevSupport', 'RCTActionSheet', 'RCTAnimation', 'RCTGeolocation', 'RCTImage', 'RCTLinkingIOS', 'RCTNetwork', 'RCTSettings', 'RCTText', 'RCTVibration', 'RCTWebSocket', ] # React Native third party dependencies podspecs pod 'DoubleConversion', :podspec => "#{rn_path}/third-party-podspecs/DoubleConversion.podspec" pod 'glog', :podspec => "#{rn_path}/third-party-podspecs/glog.podspec" # If you are using React Native <0.54, you will get the following error: # "The name of the given podspec `GLog` doesn't match the expected one `glog`" # Use the following line instead: #pod 'GLog', :podspec => "#{rn_path}/third-party-podspecs/GLog.podspec" pod 'Folly', :podspec => "#{rn_path}/third-party-podspecs/Folly.podspec" # react-native-maps dependencies pod 'react-native-maps', path: rn_maps_path # pod 'react-native-google-maps', path: rn_maps_path # Remove this line if you don't want to support GoogleMaps on iOS # pod 'GoogleMaps' # Remove this line if you don't want to support GoogleMaps on iOS # pod 'Google-Maps-iOS-Utils' # Remove this line if you don't want to support GoogleMaps on iOS # Firebase pod 'Firebase/Core', '~> 4.13.0' pod 'Firebase/Firestore' # pod 'Fabric', '~> 1.7.6't # pod 'Crashlytics', '~> 3.10.1' pod 'RNFirebase', :path => '../node_modules/react-native-firebase/ios' # pod 'RNFS', :path => '../node_modules/react-native-fs' pod 'react-native-fast-image', :path => '../node_modules/react-native-fast-image' pod 'RNSVG', :path => '../node_modules/react-native-svg' end post_install do |installer| installer.pods_project.targets.each do |target| if target.name == 'react-native-google-maps' target.build_configurations.each do |config| config.build_settings['CLANG_ENABLE_MODULES'] = 'No' end end if target.name == "React" target.remove_from_project end end end ``` <!--- Please write your issue here, provide as much detail as you can, code snippets, key files which will help us to debug such as your `Podfile` and/or `app/build.gradle` file). --> ### Environment 1. 
Application Target Platform: **iOS only.** 2. Development Operating System: **macOS High Sierra** 3. Build Tools: **Xcode Version 9.4** 4. `React Native` version: **0.54.4** 5. `React Native Firebase` Version: **4.1.0** 6. `Firebase` Module: **Core & Firestore** 7. Are you using `typescript`? No This is the red screen of death error: ![simulator screen shot - iphone x - 2018-07-30 at 15 18 17](https://user-images.githubusercontent.com/10895271/43404200-d137097a-940e-11e8-8fb6-e105c4d48928.png)
True
Re: Firestore call causes folly::toJson error - ### Issue Hello. I'm trying to use this library in conjunction with react-native-maps. The later package (which is in the community react repos!) is insistent on using Cocoapods and in fact doesn't work using `react-native link` (reference: https://github.com/react-community/react-native-maps/blob/master/docs/installation.md )... which sods law is one of the unrecommended ways to use this library... Bit of a predicment i decided to follow the Cocoapods route as I've used this Firebase library on couple of projects this way and haven't noticed any issues... till now... So i can get the app to run, the maps to load and even initially get data from the Firebase Firestore database (even though it gives me red screen of death - see below). Trying to figure out where this error is coming from, it simply is this one line of code of me just getting reference of one of collection in the Firestore DB: ``` const { firestore: firestoreSingleton } = firebase const firestore = firestoreSingleton() const ref = firestore.collection('SomeCollection') export const getSomething = async () => { const data = await ref.get() // THIS LINE SHOWS THE RED SCREEN OF DEATH return data } ``` This is my pod file: ``` target 'App' do rn_path = '../node_modules/react-native' rn_maps_path = '../node_modules/react-native-maps' # See http://facebook.github.io/react-native/docs/integration-with-existing-apps.html#configuring-cocoapods-dependencies pod 'yoga', path: "#{rn_path}/ReactCommon/yoga/yoga.podspec" pod 'React', path: rn_path, subspecs: [ 'Core', 'CxxBridge', 'DevSupport', 'RCTActionSheet', 'RCTAnimation', 'RCTGeolocation', 'RCTImage', 'RCTLinkingIOS', 'RCTNetwork', 'RCTSettings', 'RCTText', 'RCTVibration', 'RCTWebSocket', ] # React Native third party dependencies podspecs pod 'DoubleConversion', :podspec => "#{rn_path}/third-party-podspecs/DoubleConversion.podspec" pod 'glog', :podspec => "#{rn_path}/third-party-podspecs/glog.podspec" # If you are using React Native <0.54, you will get the following error: # "The name of the given podspec `GLog` doesn't match the expected one `glog`" # Use the following line instead: #pod 'GLog', :podspec => "#{rn_path}/third-party-podspecs/GLog.podspec" pod 'Folly', :podspec => "#{rn_path}/third-party-podspecs/Folly.podspec" # react-native-maps dependencies pod 'react-native-maps', path: rn_maps_path # pod 'react-native-google-maps', path: rn_maps_path # Remove this line if you don't want to support GoogleMaps on iOS # pod 'GoogleMaps' # Remove this line if you don't want to support GoogleMaps on iOS # pod 'Google-Maps-iOS-Utils' # Remove this line if you don't want to support GoogleMaps on iOS # Firebase pod 'Firebase/Core', '~> 4.13.0' pod 'Firebase/Firestore' # pod 'Fabric', '~> 1.7.6't # pod 'Crashlytics', '~> 3.10.1' pod 'RNFirebase', :path => '../node_modules/react-native-firebase/ios' # pod 'RNFS', :path => '../node_modules/react-native-fs' pod 'react-native-fast-image', :path => '../node_modules/react-native-fast-image' pod 'RNSVG', :path => '../node_modules/react-native-svg' end post_install do |installer| installer.pods_project.targets.each do |target| if target.name == 'react-native-google-maps' target.build_configurations.each do |config| config.build_settings['CLANG_ENABLE_MODULES'] = 'No' end end if target.name == "React" target.remove_from_project end end end ``` <!--- Please write your issue here, provide as much detail as you can, code snippets, key files which will help us to debug such as your `Podfile` and/or 
`app/build.gradle` file). --> ### Environment 1. Application Target Platform: **iOS only.** 2. Development Operating System: **macOS High Sierra** 3. Build Tools: **Xcode Version 9.4** 4. `React Native` version: **0.54.4** 5. `React Native Firebase` Version: **4.1.0** 6. `Firebase` Module: **Core & Firestore** 7. Are you using `typescript`? No This is the red screen of death error: ![simulator screen shot - iphone x - 2018-07-30 at 15 18 17](https://user-images.githubusercontent.com/10895271/43404200-d137097a-940e-11e8-8fb6-e105c4d48928.png)
main
re firestore call causes folly tojson error issue hello i m trying to use this library in conjunction with react native maps the later package which is in the community react repos is insistent on using cocoapods and in fact doesn t work using react native link reference which sods law is one of the unrecommended ways to use this library bit of a predicment i decided to follow the cocoapods route as i ve used this firebase library on couple of projects this way and haven t noticed any issues till now so i can get the app to run the maps to load and even initially get data from the firebase firestore database even though it gives me red screen of death see below trying to figure out where this error is coming from it simply is this one line of code of me just getting reference of one of collection in the firestore db const firestore firestoresingleton firebase const firestore firestoresingleton const ref firestore collection somecollection export const getsomething async const data await ref get this line shows the red screen of death return data this is my pod file target app do rn path node modules react native rn maps path node modules react native maps see pod yoga path rn path reactcommon yoga yoga podspec pod react path rn path subspecs core cxxbridge devsupport rctactionsheet rctanimation rctgeolocation rctimage rctlinkingios rctnetwork rctsettings rcttext rctvibration rctwebsocket react native third party dependencies podspecs pod doubleconversion podspec rn path third party podspecs doubleconversion podspec pod glog podspec rn path third party podspecs glog podspec if you are using react native you will get the following error the name of the given podspec glog doesn t match the expected one glog use the following line instead pod glog podspec rn path third party podspecs glog podspec pod folly podspec rn path third party podspecs folly podspec react native maps dependencies pod react native maps path rn maps path pod react native google maps path rn maps path remove this line if you don t want to support googlemaps on ios pod googlemaps remove this line if you don t want to support googlemaps on ios pod google maps ios utils remove this line if you don t want to support googlemaps on ios firebase pod firebase core pod firebase firestore pod fabric t pod crashlytics pod rnfirebase path node modules react native firebase ios pod rnfs path node modules react native fs pod react native fast image path node modules react native fast image pod rnsvg path node modules react native svg end post install do installer installer pods project targets each do target if target name react native google maps target build configurations each do config config build settings no end end if target name react target remove from project end end end environment application target platform ios only development operating system macos high sierra build tools xcode version react native version react native firebase version firebase module core firestore are you using typescript no this is the red screen of death error
1
1,857
6,577,407,463
IssuesEvent
2017-09-12 00:42:00
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
os_router: All interfaces get detached, then re-attached on router update
affects_2.0 bug_report cloud openstack waiting_on_maintainer
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME os_router.py ##### ANSIBLE VERSION ``` ansible 2.0.1.0 ``` ##### OS / ENVIRONMENT NA ##### SUMMARY On a router update, all of the interfaces are detached, then the new set is attached. This causes issues with network stability and requires running expensive API calls for each port. It also causes issues with environments running the l3 ha keepalived vrrp driver. Ports would be detached then attached so fast that the keepalived driver couldn't keep up. This would cause the l3 agent to hang, rendering all l3 services unavailable. ##### STEPS TO REPRODUCE 1. Create a router with internal interfaces using the os_router.py module. 2. Update the internal interfaces list by adding and/or deleting interfaces, or making any other change to its configuration 3. Re-run the playbook. See: https://github.com/ansible/ansible-modules-core/blob/devel/cloud/openstack/os_router.py#L327-L334 ##### EXPECTED RESULTS All of the internal router interfaces will be detached from the router, then the new set will be attached.
True
os_router: All interfaces get detached, then re-attached on router update - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME os_router.py ##### ANSIBLE VERSION ``` ansible 2.0.1.0 ``` ##### OS / ENVIRONMENT NA ##### SUMMARY On a router update, all of the interfaces are detached, then the new set is attached. This causes issues with network stability and requires running expensive API calls for each port. It also causes issues with environments running the l3 ha keepalived vrrp driver. Ports would be detached then attached so fast that the keepalived driver couldn't keep up. This would cause the l3 agent to hang, rendering all l3 services unavailable. ##### STEPS TO REPRODUCE 1. Create a router with internal interfaces using the os_router.py module. 2. Update the internal interfaces list by adding and/or deleting interfaces, or making any other change to its configuration 3. Re-run the playbook. See: https://github.com/ansible/ansible-modules-core/blob/devel/cloud/openstack/os_router.py#L327-L334 ##### EXPECTED RESULTS All of the internal router interfaces will be detached from the router, then the new set will be attached.
main
os router all interfaces get detached then re attached on router update issue type bug report component name os router py ansible version ansible os environment na summary on a router update all the of the interfaces are detached then the new set are attached this causes issues with network stability and requires running expensive api calls for each port it also causes issues with environments running the ha keepalived vrrp driver ports would be detached then attached so fast that the keepalived driver couldn t keep up this would cause the agent to hang rendering all services to be unavailable steps to reproduce create a router with internal interfaces using the os router py module update the internal interfaces list by adding and or deleting interfaces or making any other change to it s configurations re run the playbook see expected results all of the internal router interfaces will be detached from the router then the new set will be attached
1
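The os_router report above argues for reconciling interfaces instead of detaching everything and re-attaching the new set. A minimal sketch of that diff-based approach, in plain Python, with hypothetical subnet lists standing in for the module's real data structures:

```python
# Hypothetical illustration of diff-based interface updates for a router.
# 'current' and 'desired' stand in for the subnet IDs attached to the router
# and the ones listed in the playbook; os_router.py's actual objects differ.

def plan_interface_changes(current, desired):
    """Return (to_detach, to_attach) so unchanged interfaces are left alone."""
    current_set, desired_set = set(current), set(desired)
    to_detach = current_set - desired_set   # only interfaces removed from the playbook
    to_attach = desired_set - current_set   # only interfaces newly added
    return sorted(to_detach), sorted(to_attach)


if __name__ == "__main__":
    current = ["subnet-a", "subnet-b", "subnet-c"]
    desired = ["subnet-b", "subnet-c", "subnet-d"]
    detach, attach = plan_interface_changes(current, desired)
    print("detach:", detach)  # ['subnet-a']
    print("attach:", attach)  # ['subnet-d']
```

With a plan like this, the keepalived-backed l3 routers described in the summary would only see the ports that actually changed, avoiding the detach/attach storm.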
592,827
17,931,783,245
IssuesEvent
2021-09-10 10:12:24
cyntaria/UniPal-Backend
https://api.github.com/repos/cyntaria/UniPal-Backend
opened
As a student, I should be able to get all possible campus spots, so that I can choose one while organizing an activity
Priority: Low Status: Pending user story Type: Feature
### Summary As a `student`, I should be able to **get all possible campus spots**, so that I can **choose one while organizing an activity**. ### Acceptance Criteria **GIVEN** a `student` is *requesting all possible campus spots* in the app **WHEN** the app hits the `/campus-spots` endpoint with a valid GET request **THEN** the app should receive a status `200` **AND** in the response, the following information should be returned: - headers - list of campus spots Sample Request/Sample Response ``` headers: { error: 0, message: "..." } body: [ { campus_spot_id: 0, campus_spot: "Tabba Left Wing" }, { campus_spot_id: 1, campus_spot: "Tabba Right Wing" }, .... ] ``` ### Resources - Development URL: {Here goes a URL to the feature on development API} - Production URL: {Here goes a URL to the feature on production API} ### Dev Notes This endpoint is going to be accessible and work the same way for the admin as well. ### Testing Notes ##### Scenario 1: GET request is successful **GIVEN** a `student` is *requesting all possible campus spots* in the app **WHEN** the app hits the `/campus-spots` endpoint with a valid GET request **THEN** the app should receive a status ***200*** **AND** the body should be an array **AND** the first item of the array should be an object containing the following fields: - campus_spot_id - campus_spot ##### Scenario 2: GET request is unsuccessful **GIVEN** a `student` is *requesting all possible campus spots* in the app **WHEN** the app hits the `/campus-spots` endpoint with a valid GET request **THEN** the app should receive a status `404` **AND** the response headers' `code` parameter should contain "**_NotFoundException_**" #### Scenario 3: GET request is forbidden **GIVEN** a `student` is *requesting all possible campus spots* in the app **WHEN** the app hits the `/campus-spots` endpoint with a valid GET request **AND** the request contains no **authorization token** **THEN** the app should receive a status `401` **AND** the response headers' `code` parameter should contain "**_TokenMissingException_**"
1.0
As a student, I should be able to get all possible campus spots, so that I can choose one while organizing an activity - ### Summary As a `student`, I should be able to **get all possible campus spots**, so that I can **choose one while organizing an activity**. ### Acceptance Criteria **GIVEN** a `student` is *requesting all possible campus spots* in the app **WHEN** the app hits the `/campus-spots` endpoint with a valid GET request **THEN** the app should receive a status `200` **AND** in the response, the following information should be returned: - headers - list of campus spots Sample Request/Sample Response ``` headers: { error: 0, message: "..." } body: [ { campus_spot_id: 0, campus_spot: "Tabba Left Wing" }, { campus_spot_id: 1, campus_spot: "Tabba Right Wing" }, .... ] ``` ### Resources - Development URL: {Here goes a URL to the feature on development API} - Production URL: {Here goes a URL to the feature on production API} ### Dev Notes This endpoint is going to be accessible and work the same way for the admin as well. ### Testing Notes ##### Scenario 1: GET request is successful **GIVEN** a `student` is *requesting all possible campus spots* in the app **WHEN** the app hits the `/campus-spots` endpoint with a valid GET request **THEN** the app should receive a status ***200*** **AND** the body should be an array **AND** the first item of the array should be an object containing the following fields: - campus_spot_id - campus_spot ##### Scenario 2: GET request is unsuccessful **GIVEN** a `student` is *requesting all possible campus spots* in the app **WHEN** the app hits the `/campus-spots` endpoint with a valid GET request **THEN** the app should receive a status `404` **AND** the response headers' `code` parameter should contain "**_NotFoundException_**" #### Scenario 3: GET request is forbidden **GIVEN** a `student` is *requesting all possible campus spots* in the app **WHEN** the app hits the `/campus-spots` endpoint with a valid GET request **AND** the request contains no **authorization token** **THEN** the app should receive a status `401` **AND** the response headers' `code` parameter should contain "**_TokenMissingException_**"
non_main
as a student i should be able to get all possible campus spots so that i can choose one while organizing an activity summary as a student i should be able to get all possible campus spots so that i can choose one while organizing an activity acceptance criteria given a student is requesting all possible campus spots in the app when the app hits the campus spots endpoint with a valid get request then the app should receive a status and in the response the following information should be returned headers list of campus spots sample request sample response headers error message body campus spot id campus spot tabba left wing campus spot id campus spot tabba right wing resources development url here goes a url to the feature on development api production url here goes a url to the feature on production api dev notes this endpoint is going to be accessible and work the same way for the admin as well testing notes scenario get request is successful given a student is requesting all possible campus spots in the app when the app hits the campus spots endpoint with a valid get request then the app should receive a status and the body should be an array and the first item of the array should be an object containing the following fields campus spot id campus spot scenario get request is unsuccessful given a student is requesting all possible campus spots in the app when the app hits the campus spots endpoint with a valid get request then the app should receive a status and the response headers code parameter should contain notfoundexception scenario get request is forbidden given a student is requesting all possible campus spots in the app when the app hits the campus spots endpoint with a valid get request and the request contains no authorization token then the app should receive a status and the response headers code parameter should contain tokenmissingexception
0
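A short client-side sketch of Scenario 1 from the campus-spots story above, written with Python's `requests` library; the base URL, the bearer-token scheme, and the token value are assumptions, since the story does not specify them:

```python
import requests

BASE_URL = "https://api.example.com"   # placeholder; the story does not give a host
TOKEN = "..."                          # placeholder authorization token

# Scenario 1: a valid GET with an authorization header should return 200
# and a list whose first item has campus_spot_id and campus_spot.
resp = requests.get(
    f"{BASE_URL}/campus-spots",
    headers={"Authorization": f"Bearer {TOKEN}"},  # drop this header to exercise Scenario 3 (401)
)

assert resp.status_code == 200
spots = resp.json()
first = spots[0]
assert "campus_spot_id" in first and "campus_spot" in first
print(first["campus_spot_id"], first["campus_spot"])
```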
19,944
14,766,587,323
IssuesEvent
2021-01-10 01:08:13
NCAR/VAPOR
https://api.github.com/repos/NCAR/VAPOR
reopened
Disagreement between TF widget and Colorbar
High Usability
In this case, I created a slice renderer for `dbz` for Lee Orf's tornado dataset with the default TF and added a colorbar. ![screen shot 2019-01-24 at 10 20 05 pm](https://user-images.githubusercontent.com/2772687/51726774-46316880-2026-11e9-9023-56ecdbd67a6d.png)
True
Disagreement between TF widget and Colorbar - In this case, I created a slice renderer for `dbz` for Lee Orf's tornado dataset with the default TF and added a colorbar. ![screen shot 2019-01-24 at 10 20 05 pm](https://user-images.githubusercontent.com/2772687/51726774-46316880-2026-11e9-9023-56ecdbd67a6d.png)
non_main
disagreement between tf widget and colorbar in this case i created a slice renderer for dbz for lee orf s tornado dataset with the default tf and added a colorbar
0
2,086
7,094,224,805
IssuesEvent
2018-01-13 00:45:40
caskroom/homebrew-cask
https://api.github.com/repos/caskroom/homebrew-cask
closed
VLC Install Failed Command
awaiting maintainer feedback
#### General troubleshooting steps - [x] I have retried my command with `--force` and the issue is still present. - [X] I have checked the instructions for [reporting bugs](https://github.com/caskroom/homebrew-cask#reporting-bugs) (or [making requests](https://github.com/caskroom/homebrew-cask#requests)) before opening the issue. - [X] None of the templates was appropriate for my issue, or Iโ€™m not sure. - [X] I ran `brew update-reset && brew update` and retried my command. - [X] I ran `brew doctor`, fixed as many issues as possible and retried my command. - [X] I understand that [if I ignore these instructions, my issue may be closed without review](https://github.com/caskroom/homebrew-cask/blob/master/doc/faq/closing_issues_without_review.md). #### Description of issue When attempting to install VLC, a command fails. #### Output of your command with `--verbose --debug` ``` MacBook-Pro:~ max.schaefer$ brew cask install --verbose --debug vlc ==> Hbc::Installer#install ==> Printing caveats ==> Hbc::Installer#fetch ==> Satisfying dependencies ==> Downloading ==> Downloading https://get.videolan.org/vlc/2.2.8/macosx/vlc-2.2.8.dmg Already downloaded: /Users/max.schaefer/Library/Caches/Homebrew/Cask/vlc--2.2.8.dmg ==> Downloaded to -> /Users/max.schaefer/Library/Caches/Homebrew/Cask/vlc--2.2.8.dmg ==> Verifying download ==> Determining which verifications to run for Cask vlc ==> Checking for verification class Hbc::Verify::Checksum ==> 1 verifications defined Hbc::Verify::Checksum ==> Running verification of class Hbc::Verify::Checksum ==> Verifying checksum for Cask vlc ==> SHA256 checksums match ==> Installing Cask vlc ==> Hbc::Installer#stage ==> Extracting primary container ==> Determining which containers to use based on filetype ==> Checking container class Hbc::Container::Pkg ==> Checking container class Hbc::Container::Ttf ==> Checking container class Hbc::Container::Otf ==> Checking container class Hbc::Container::Air ==> Checking container class Hbc::Container::Cab ==> Checking container class Hbc::Container::Dmg ==> Executing: ["/usr/bin/hdiutil", "imageinfo", "/Users/max.schaefer/Library/Caches/Homebrew/Cask/vlc--2.2.8.dmg"] ==> Using container class Hbc::Container::Dmg for /Users/max.schaefer/Library/Caches/Homebrew/Cask/vlc--2.2.8.dmg ==> Executing: ["/usr/bin/hdiutil", "attach", "-plist", "-nobrowse", "-readonly", "-noidme", "-mountrandom", "/var/folders/qt/m073xcf17fx_8x19w50nnknh0000gn/T/d20180109-13167-xol0xh", "/Users/max.schaefer/Library/Caches/Homebrew/Cask/vlc--2.2.8.dmg"] ==> Executing: ["/usr/bin/find", ".", "-print0"] ==> Executing: ["/usr/bin/mkbom", "-s", "-i", "/var/folders/qt/m073xcf17fx_8x19w50nnknh0000gn/T/20180109-13167-1s9gwuu.list", "--", "/var/folders/qt/m073xcf17fx_8x19w50nnknh0000gn/T/20180109-13167-n3xgj9.bom"] ==> Executing: ["/usr/bin/ditto", "--bom", "/var/folders/qt/m073xcf17fx_8x19w50nnknh0000gn/T/20180109-13167-n3xgj9.bom", "--", "/private/var/folders/qt/m073xcf17fx_8x19w50nnknh0000gn/T/d20180109-13167-xol0xh/dmg.AyBe61", "/usr/local/Caskroom/vlc/2.2.8"] ==> Executing: ["/usr/sbin/diskutil", "eject", "/private/var/folders/qt/m073xcf17fx_8x19w50nnknh0000gn/T/d20180109-13167-xol0xh/dmg.AyBe61"] ==> Creating metadata directory /usr/local/Caskroom/vlc/.metadata/2.2.8/20180109040317.527. ==> Creating metadata subdirectory /usr/local/Caskroom/vlc/.metadata/2.2.8/20180109040317.527/Casks. 
==> Installing artifacts ==> 4 artifact/s defined #<SortedSet:0x007fd549033c40> ==> Installing artifact of class Hbc::Artifact::PreflightBlock ==> Installing artifact of class Hbc::Artifact::App ==> Moving App 'VLC.app' to '/Applications/VLC.app'. ==> Installing artifact of class Hbc::Artifact::Binary ==> Linking Binary 'vlc.wrapper.sh' to '/usr/local/bin/vlc'. ==> Executing: ["/bin/ln", "-h", "-f", "-s", "--", "/usr/local/Caskroom/vlc/2.2.8/vlc.wrapper.sh", "/usr/local/bin/vlc"] ==> Adding com.apple.metadata:kMDItemAlternateNames metadata ==> Executing: ["/usr/bin/xattr", "-p", "com.apple.metadata:kMDItemAlternateNames", "/usr/local/Caskroom/vlc/2.2.8/vlc.wrapper.sh"] ==> Existing metadata is: '' ==> Executing: ["/bin/chmod", "--", "u+rw", "/usr/local/Caskroom/vlc/2.2.8/vlc.wrapper.sh", "/usr/local/Caskroom/vlc/2.2.8/vlc.wrapper.sh"] ==> Executing: ["/usr/bin/xattr", "-w", "com.apple.metadata:kMDItemAlternateNames", "(\"vlc\")", "/usr/local/Caskroom/vlc/2.2.8/vlc.wrapper.sh"] ==> Reverting installation of artifact of class Hbc::Artifact::App ==> Moving App 'VLC.app' back to '/usr/local/Caskroom/vlc/2.2.8/VLC.app'. ==> Reverting installation of artifact of class Hbc::Artifact::PreflightBlock ==> Purging files for version 2.2.8 of Cask vlc Error: Command failed to execute! ==> Failed command: /usr/bin/xattr -w com.apple.metadata:kMDItemAlternateNames ("vlc") /usr/local/Caskroom/vlc/2.2.8/vlc.wrapper.sh ==> Standard Output of failed command: ==> Standard Error of failed command: Traceback (most recent call last): File "/usr/bin/xattr-2.7", line 7, in <module> from pkg_resources import load_entry_point File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources/__init__.py", line 3095, in <module> @_call_aside File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources/__init__.py", line 3081, in _call_aside f(*args, **kwargs) File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources/__init__.py", line 3108, in _initialize_master_working_set working_set = WorkingSet._build_master() File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources/__init__.py", line 658, in _build_master ws.require(__requires__) File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources/__init__.py", line 959, in require needed = self.resolve(parse_requirements(requirements)) File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources/__init__.py", line 846, in resolve raise DistributionNotFound(req, requirers) pkg_resources.DistributionNotFound: The 'xattr==0.6.4' distribution was not found and is required by the application ==> Exit status of failed command: #<Process::Status: pid 13309 exit 1> /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/system_command.rb:70:in `assert_success' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/system_command.rb:36:in `run!' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/system_command.rb:14:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/system_command.rb:18:in `run!' 
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/relocated.rb:70:in `add_altname_metadata' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/symlinked.rb:60:in `create_filesystem_link' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/symlinked.rb:48:in `link' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/binary.rb:7:in `link' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/symlinked.rb:15:in `install_phase' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/installer.rb:174:in `block in install_artifacts' /System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/lib/ruby/2.3.0/set.rb:674:in `each' /System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/lib/ruby/2.3.0/set.rb:674:in `each' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/installer.rb:166:in `install_artifacts' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/installer.rb:80:in `install' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/install.rb:20:in `block in run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/install.rb:14:in `each' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/install.rb:14:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/abstract_command.rb:35:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:98:in `run_command' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:168:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:132:in `run' /usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask' /usr/local/Homebrew/Library/Homebrew/brew.rb:100:in `<main>' Error: Kernel.exit /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:173:in `exit' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:173:in `rescue in run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:156:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:132:in `run' /usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask' /usr/local/Homebrew/Library/Homebrew/brew.rb:100:in `<main>' ``` #### Output of `brew cask doctor` ``` MacBook-Pro:~ max.schaefer$ brew cask doctor ==> Homebrew-Cask Version Homebrew-Cask 1.4.3 caskroom/homebrew-cask (git revision dd5e7; last commit 2018-01-09) ==> macOS 10.13.2 ==> Java N/A ==> Homebrew-Cask Install Location <NONE> ==> Homebrew-Cask Staging Location /usr/local/Caskroom ==> Homebrew-Cask Cached Downloads ~/Library/Caches/Homebrew/Cask (1 files, 35.1MB) ==> Homebrew-Cask Taps: /usr/local/Homebrew/Library/Taps/caskroom/homebrew-cask (3888 casks) /usr/local/Homebrew/Library/Taps/caskroom/homebrew-versions (175 casks) ==> Contents of $LOAD_PATH /usr/local/Homebrew/Library/Homebrew/cask/lib /usr/local/Homebrew/Library/Homebrew /Library/Ruby/Gems/2.3.0/gems/did_you_mean-1.0.0/lib /Library/Ruby/Site/2.3.0 /Library/Ruby/Site/2.3.0/x86_64-darwin17 /Library/Ruby/Site/2.3.0/universal-darwin17 /Library/Ruby/Site /System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/lib/ruby/vendor_ruby/2.3.0 /System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/lib/ruby/vendor_ruby/2.3.0/x86_64-darwin17 /System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/lib/ruby/vendor_ruby/2.3.0/universal-darwin17 /System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/lib/ruby/vendor_ruby /System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/lib/ruby/2.3.0 /System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/lib/ruby/2.3.0/x86_64-darwin17 /System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/lib/ruby/2.3.0/universal-darwin17 ==> Environment Variables 
LC_ALL="en_US.UTF-8" PATH="/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/Homebrew/Library/Homebrew/shims/scm" SHELL="/usr/local/bin/bash" ```
True
VLC Install Failed Command - #### General troubleshooting steps - [x] I have retried my command with `--force` and the issue is still present. - [X] I have checked the instructions for [reporting bugs](https://github.com/caskroom/homebrew-cask#reporting-bugs) (or [making requests](https://github.com/caskroom/homebrew-cask#requests)) before opening the issue. - [X] None of the templates was appropriate for my issue, or Iโ€™m not sure. - [X] I ran `brew update-reset && brew update` and retried my command. - [X] I ran `brew doctor`, fixed as many issues as possible and retried my command. - [X] I understand that [if I ignore these instructions, my issue may be closed without review](https://github.com/caskroom/homebrew-cask/blob/master/doc/faq/closing_issues_without_review.md). #### Description of issue When attempting to install VLC, a command fails. #### Output of your command with `--verbose --debug` ``` MacBook-Pro:~ max.schaefer$ brew cask install --verbose --debug vlc ==> Hbc::Installer#install ==> Printing caveats ==> Hbc::Installer#fetch ==> Satisfying dependencies ==> Downloading ==> Downloading https://get.videolan.org/vlc/2.2.8/macosx/vlc-2.2.8.dmg Already downloaded: /Users/max.schaefer/Library/Caches/Homebrew/Cask/vlc--2.2.8.dmg ==> Downloaded to -> /Users/max.schaefer/Library/Caches/Homebrew/Cask/vlc--2.2.8.dmg ==> Verifying download ==> Determining which verifications to run for Cask vlc ==> Checking for verification class Hbc::Verify::Checksum ==> 1 verifications defined Hbc::Verify::Checksum ==> Running verification of class Hbc::Verify::Checksum ==> Verifying checksum for Cask vlc ==> SHA256 checksums match ==> Installing Cask vlc ==> Hbc::Installer#stage ==> Extracting primary container ==> Determining which containers to use based on filetype ==> Checking container class Hbc::Container::Pkg ==> Checking container class Hbc::Container::Ttf ==> Checking container class Hbc::Container::Otf ==> Checking container class Hbc::Container::Air ==> Checking container class Hbc::Container::Cab ==> Checking container class Hbc::Container::Dmg ==> Executing: ["/usr/bin/hdiutil", "imageinfo", "/Users/max.schaefer/Library/Caches/Homebrew/Cask/vlc--2.2.8.dmg"] ==> Using container class Hbc::Container::Dmg for /Users/max.schaefer/Library/Caches/Homebrew/Cask/vlc--2.2.8.dmg ==> Executing: ["/usr/bin/hdiutil", "attach", "-plist", "-nobrowse", "-readonly", "-noidme", "-mountrandom", "/var/folders/qt/m073xcf17fx_8x19w50nnknh0000gn/T/d20180109-13167-xol0xh", "/Users/max.schaefer/Library/Caches/Homebrew/Cask/vlc--2.2.8.dmg"] ==> Executing: ["/usr/bin/find", ".", "-print0"] ==> Executing: ["/usr/bin/mkbom", "-s", "-i", "/var/folders/qt/m073xcf17fx_8x19w50nnknh0000gn/T/20180109-13167-1s9gwuu.list", "--", "/var/folders/qt/m073xcf17fx_8x19w50nnknh0000gn/T/20180109-13167-n3xgj9.bom"] ==> Executing: ["/usr/bin/ditto", "--bom", "/var/folders/qt/m073xcf17fx_8x19w50nnknh0000gn/T/20180109-13167-n3xgj9.bom", "--", "/private/var/folders/qt/m073xcf17fx_8x19w50nnknh0000gn/T/d20180109-13167-xol0xh/dmg.AyBe61", "/usr/local/Caskroom/vlc/2.2.8"] ==> Executing: ["/usr/sbin/diskutil", "eject", "/private/var/folders/qt/m073xcf17fx_8x19w50nnknh0000gn/T/d20180109-13167-xol0xh/dmg.AyBe61"] ==> Creating metadata directory /usr/local/Caskroom/vlc/.metadata/2.2.8/20180109040317.527. ==> Creating metadata subdirectory /usr/local/Caskroom/vlc/.metadata/2.2.8/20180109040317.527/Casks. 
==> Installing artifacts ==> 4 artifact/s defined #<SortedSet:0x007fd549033c40> ==> Installing artifact of class Hbc::Artifact::PreflightBlock ==> Installing artifact of class Hbc::Artifact::App ==> Moving App 'VLC.app' to '/Applications/VLC.app'. ==> Installing artifact of class Hbc::Artifact::Binary ==> Linking Binary 'vlc.wrapper.sh' to '/usr/local/bin/vlc'. ==> Executing: ["/bin/ln", "-h", "-f", "-s", "--", "/usr/local/Caskroom/vlc/2.2.8/vlc.wrapper.sh", "/usr/local/bin/vlc"] ==> Adding com.apple.metadata:kMDItemAlternateNames metadata ==> Executing: ["/usr/bin/xattr", "-p", "com.apple.metadata:kMDItemAlternateNames", "/usr/local/Caskroom/vlc/2.2.8/vlc.wrapper.sh"] ==> Existing metadata is: '' ==> Executing: ["/bin/chmod", "--", "u+rw", "/usr/local/Caskroom/vlc/2.2.8/vlc.wrapper.sh", "/usr/local/Caskroom/vlc/2.2.8/vlc.wrapper.sh"] ==> Executing: ["/usr/bin/xattr", "-w", "com.apple.metadata:kMDItemAlternateNames", "(\"vlc\")", "/usr/local/Caskroom/vlc/2.2.8/vlc.wrapper.sh"] ==> Reverting installation of artifact of class Hbc::Artifact::App ==> Moving App 'VLC.app' back to '/usr/local/Caskroom/vlc/2.2.8/VLC.app'. ==> Reverting installation of artifact of class Hbc::Artifact::PreflightBlock ==> Purging files for version 2.2.8 of Cask vlc Error: Command failed to execute! ==> Failed command: /usr/bin/xattr -w com.apple.metadata:kMDItemAlternateNames ("vlc") /usr/local/Caskroom/vlc/2.2.8/vlc.wrapper.sh ==> Standard Output of failed command: ==> Standard Error of failed command: Traceback (most recent call last): File "/usr/bin/xattr-2.7", line 7, in <module> from pkg_resources import load_entry_point File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources/__init__.py", line 3095, in <module> @_call_aside File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources/__init__.py", line 3081, in _call_aside f(*args, **kwargs) File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources/__init__.py", line 3108, in _initialize_master_working_set working_set = WorkingSet._build_master() File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources/__init__.py", line 658, in _build_master ws.require(__requires__) File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources/__init__.py", line 959, in require needed = self.resolve(parse_requirements(requirements)) File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources/__init__.py", line 846, in resolve raise DistributionNotFound(req, requirers) pkg_resources.DistributionNotFound: The 'xattr==0.6.4' distribution was not found and is required by the application ==> Exit status of failed command: #<Process::Status: pid 13309 exit 1> /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/system_command.rb:70:in `assert_success' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/system_command.rb:36:in `run!' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/system_command.rb:14:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/system_command.rb:18:in `run!' 
/usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/relocated.rb:70:in `add_altname_metadata' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/symlinked.rb:60:in `create_filesystem_link' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/symlinked.rb:48:in `link' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/binary.rb:7:in `link' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/artifact/symlinked.rb:15:in `install_phase' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/installer.rb:174:in `block in install_artifacts' /System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/lib/ruby/2.3.0/set.rb:674:in `each' /System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/lib/ruby/2.3.0/set.rb:674:in `each' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/installer.rb:166:in `install_artifacts' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/installer.rb:80:in `install' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/install.rb:20:in `block in run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/install.rb:14:in `each' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/install.rb:14:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/abstract_command.rb:35:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:98:in `run_command' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:168:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:132:in `run' /usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask' /usr/local/Homebrew/Library/Homebrew/brew.rb:100:in `<main>' Error: Kernel.exit /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:173:in `exit' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:173:in `rescue in run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:156:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:132:in `run' /usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask' /usr/local/Homebrew/Library/Homebrew/brew.rb:100:in `<main>' ``` #### Output of `brew cask doctor` ``` MacBook-Pro:~ max.schaefer$ brew cask doctor ==> Homebrew-Cask Version Homebrew-Cask 1.4.3 caskroom/homebrew-cask (git revision dd5e7; last commit 2018-01-09) ==> macOS 10.13.2 ==> Java N/A ==> Homebrew-Cask Install Location <NONE> ==> Homebrew-Cask Staging Location /usr/local/Caskroom ==> Homebrew-Cask Cached Downloads ~/Library/Caches/Homebrew/Cask (1 files, 35.1MB) ==> Homebrew-Cask Taps: /usr/local/Homebrew/Library/Taps/caskroom/homebrew-cask (3888 casks) /usr/local/Homebrew/Library/Taps/caskroom/homebrew-versions (175 casks) ==> Contents of $LOAD_PATH /usr/local/Homebrew/Library/Homebrew/cask/lib /usr/local/Homebrew/Library/Homebrew /Library/Ruby/Gems/2.3.0/gems/did_you_mean-1.0.0/lib /Library/Ruby/Site/2.3.0 /Library/Ruby/Site/2.3.0/x86_64-darwin17 /Library/Ruby/Site/2.3.0/universal-darwin17 /Library/Ruby/Site /System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/lib/ruby/vendor_ruby/2.3.0 /System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/lib/ruby/vendor_ruby/2.3.0/x86_64-darwin17 /System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/lib/ruby/vendor_ruby/2.3.0/universal-darwin17 /System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/lib/ruby/vendor_ruby /System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/lib/ruby/2.3.0 /System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/lib/ruby/2.3.0/x86_64-darwin17 /System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/lib/ruby/2.3.0/universal-darwin17 ==> Environment Variables 
LC_ALL="en_US.UTF-8" PATH="/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/Homebrew/Library/Homebrew/shims/scm" SHELL="/usr/local/bin/bash" ```
main
vlc install failed command general troubleshooting steps i have retried my command with force and the issue is still present i have checked the instructions for or before opening the issue none of the templates was appropriate for my issue or iโ€™m not sure i ran brew update reset brew update and retried my command i ran brew doctor fixed as many issues as possible and retried my command i understand that description of issue when attempting to install vlc a command fails output of your command with verbose debug macbook pro max schaefer brew cask install verbose debug vlc hbc installer install printing caveats hbc installer fetch satisfying dependencies downloading downloading already downloaded users max schaefer library caches homebrew cask vlc dmg downloaded to users max schaefer library caches homebrew cask vlc dmg verifying download determining which verifications to run for cask vlc checking for verification class hbc verify checksum verifications defined hbc verify checksum running verification of class hbc verify checksum verifying checksum for cask vlc checksums match installing cask vlc hbc installer stage extracting primary container determining which containers to use based on filetype checking container class hbc container pkg checking container class hbc container ttf checking container class hbc container otf checking container class hbc container air checking container class hbc container cab checking container class hbc container dmg executing using container class hbc container dmg for users max schaefer library caches homebrew cask vlc dmg executing executing executing executing executing creating metadata directory usr local caskroom vlc metadata creating metadata subdirectory usr local caskroom vlc metadata casks installing artifacts artifact s defined installing artifact of class hbc artifact preflightblock installing artifact of class hbc artifact app moving app vlc app to applications vlc app installing artifact of class hbc artifact binary linking binary vlc wrapper sh to usr local bin vlc executing adding com apple metadata kmditemalternatenames metadata executing existing metadata is executing executing reverting installation of artifact of class hbc artifact app moving app vlc app back to usr local caskroom vlc vlc app reverting installation of artifact of class hbc artifact preflightblock purging files for version of cask vlc error command failed to execute failed command usr bin xattr w com apple metadata kmditemalternatenames vlc usr local caskroom vlc vlc wrapper sh standard output of failed command standard error of failed command traceback most recent call last file usr bin xattr line in from pkg resources import load entry point file system library frameworks python framework versions extras lib python pkg resources init py line in call aside file system library frameworks python framework versions extras lib python pkg resources init py line in call aside f args kwargs file system library frameworks python framework versions extras lib python pkg resources init py line in initialize master working set working set workingset build master file system library frameworks python framework versions extras lib python pkg resources init py line in build master ws require requires file system library frameworks python framework versions extras lib python pkg resources init py line in require needed self resolve parse requirements requirements file system library frameworks python framework versions extras lib python pkg resources init py line in resolve raise 
distributionnotfound req requirers pkg resources distributionnotfound the xattr distribution was not found and is required by the application exit status of failed command usr local homebrew library homebrew cask lib hbc system command rb in assert success usr local homebrew library homebrew cask lib hbc system command rb in run usr local homebrew library homebrew cask lib hbc system command rb in run usr local homebrew library homebrew cask lib hbc system command rb in run usr local homebrew library homebrew cask lib hbc artifact relocated rb in add altname metadata usr local homebrew library homebrew cask lib hbc artifact symlinked rb in create filesystem link usr local homebrew library homebrew cask lib hbc artifact symlinked rb in link usr local homebrew library homebrew cask lib hbc artifact binary rb in link usr local homebrew library homebrew cask lib hbc artifact symlinked rb in install phase usr local homebrew library homebrew cask lib hbc installer rb in block in install artifacts system library frameworks ruby framework versions usr lib ruby set rb in each system library frameworks ruby framework versions usr lib ruby set rb in each usr local homebrew library homebrew cask lib hbc installer rb in install artifacts usr local homebrew library homebrew cask lib hbc installer rb in install usr local homebrew library homebrew cask lib hbc cli install rb in block in run usr local homebrew library homebrew cask lib hbc cli install rb in each usr local homebrew library homebrew cask lib hbc cli install rb in run usr local homebrew library homebrew cask lib hbc cli abstract command rb in run usr local homebrew library homebrew cask lib hbc cli rb in run command usr local homebrew library homebrew cask lib hbc cli rb in run usr local homebrew library homebrew cask lib hbc cli rb in run usr local homebrew library homebrew cmd cask rb in cask usr local homebrew library homebrew brew rb in error kernel exit usr local homebrew library homebrew cask lib hbc cli rb in exit usr local homebrew library homebrew cask lib hbc cli rb in rescue in run usr local homebrew library homebrew cask lib hbc cli rb in run usr local homebrew library homebrew cask lib hbc cli rb in run usr local homebrew library homebrew cmd cask rb in cask usr local homebrew library homebrew brew rb in output of brew cask doctor macbook pro max schaefer brew cask doctor homebrew cask version homebrew cask caskroom homebrew cask git revision last commit macos java n a homebrew cask install location homebrew cask staging location usr local caskroom homebrew cask cached downloads library caches homebrew cask files homebrew cask taps usr local homebrew library taps caskroom homebrew cask casks usr local homebrew library taps caskroom homebrew versions casks contents of load path usr local homebrew library homebrew cask lib usr local homebrew library homebrew library ruby gems gems did you mean lib library ruby site library ruby site library ruby site universal library ruby site system library frameworks ruby framework versions usr lib ruby vendor ruby system library frameworks ruby framework versions usr lib ruby vendor ruby system library frameworks ruby framework versions usr lib ruby vendor ruby universal system library frameworks ruby framework versions usr lib ruby vendor ruby system library frameworks ruby framework versions usr lib ruby system library frameworks ruby framework versions usr lib ruby system library frameworks ruby framework versions usr lib ruby universal environment variables lc all en us utf path usr bin bin 
usr sbin sbin usr local homebrew library homebrew shims scm shell usr local bin bash
1
2,187
7,725,580,770
IssuesEvent
2018-05-24 18:25:55
NervanaSystems/ngraph-mxnet
https://api.github.com/repos/NervanaSystems/ngraph-mxnet
opened
Have bridge code use `origin_node_->attrs.parsed` instead of `origin_node_->attrs.dict`
enhancement maintainability
E.g., in `src/ngraph/ngraph_graph.cc`: ``` c++ #include "../../src/operator/nn/deconvolution-inl.h" ... const auto & op_params = dmlc::get<mxnet::op::DeconvolutionParam>(orig_node_->attrs.parsed); ... op_params.stride ... ``` We can use this approach for all new bridge code, and optionally revise existing bridge code to use this approach. Pros: - We avoid re-parsing strings to more explicit C++ types. - We avoid duplicating other MXnet code that sets defaults, which is a potential source of bugs. Cons: - We'll need one extra `#include` to access each op's `...Params` struct.
True
Have bridge code use `origin_node_->attrs.parsed` instead of `origin_node_->attrs.dict` - E.g., in `src/ngraph/ngraph_graph.cc`: ``` c++ #include "../../src/operator/nn/deconvolution-inl.h" ... const auto & op_params = dmlc::get<mxnet::op::DeconvolutionParam>(orig_node_->attrs.parsed); ... op_params.stride ... ``` We can use this approach for all new bridge code, and optionally revise existing bridge code to use this approach. Pros: - We avoid re-parsing strings to more explicit C++ types. - We avoid duplicating other MXnet code that sets defaults, which is a potential source of bugs. Cons: - We'll need one extra `#include` to access each op's `...Params` struct.
main
have bridge code use origin node attrs parsed instead of origin node attrs dict e g in src ngraph ngraph graph cc c include src operator nn deconvolution inl h const auto op params dmlc get orig node attrs parsed op params stride we can use this approach for all new bridge code and optionally revise existing bridge code to use this approach pros we avoid re parsing strings to more explicit c types we avoid duplicating other mxnet code that sets defaults which is a potential source of bugs cons we ll need one extra include to access each op s params struct
1
4,827
24,879,501,871
IssuesEvent
2022-10-27 22:44:03
jesus2099/konami-command
https://api.github.com/repos/jesus2099/konami-command
closed
Now all merge pages use table.tbl and no ul
ninja server change mb_MERGE-HELPOR-2 maintainability
Now even Area and Artist merge pages are using `table.tbl` and no longer `ul`. ``` /* entity merge pages progressively abandon ul layout in favour of table.tbl * area ul (but only for admins) * artist ul * event table.tbl * label table.tbl * place table.tbl * recording table.tbl * release table.tbl * release group table.tbl * series table.tbl * work table.tbl * what else? */ ``` I can drop some dead code.
True
Now all merge pages use table.tbl and no ul - Now even Area and Artist merge pages are using `table.tbl` and no longer `ul`. ``` /* entity merge pages progressively abandon ul layout in favour of table.tbl * area ul (but only for admins) * artist ul * event table.tbl * label table.tbl * place table.tbl * recording table.tbl * release table.tbl * release group table.tbl * series table.tbl * work table.tbl * what else? */ ``` I can drop some dead code.
main
now all merge pages use table tbl and no ul now even area and artist merge pages are using table tbl and no longer ul entity merge pages progressively abandon ul layout in favour of table tbl area ul but only for admins artist ul event table tbl label table tbl place table tbl recording table tbl release table tbl release group table tbl series table tbl work table tbl what else i can drop some dead code
1
3,205
12,236,610,547
IssuesEvent
2020-05-04 16:37:40
RockefellerArchiveCenter/aurora
https://api.github.com/repos/RockefellerArchiveCenter/aurora
closed
Use Django Rest Framework's native implementation of OpenAPI Schema
maintainability python3
## Is your feature request related to a problem? Please describe. As of version 3.10, Django Rest Framework now supports generation of OpenAPI schema. Using this built-in version would allow us to shed several dependencies, some of which are not well-maintained. ## Describe the solution you'd like Replace existing OpenAPI view and endpoint with DRF's implementation. ## Additional context See [https://github.com/RockefellerArchiveCenter/argo/blob/master/api_formatter/urls.py] for implementation of DRF OpenAPI schema.
True
Use Django Rest Framework's native implementation of OpenAPI Schema - ## Is your feature request related to a problem? Please describe. As of version 3.10, Django Rest Framework now supports generation of OpenAPI schema. Using this built-in version would allow us to shed several dependencies, some of which are not well-maintained. ## Describe the solution you'd like Replace existing OpenAPI view and endpoint with DRF's implementation. ## Additional context See [https://github.com/RockefellerArchiveCenter/argo/blob/master/api_formatter/urls.py] for implementation of DRF OpenAPI schema.
main
use django rest framework s native implementation of openapi schema is your feature request related to a problem please describe as of version django rest framework now supports generation of openapi schema using this built in version would allow us to shed several dependencies some of which are not well maintained describe the solution you d like replace existing openapi view and endpoint with drf s implementation additional context see for implementation of drf openapi schema
1
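For the Aurora issue above, a minimal sketch of what the proposed replacement looks like with Django Rest Framework 3.10+'s built-in schema generation; the URL path and the title/description/version strings are placeholders rather than Aurora's actual values:

```python
# urls.py (sketch): serve an OpenAPI schema with DRF's built-in generator,
# replacing the separately maintained schema dependencies mentioned in the issue.
from django.urls import path
from rest_framework.schemas import get_schema_view

urlpatterns = [
    path(
        "schema/",                      # placeholder path
        get_schema_view(
            title="Aurora API",         # placeholder metadata
            description="OpenAPI schema for the Aurora API",
            version="1.0.0",
        ),
        name="openapi-schema",
    ),
]
```

The view returned by `get_schema_view` introspects the project's registered routes to build the schema, so the existing endpoint only needs to be pointed at it.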
3,010
11,136,435,269
IssuesEvent
2019-12-20 16:35:29
precice/precice
https://api.github.com/repos/precice/precice
closed
Remove static state in Mesh
enhancement maintainability
To fully support multiple instances of the SolverInterface in one executable, we need to get rid of some `static` state. All [`mesh::PropertyContainers`](https://xgm.de/precice/docs/develop/classprecice_1_1mesh_1_1PropertyContainer.html) (`Mesh`, `Vertex`, `Edge`, `Triangle`) contain a static `utils::ManageUniqueIDs`. This should be owned by the `participant` itself and passed into its `Mesh`es. Related to #378 and #385
True
Remove static state in Mesh - To fully support multiple instances of the SolverInterface in one executable, we need to get rid of some `static` state. All [`mesh::PropertyContainers`](https://xgm.de/precice/docs/develop/classprecice_1_1mesh_1_1PropertyContainer.html) (`Mesh`, `Vertex`, `Edge`, `Triangle`) contain a static `utils::ManageUniqueIDs`. This should be owned by the `participant` itself and passed into its `Mesh`es. Related to #378 and #385
main
remove static state in mesh to fully support multiple instances of the solverinterface in one executable we need to get rid of some static state all mesh vertex edge triangle contain a static utils manageuniqueids this should be owned by the participant itself and passed into its mesh es related to and
1
590,373
17,777,061,684
IssuesEvent
2021-08-30 20:42:18
SkriptLang/Skript
https://api.github.com/repos/SkriptLang/Skript
closed
Wood aliases are wrong
enhancement priority: lowest completed aliases
### Skript/Server Version ``` Skript Version: 2.6-beta2 Server Version: git-Tuinity-18 (MC: 1.17.1) ``` ### Bug Description When I write: if player has 3 oak wood: - this gives an error (can't understand this condition) When I write: if player has 3 stripped oak wood: - this works as expected ### Expected Behavior It was supposed to not send me this error and let me make a .sk file with "oak wood" ### Steps to Reproduce I just wrote the code ### Errors or Screenshots ``` https://prnt.sc/1dikvox there is an if in front but it doesn't show in the error, I don't know why, but there is an if :) https://prnt.sc/1dil8ul ``` ### Other I want to ask how to check with arg 1 how many items the player has. Like so: https://prnt.sc/1dilyr7 The other text is in Bulgarian, don't mind it :) ### Agreement - [X] I have read the guidelines above and confirm I am following them with this report.
1.0
Wood aliases are wrong - ### Skript/Server Version ``` Skript Version: 2.6-beta2 Server Version: git-Tuinity-18 (MC: 1.17.1) ``` ### Bug Description When I write: if player has 3 oak wood: - this gives an error (can't understand this condition) When I write: if player has 3 stripped oak wood: - this works as expected ### Expected Behavior It was supposed to not send me this error and let me make a .sk file with "oak wood" ### Steps to Reproduce I just wrote the code ### Errors or Screenshots ``` https://prnt.sc/1dikvox there is an if in front but it doesn't show in the error, I don't know why, but there is an if :) https://prnt.sc/1dil8ul ``` ### Other I want to ask how to check with arg 1 how many items the player has. Like so: https://prnt.sc/1dilyr7 The other text is in Bulgarian, don't mind it :) ### Agreement - [X] I have read the guidelines above and confirm I am following them with this report.
non_main
wood aliases are wrong skript server version skript version server version git tuinity mc bug description when i write if player has oak wood this gives error can t understand this condition when i write if player has stripped oak wood this works as expected expected behavior it was suppose to not send me this error and let me make a sk file with oak wood steps to reproduce i just wrote the code errors or screenshots there is an if infront but it doens t show in error i don t know why but there is an if other i want to ask how to check with arg about how many items the player has like so the other text is in bulgarian don t mind it agreement i have read the guidelines above and confirm i am following them with this report
0
173,969
6,534,951,066
IssuesEvent
2017-08-31 12:59:10
spring-projects/spring-boot
https://api.github.com/repos/spring-projects/spring-boot
closed
Allow an operation on an endpoint to specify the media type that it produces
priority: normal type: enhancement
The Prometheus endpoint that's proposed in #9970 needs to provide an operation that returns `text/plain; version=0.0.4; charset=utf-8`. There's no way to do so with the current web endpoint infrastructure.
1.0
Allow an operation on an endpoint to specify the media type that it produces - The Prometheus endpoint that's proposed in #9970 needs to provide an operation that returns `text/plain; version=0.0.4; charset=utf-8`. There's no way to do so with the current web endpoint infrastructure.
non_main
allow an operation on an endpoint to specify the media type that it produces the prometheus endpoint that s proposed in needs to provide an operation that returns text plain version charset utf there s no way to do so with the current web endpoint infrastructure
0
9,162
24,142,973,880
IssuesEvent
2022-09-21 16:11:39
Azure/azure-sdk
https://api.github.com/repos/Azure/azure-sdk
opened
Board Review: Azure Communication Services (SPOOL) Call Recording Status (Android & iOS)
architecture board-review
## Background Currently Azure Communication Calling SDKs already expose `bool isRecordingActive` property and a `onIsRecordingActiveChanged` event. This allows the client apps to e.g., show "Recording started..." banner to users. However, the bool property makes it impossible to distinguish between various actual states of the recording lifecycle: "NotStarted", "Started", "Paused", "Ended". This new API introduces a new enum for `RecordingState` and events for it. ## Contacts and Timeline * Responsible service team: ACS Call Automation Media * Main contacts: @chrwhit * Expected code complete date: TBD * Expected release date: TBD ## About the Service * Link to documentation introducing/describing the service: [Call Recording API Quickstart](https://learn.microsoft.com/azure/communication-services/quickstarts/voice-video-calling/call-recording-sample) * Link to the service REST APIs: [communicationservicescallingserver.json](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/communication/data-plane/CallingServer/preview/2021-08-30-preview/communicationservicescallingserver.json) ## About the client library * Name of the client library: Azure Communication Calling SDK * Languages for this review: Android & iOS ### Android * APIView Link: https://apiview.dev/Assemblies/Review/c9fca0a4fc2b46db807583eb74c0a760/0b033b56f29c4dc79d1716fba9707b30?diffRevisionId=0cdebffdfb7542929b29fc86fe8a17f9&doc=False&diffOnly=True * Link to Champion Scenarios/Quickstart samples: TBD ### iOS * APIView Link: https://apiview.dev/Assemblies/Review/bbe0cc58c12c406997a52650be4975fa/23c5dce4980447428f49f50c169575c4?diffRevisionId=0961b8c8b66447b1bc2d77c62741d8d1&doc=False&diffOnly=True * Link to Champion Scenarios/Quickstart samples: TBD
1.0
Board Review: Azure Communication Services (SPOOL) Call Recording Status (Android & iOS) - ## Background Currently Azure Communication Calling SDKs already expose `bool isRecordingActive` property and a `onIsRecordingActiveChanged` event. This allows the client apps to e.g., show "Recording started..." banner to users. However, the bool property makes it impossible to distinguish between various actual states of the recording lifecycle: "NotStarted", "Started", "Paused", "Ended". This new API introduces a new enum for `RecordingState` and events for it. ## Contacts and Timeline * Responsible service team: ACS Call Automation Media * Main contacts: @chrwhit * Expected code complete date: TBD * Expected release date: TBD ## About the Service * Link to documentation introducing/describing the service: [Call Recording API Quickstart](https://learn.microsoft.com/azure/communication-services/quickstarts/voice-video-calling/call-recording-sample) * Link to the service REST APIs: [communicationservicescallingserver.json](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/communication/data-plane/CallingServer/preview/2021-08-30-preview/communicationservicescallingserver.json) ## About the client library * Name of the client library: Azure Communication Calling SDK * Languages for this review: Android & iOS ### Android * APIView Link: https://apiview.dev/Assemblies/Review/c9fca0a4fc2b46db807583eb74c0a760/0b033b56f29c4dc79d1716fba9707b30?diffRevisionId=0cdebffdfb7542929b29fc86fe8a17f9&doc=False&diffOnly=True * Link to Champion Scenarios/Quickstart samples: TBD ### iOS * APIView Link: https://apiview.dev/Assemblies/Review/bbe0cc58c12c406997a52650be4975fa/23c5dce4980447428f49f50c169575c4?diffRevisionId=0961b8c8b66447b1bc2d77c62741d8d1&doc=False&diffOnly=True * Link to Champion Scenarios/Quickstart samples: TBD
non_main
board review azure communication services spool call recording status android ios background currently azure communication calling sdks already expose bool isrecordingactive property and a onisrecordingactivechanged event this allows the client apps to e g show recording started banner to users however the bool property makes it impossible to distinguish between various actual states of the recording lifecycle notstarted started paused ended this new api introduces a new enum for recordingstate and events for it contacts and timeline responsible service team acs call automation media main contacts chrwhit expected code complete date tbd expected release date tbd about the service link to documentation introducing describing the service link to the service rest apis about the client library name of the client library azure communication calling sdk languages for this review android ios android apiview link link to champion scenarios quickstart samples tbd ios apiview link link to champion scenarios quickstart samples tbd
0
4,332
21,781,531,909
IssuesEvent
2022-05-13 19:33:13
tethysplatform/tethys
https://api.github.com/repos/tethysplatform/tethys
closed
Review How JavaScript Dependencies are Handled
maintain dependencies
Move away from including third party javascript source code in the Tethys repository Options: * Use CDNs for major dependencies (JQuery, Twitter, Bootstrap)? * Use npm/yarn or other JS package manager CDNs to Consider: * jsdelivr * cloudflare * Use both in failover mode?
True
Review How JavaScript Dependencies are Handled - Move away from including third party javascript source code in the Tethys repository Options: * Use CDNs for major dependencies (JQuery, Twitter, Bootstrap)? * Use npm/yarn or other JS package manager CDNs to Consider: * jsdelivr * cloudflare * Use both in failover mode?
main
review how javascript dependencies are handled move away from including third party javascript source code in the tethys repository options use cdns for major dependencies jquery twitter bootstrap use npm yarn or other js package manager cdns to consider jsdelivr cloudflare use both in failover mode
1
4,354
22,033,910,908
IssuesEvent
2022-05-28 09:00:48
bromite/bromite
https://api.github.com/repos/bromite/bromite
closed
External Download Manager
enhancement enhancement-without-maintainer
### Preliminary checklist - [X] I have read the [README](https://github.com/bromite/bromite/blob/master/README.md) - [X] I have read the [FAQs](https://github.com/bromite/bromite/blob/master/FAQ.md). - [X] I have searched [existing issues](https://github.com/bromite/bromite/issues) for my feature request. This is a new issue (NOT a duplicate) and is not related to another issue. ### Is your feature request related to privacy? Yes ### Is there a patch available for this feature somewhere? I don't know about such a patch ### Describe the solution you would like Please add an option for external download of files. Sometimes the link expires for large files, so we have to re-download the file from the start. After adding this feature we can resume the file by adding a new link from the downloading agent. ### Describe alternatives you have considered I've tried the Firefox repo Fennec. It contains this feature.
True
External Download Manager - ### Preliminary checklist - [X] I have read the [README](https://github.com/bromite/bromite/blob/master/README.md) - [X] I have read the [FAQs](https://github.com/bromite/bromite/blob/master/FAQ.md). - [X] I have searched [existing issues](https://github.com/bromite/bromite/issues) for my feature request. This is a new issue (NOT a duplicate) and is not related to another issue. ### Is your feature request related to privacy? Yes ### Is there a patch available for this feature somewhere? I don't know about such a patch ### Describe the solution you would like Please add an option for external download of files. Sometimes the link expires for large files, so we have to re-download the file from the start. After adding this feature we can resume the file by adding a new link from the downloading agent. ### Describe alternatives you have considered I've tried the Firefox repo Fennec. It contains this feature.
main
external download manager preliminary checklist i have read the i have read the i have searched for my feature request this is a new issue not a duplicate and is not related to another issue is your feature request related to privacy yes is there a patch available for this feature somewhere i don t know about such patch describe the solution you would like please add an option for external download files as sometimes the link expires for large size files then we have to re download the file from starting after adding this feature we can resume the file by adding new link from downloading agent describe alternatives you have considered i ve tried firefox repo fennec it contains this feature
1
1,774
6,575,800,094
IssuesEvent
2017-09-11 17:22:29
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
docker_container module - "Error connecting container to network"
affects_2.1 bug_report cloud docker waiting_on_maintainer
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_container ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = /Users/ret/Projects/servers/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` [defaults] inventory = ./inventory% ``` ##### OS / ENVIRONMENT OSX 10.10.5 (Yosemite) and OSX Sierra ##### SUMMARY When trying to create docker containers from an ansible playbook and adding them to a docker network previously created from another playbook to give this container a static IP address I'm getting an error saying: "Error connecting container to network privnet - connect_container_to_network() got an unexpected keyword argument 'ipv4_address'" ##### STEPS TO REPRODUCE Create network playbook: ``` - name: Create private network command: docker network create --subnet=192.168.100.0/24 --ip-range=192.168.100.0/24 --gateway=192.168.100.1 -o parent=eth0 privnet when: privnet is defined and dockernets.stdout.find(privnet.name) == -1 ``` Create docker container playbook: ``` --- - file: path=/shared/config/plex state=directory mode=0755 owner=797 recurse=true - name: plex in docker docker_container: name: "plex" hostname: "box1plex" image: timhaak/plex state: started restart_policy: always pull: true networks: - name: privnet ipv4_address: 192.168.100.10 purge_networks: yes log_driver: syslog log_opt: tag: "plex" volumes: - /shared/config/plex:/config - /shared/plex:/data ``` ##### EXPECTED RESULTS I expect a successful container creation instead of an error. ##### ACTUAL RESULTS ``` TASK [plex : plex in docker] *************************************************** task path: /Users/ret/Projects/servers/roles/plex/tasks/main.yml:3 <box.mydomain.com> ESTABLISH SSH CONNECTION FOR USER: ret <box.mydomain.com> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ret -o ConnectTimeout=10 -o ControlPath=/Users/ret/.ansible/cp/ansible-ssh-%h-%p-%r box.mydomain.com '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1475741968.88-156551952368602 `" && echo ansible-tmp-1475741968.88-156551952368602="` echo $HOME/.ansible/tmp/ansible-tmp-1475741968.88-156551952368602 `" ) && sleep 0'"'"'' <box.mydomain.com> PUT /var/folders/7t/0myxzv9j64z0y6r2vl3wwg6m0000gn/T/tmpWiTHhW TO /home/ret/.ansible/tmp/ansible-tmp-1475741968.88-156551952368602/docker_container <box.mydomain.com> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ret -o ConnectTimeout=10 -o ControlPath=/Users/ret/.ansible/cp/ansible-ssh-%h-%p-%r '[box.mydomain.com]' <box.mydomain.com> ESTABLISH SSH CONNECTION FOR USER: ret <box.mydomain.com> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ret -o ConnectTimeout=10 -o ControlPath=/Users/ret/.ansible/cp/ansible-ssh-%h-%p-%r box.mydomain.com '/bin/sh -c '"'"'chmod u+x /home/ret/.ansible/tmp/ansible-tmp-1475741968.88-156551952368602/ /home/ret/.ansible/tmp/ansible-tmp-1475741968.88-156551952368602/docker_container && sleep 0'"'"'' <box.mydomain.com> ESTABLISH SSH CONNECTION FOR USER: ret <box.mydomain.com> SSH: EXEC ssh -C -vvv -o 
ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ret -o ConnectTimeout=10 -o ControlPath=/Users/ret/.ansible/cp/ansible-ssh-%h-%p-%r -tt box.mydomain.com '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-prfzskmhsxhsivtoucrzsgcloglldjbv; LANG=es_ES.UTF-8 LC_ALL=es_ES.UTF-8 LC_MESSAGES=es_ES.UTF-8 /usr/bin/python /home/ret/.ansible/tmp/ansible-tmp-1475741968.88-156551952368602/docker_container; rm -rf "/home/ret/.ansible/tmp/ansible-tmp-1475741968.88-156551952368602/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"'' fatal: [box.mydomain.com]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"api_version": null, "blkio_weight": null, "cacert_path": null, "capabilities": null, "cert_path": null, "command": null, "cpu_period": null, "cpu_quota": null, "cpu_shares": null, "cpuset_cpus": null, "cpuset_mems": null, "debug": false, "detach": true, "devices": null, "dns_opts": null, "dns_search_domains": null, "dns_servers": null, "docker_host": null, "entrypoint": null, "env": null, "env_file": null, "etc_hosts": null, "exposed_ports": null, "filter_logger": false, "force_kill": false, "groups": null, "hostname": "box1plex", "image": "timhaak/plex", "interactive": false, "ipc_mode": null, "keep_volumes": true, "kernel_memory": null, "key_path": null, "kill_signal": null, "labels": null, "links": null, "log_driver": "syslog", "log_opt": {"tag": "plex"}, "log_options": {"tag": "plex"}, "mac_address": null, "memory": "0", "memory_reservation": null, "memory_swap": null, "memory_swappiness": null, "name": "plex", "network_mode": null, "networks": [{"id": "ab1a4406681e5fef4eef6409c6819615912b4b3c6ac5e6d0161b744a96d981d1", "ipv4_address": "192.168.100.10", "name": "privnet"}], "oom_killer": null, "paused": false, "pid_mode": null, "privileged": false, "published_ports": null, "pull": true, "purge_networks": true, "read_only": false, "recreate": false, "restart": false, "restart_policy": "always", "restart_retries": 0, "security_opts": null, "shm_size": null, "ssl_version": null, "state": "started", "stop_signal": null, "stop_timeout": null, "timeout": null, "tls": null, "tls_hostname": null, "tls_verify": null, "trust_image_content": false, "tty": false, "ulimits": null, "user": null, "uts": null, "volume_driver": null, "volumes": ["/shared/config/plex:/config", "/shared/plex:/data"], "volumes_from": null}, "module_name": "docker_container"}, "msg": "Error connecting container to network privnet - connect_container_to_network() got an unexpected keyword argument 'ipv4_address'"} ``` Cheers, R.
True
docker_container module - "Error connecting container to network" - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME docker_container ##### ANSIBLE VERSION ``` ansible 2.1.2.0 config file = /Users/ret/Projects/servers/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ``` [defaults] inventory = ./inventory% ``` ##### OS / ENVIRONMENT OSX 10.10.5 (Yosemite) and OSX Sierra ##### SUMMARY When trying to create docker containers from an ansible playbook and adding them to a docker network previously created from another playbook to give this container a static IP address I'm getting an error saying: "Error connecting container to network privnet - connect_container_to_network() got an unexpected keyword argument 'ipv4_address'" ##### STEPS TO REPRODUCE Create network playbook: ``` - name: Create private network command: docker network create --subnet=192.168.100.0/24 --ip-range=192.168.100.0/24 --gateway=192.168.100.1 -o parent=eth0 privnet when: privnet is defined and dockernets.stdout.find(privnet.name) == -1 ``` Create docker container playbook: ``` --- - file: path=/shared/config/plex state=directory mode=0755 owner=797 recurse=true - name: plex in docker docker_container: name: "plex" hostname: "box1plex" image: timhaak/plex state: started restart_policy: always pull: true networks: - name: privnet ipv4_address: 192.168.100.10 purge_networks: yes log_driver: syslog log_opt: tag: "plex" volumes: - /shared/config/plex:/config - /shared/plex:/data ``` ##### EXPECTED RESULTS I expect a successful container creation instead of an error. ##### ACTUAL RESULTS ``` TASK [plex : plex in docker] *************************************************** task path: /Users/ret/Projects/servers/roles/plex/tasks/main.yml:3 <box.mydomain.com> ESTABLISH SSH CONNECTION FOR USER: ret <box.mydomain.com> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ret -o ConnectTimeout=10 -o ControlPath=/Users/ret/.ansible/cp/ansible-ssh-%h-%p-%r box.mydomain.com '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1475741968.88-156551952368602 `" && echo ansible-tmp-1475741968.88-156551952368602="` echo $HOME/.ansible/tmp/ansible-tmp-1475741968.88-156551952368602 `" ) && sleep 0'"'"'' <box.mydomain.com> PUT /var/folders/7t/0myxzv9j64z0y6r2vl3wwg6m0000gn/T/tmpWiTHhW TO /home/ret/.ansible/tmp/ansible-tmp-1475741968.88-156551952368602/docker_container <box.mydomain.com> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ret -o ConnectTimeout=10 -o ControlPath=/Users/ret/.ansible/cp/ansible-ssh-%h-%p-%r '[box.mydomain.com]' <box.mydomain.com> ESTABLISH SSH CONNECTION FOR USER: ret <box.mydomain.com> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ret -o ConnectTimeout=10 -o ControlPath=/Users/ret/.ansible/cp/ansible-ssh-%h-%p-%r box.mydomain.com '/bin/sh -c '"'"'chmod u+x /home/ret/.ansible/tmp/ansible-tmp-1475741968.88-156551952368602/ /home/ret/.ansible/tmp/ansible-tmp-1475741968.88-156551952368602/docker_container && sleep 0'"'"'' <box.mydomain.com> ESTABLISH SSH 
CONNECTION FOR USER: ret <box.mydomain.com> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ret -o ConnectTimeout=10 -o ControlPath=/Users/ret/.ansible/cp/ansible-ssh-%h-%p-%r -tt box.mydomain.com '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-prfzskmhsxhsivtoucrzsgcloglldjbv; LANG=es_ES.UTF-8 LC_ALL=es_ES.UTF-8 LC_MESSAGES=es_ES.UTF-8 /usr/bin/python /home/ret/.ansible/tmp/ansible-tmp-1475741968.88-156551952368602/docker_container; rm -rf "/home/ret/.ansible/tmp/ansible-tmp-1475741968.88-156551952368602/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"'' fatal: [box.mydomain.com]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"api_version": null, "blkio_weight": null, "cacert_path": null, "capabilities": null, "cert_path": null, "command": null, "cpu_period": null, "cpu_quota": null, "cpu_shares": null, "cpuset_cpus": null, "cpuset_mems": null, "debug": false, "detach": true, "devices": null, "dns_opts": null, "dns_search_domains": null, "dns_servers": null, "docker_host": null, "entrypoint": null, "env": null, "env_file": null, "etc_hosts": null, "exposed_ports": null, "filter_logger": false, "force_kill": false, "groups": null, "hostname": "box1plex", "image": "timhaak/plex", "interactive": false, "ipc_mode": null, "keep_volumes": true, "kernel_memory": null, "key_path": null, "kill_signal": null, "labels": null, "links": null, "log_driver": "syslog", "log_opt": {"tag": "plex"}, "log_options": {"tag": "plex"}, "mac_address": null, "memory": "0", "memory_reservation": null, "memory_swap": null, "memory_swappiness": null, "name": "plex", "network_mode": null, "networks": [{"id": "ab1a4406681e5fef4eef6409c6819615912b4b3c6ac5e6d0161b744a96d981d1", "ipv4_address": "192.168.100.10", "name": "privnet"}], "oom_killer": null, "paused": false, "pid_mode": null, "privileged": false, "published_ports": null, "pull": true, "purge_networks": true, "read_only": false, "recreate": false, "restart": false, "restart_policy": "always", "restart_retries": 0, "security_opts": null, "shm_size": null, "ssl_version": null, "state": "started", "stop_signal": null, "stop_timeout": null, "timeout": null, "tls": null, "tls_hostname": null, "tls_verify": null, "trust_image_content": false, "tty": false, "ulimits": null, "user": null, "uts": null, "volume_driver": null, "volumes": ["/shared/config/plex:/config", "/shared/plex:/data"], "volumes_from": null}, "module_name": "docker_container"}, "msg": "Error connecting container to network privnet - connect_container_to_network() got an unexpected keyword argument 'ipv4_address'"} ``` Cheers, R.
main
docker container module error connecting container to network issue type bug report component name docker container ansible version ansible config file users ret projects servers ansible cfg configured module search path default w o overrides configuration inventory inventory os environment osx yosemite and osx sierra summary when trying to create docker containers from an ansible playbook and adding them to a docker network previously created from another playbook to give this container a static ip address i m getting an error saying error connecting container to network privnet connect container to network got an unexpected keyword argument address steps to reproduce create network playbook name create private network command docker network create subnet ip range gateway o parent privnet when privnet is defined and dockernets stdout find privnet name create docker container playbook file path shared config plex state directory mode owner recurse true name plex in docker docker container name plex hostname image timhaak plex state started restart policy always pull true networks name privnet address purge networks yes log driver syslog log opt tag plex volumes shared config plex config shared plex data expected results i expect a successful container creation instead of an error actual results task task path users ret projects servers roles plex tasks main yml establish ssh connection for user ret ssh exec ssh c vvv o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ret o connecttimeout o controlpath users ret ansible cp ansible ssh h p r box mydomain com bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put var folders t tmpwithhw to home ret ansible tmp ansible tmp docker container ssh exec sftp b c vvv o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ret o connecttimeout o controlpath users ret ansible cp ansible ssh h p r establish ssh connection for user ret ssh exec ssh c vvv o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ret o connecttimeout o controlpath users ret ansible cp ansible ssh h p r box mydomain com bin sh c chmod u x home ret ansible tmp ansible tmp home ret ansible tmp ansible tmp docker container sleep establish ssh connection for user ret ssh exec ssh c vvv o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ret o connecttimeout o controlpath users ret ansible cp ansible ssh h p r tt box mydomain com bin sh c sudo h s n u root bin sh c echo become success prfzskmhsxhsivtoucrzsgcloglldjbv lang es es utf lc all es es utf lc messages es es utf usr bin python home ret ansible tmp ansible tmp docker container rm rf home ret ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module args api version null blkio weight null cacert path null capabilities null cert path null command null cpu period null cpu quota null cpu shares null cpuset cpus null cpuset mems null debug false detach true devices null dns opts null dns search domains null dns servers 
null docker host null entrypoint null env null env file null etc hosts null exposed ports null filter logger false force kill false groups null hostname image timhaak plex interactive false ipc mode null keep volumes true kernel memory null key path null kill signal null labels null links null log driver syslog log opt tag plex log options tag plex mac address null memory memory reservation null memory swap null memory swappiness null name plex network mode null networks oom killer null paused false pid mode null privileged false published ports null pull true purge networks true read only false recreate false restart false restart policy always restart retries security opts null shm size null ssl version null state started stop signal null stop timeout null timeout null tls null tls hostname null tls verify null trust image content false tty false ulimits null user null uts null volume driver null volumes volumes from null module name docker container msg error connecting container to network privnet connect container to network got an unexpected keyword argument address cheers r
1
200,606
15,114,478,279
IssuesEvent
2021-02-09 02:01:27
GlobantUy/STB-Bank
https://api.github.com/repos/GlobantUy/STB-Bank
opened
[Botón Cancelar] Cuando se muestra el modal de confirmación de préstamo y se cliquea 'Cancelar' se cancela la solicitud
TestCase
**Precondiciones:** Se debe contar con un usuario válido para solicitar un préstamo ======================================================= Pasos para la ejecución | Resultado Esperado ------------ | ------------- 1: Acceder al simulador de préstamos| 2: En el formulario, ingresar datos válidos| 3: Cliquear 'Simular préstamo'| 4: En la pantalla "Resultado del préstamo, cliquear 'Solicitar préstamo'| Se despliega un popup con las opciones 'Cancelar' y 'Solicitar' 5: Cliquear 'Cancelar'| El usuario permanece en la pantalla 'Resultado del préstamo' ======================================================= **US asociada:** #109
1.0
[Botón Cancelar] Cuando se muestra el modal de confirmación de préstamo y se cliquea 'Cancelar' se cancela la solicitud - **Precondiciones:** Se debe contar con un usuario válido para solicitar un préstamo ======================================================= Pasos para la ejecución | Resultado Esperado ------------ | ------------- 1: Acceder al simulador de préstamos| 2: En el formulario, ingresar datos válidos| 3: Cliquear 'Simular préstamo'| 4: En la pantalla "Resultado del préstamo, cliquear 'Solicitar préstamo'| Se despliega un popup con las opciones 'Cancelar' y 'Solicitar' 5: Cliquear 'Cancelar'| El usuario permanece en la pantalla 'Resultado del préstamo' ======================================================= **US asociada:** #109
non_main
cuando se muestra el modal de confirmación de préstamo y se cliquea cancelar se cancela la solicitud precondiciones se debe contar con un usuario válido para solicitar un préstamo pasos para la ejecución resultado esperado acceder al simulador de préstamos en el formulario ingresar datos válidos cliquear simular préstamo en la pantalla resultado del préstamo cliquear solicitar préstamo se despliega un popup con las opciones cancelar y solicitar cliquear cancelar el usuario permanece en la pantalla resultado del préstamo us asociada
0
968
4,708,183,352
IssuesEvent
2016-10-13 22:31:17
ansible/ansible-modules-extras
https://api.github.com/repos/ansible/ansible-modules-extras
closed
Add support for win_regedit REG_NONE type
affects_2.2 feature_idea waiting_on_maintainer windows
<!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report / Missing feature ##### COMPONENT NAME win_regedit ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.2.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` Built today from Git devel. ##### CONFIGURATION Stock ##### OS / ENVIRONMENT Host: CentOS 7 Target: Windows 10 (Powershell 5.0) ##### SUMMARY I am unable to find a way to create a registry key with the type of "[REG_NONE](https://msdn.microsoft.com/en-us/library/windows/desktop/ms724884(v=vs.85).aspx)". Neither creating an empty string or binary data type results in same thing as a key type of none. While regedit.exe does not have a UI to create keys of type "REG_NONE", they are creatable with a .reg file using the following syntax: ``` Windows Registry Editor Version 5.00 [HKEY_CURRENT_USER\Example] "ExampleNoneTypeKey"=hex(0): ``` As seen in regedit.exe: ![regedit.exe screenshot](https://cloud.githubusercontent.com/assets/43646/19133302/793c4cf4-8b4f-11e6-9bc4-bdd1cdd7d99e.png) The none type isn't used in many places in the registry but is used extensively to setup file associations which would be useful to be able to control with win_regedit.
True
Add support for win_regedit REG_NONE type - <!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report / Missing feature ##### COMPONENT NAME win_regedit ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.2.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` Built today from Git devel. ##### CONFIGURATION Stock ##### OS / ENVIRONMENT Host: CentOS 7 Target: Windows 10 (Powershell 5.0) ##### SUMMARY I am unable to find a way to create a registry key with the type of "[REG_NONE](https://msdn.microsoft.com/en-us/library/windows/desktop/ms724884(v=vs.85).aspx)". Neither creating an empty string or binary data type results in same thing as a key type of none. While regedit.exe does not have a UI to create keys of type "REG_NONE", they are creatable with a .reg file using the following syntax: ``` Windows Registry Editor Version 5.00 [HKEY_CURRENT_USER\Example] "ExampleNoneTypeKey"=hex(0): ``` As seen in regedit.exe: ![regedit.exe screenshot](https://cloud.githubusercontent.com/assets/43646/19133302/793c4cf4-8b4f-11e6-9bc4-bdd1cdd7d99e.png) The none type isn't used in many places in the registry but is used extensively to setup file associations which would be useful to be able to control with win_regedit.
main
add support for win regedit reg none type issue type bug report missing feature component name win regedit ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides built today from git devel configuration stock os environment host centos target windows powershell summary i am unable to find a way to create a registry key with the type of neither creating an empty string or binary data type results in same thing as a key type of none while regedit exe does not have a ui to create keys of type reg none they are creatable with a reg file using the following syntax windows registry editor version examplenonetypekey hex as seen in regedit exe the none type isn t used in many places in the registry but is used extensively to setup file associations which would be useful to be able to control with win regedit
1
3,217
12,300,623,840
IssuesEvent
2020-05-11 14:15:28
short-d/short
https://api.github.com/repos/short-d/short
closed
[Refactor] Move detailed configurations into separate files
maintainability
**What is frustrating you?** There is too much content in the README. Most of them is not related to the first time environment setup. This is driving away developers who are interested in trying out Short on their local machine. **Your solution** Have only one type of sign in setup in `Getting Started` section. Move the individual setups to new markdown files.
True
[Refactor] Move detailed configurations into separate files - **What is frustrating you?** There is too much content in the README. Most of them is not related to the first time environment setup. This is driving away developers who are interested in trying out Short on their local machine. **Your solution** Have only one type of sign in setup in `Getting Started` section. Move the individual setups to new markdown files.
main
move detailed configurations into separate files what is frustrating you there is too much content in the readme most of them is not related to the first time environment setup this is driving away developers who are interested in trying out short on their local machine your solution have only one type of sign in setup in getting started section move the individual setups to new markdown files
1
758,824
26,570,106,689
IssuesEvent
2023-01-21 02:57:59
pdx-blurp/blurp-frontend
https://api.github.com/repos/pdx-blurp/blurp-frontend
opened
Map Page: System Tool Bar
new feature medium priority
Create a simple 1-column, thin, left-hand sidebar for system tools + icons: *it will start out with these icons: - 3 dot icon - cog wheel icon - save floppy-disk icon *behaviors: - when 3 dot icon is clicked the sidebar expands open - when User clicks away, the sidebar collapses
1.0
Map Page: System Tool Bar - Create a simple 1-column, thin, left-hand sidebar for system tools + icons: *it will start out with these icons: - 3 dot icon - cog wheel icon - save floppy-disk icon *behaviors: - when 3 dot icon is clicked the sidebar expands open - when User clicks away, the sidebar collapses
non_main
map page system tool bar create a simple column thin left hand sidebar for system tools icons it will start out with these icons dot icon cog wheel icon save floppy disk icon behaviors when dot icon is clicked the sidebar expands open when user clicks away the sidebar collapses
0
45,435
12,799,854,314
IssuesEvent
2020-07-02 16:02:57
snowplow/snowplow-android-tracker
https://api.github.com/repos/snowplow/snowplow-android-tracker
closed
Fix importing of kotlin on gradle
priority:medium status:completed type:defect
This project is written 100% in Java, however the SDK ships with a dependency on [the Kotlin stdlib](https://github.com/snowplow/snowplow-android-tracker/blob/master/snowplow-tracker/build.gradle#L85) and [Kotlin Android extensions](https://github.com/snowplow/snowplow-android-tracker/blob/master/snowplow-tracker/build.gradle#L7). Kotlin was added in [this PR](https://github.com/snowplow/snowplow-android-tracker/pull/358), but seems unrelated? Also as an aside it would be great if this library added nullability annotations to make Kotlin interoperability nicer! I can open up a separate issue for this if you'd prefer.
1.0
Fix importing of kotlin on gradle - This project is written 100% in Java, however the SDK ships with a dependency on [the Kotlin stdlib](https://github.com/snowplow/snowplow-android-tracker/blob/master/snowplow-tracker/build.gradle#L85) and [Kotlin Android extensions](https://github.com/snowplow/snowplow-android-tracker/blob/master/snowplow-tracker/build.gradle#L7). Kotlin was added in [this PR](https://github.com/snowplow/snowplow-android-tracker/pull/358), but seems unrelated? Also as an aside it would be great if this library added nullability annotations to make Kotlin interoperability nicer! I can open up a separate issue for this if you'd prefer.
non_main
fix importing of kotlin on gradle this project is written in java however the sdk ships with a dependency on and kotlin was added in but seems unrelated also as an aside it would be great if this library added nullability annotations to make kotlin interoperability nicer i can open up a separate issue for this if you d prefer
0
56,601
15,210,332,957
IssuesEvent
2021-02-17 07:15:31
jOOQ/jOOQ
https://api.github.com/repos/jOOQ/jOOQ
opened
JOOQ parseResultQuery failing to parse on SQL query containing date_add
T: Defect
### Expected behavior On passing a MYSQL query with a **date_add**, it is supposed to execute as expected. ### Actual behavior It's throwing a Parser Exception ### Steps to reproduce the problem `String sqlQuery = " select users.id, users.created, ifnull(date_add(users.created, interval 2 hour), '0000-00-00 00:00:00') as TEST from users"` `ResultQuery jooqQuery = DSL.using(dslContext.configuration()).parser().parseResultQuery(sqlQuery);` **Result**: `org.jooq.impl.ParserException: Unknown function: [1:184] ...ers.created as User Created, ifnull(date_add([*]users.created, interval 2 hour), '0000-00-00 00:00:00') as TEST from user... at org.jooq.impl.ParserContext.exception(ParserImpl.java:11047) at org.jooq.impl.ParserImpl.parseUnaryOps(ParserImpl.java:5736) at org.jooq.impl.ParserImpl.parseExp(ParserImpl.java:5703) at org.jooq.impl.ParserImpl.parseFactor(ParserImpl.java:5680) at org.jooq.impl.ParserImpl.parseSum(ParserImpl.java:5632) at org.jooq.impl.ParserImpl.parseNumericOp(ParserImpl.java:5617) at org.jooq.impl.ParserImpl.parseCollated(ParserImpl.java:5598) at org.jooq.impl.ParserImpl.parseConcat(ParserImpl.java:5588) at org.jooq.impl.ParserImpl.parsePredicate(ParserImpl.java:4618) at org.jooq.impl.ParserImpl.parseNot(ParserImpl.java:4584) at org.jooq.impl.ParserImpl.parseAnd(ParserImpl.java:4574) at org.jooq.impl.ParserImpl.parseOr(ParserImpl.java:4565) at org.jooq.impl.ParserImpl.parseField(ParserImpl.java:5505) at org.jooq.impl.ParserImpl.parseField(ParserImpl.java:5428) at org.jooq.impl.ParserImpl.parseFieldIfnullIf(ParserImpl.java:8198) at org.jooq.impl.ParserImpl.parseTerm(ParserImpl.java:6008) at org.jooq.impl.ParserImpl.parseUnaryOps(ParserImpl.java:5723) at org.jooq.impl.ParserImpl.parseExp(ParserImpl.java:5703) at org.jooq.impl.ParserImpl.parseFactor(ParserImpl.java:5680) at org.jooq.impl.ParserImpl.parseSum(ParserImpl.java:5632) at org.jooq.impl.ParserImpl.parseNumericOp(ParserImpl.java:5617) at org.jooq.impl.ParserImpl.parseCollated(ParserImpl.java:5598) at org.jooq.impl.ParserImpl.parseConcat(ParserImpl.java:5588) at org.jooq.impl.ParserImpl.parsePredicate(ParserImpl.java:4618) at org.jooq.impl.ParserImpl.parseNot(ParserImpl.java:4584) at org.jooq.impl.ParserImpl.parseAnd(ParserImpl.java:4574) at org.jooq.impl.ParserImpl.parseOr(ParserImpl.java:4565) at org.jooq.impl.ParserImpl.parseField(ParserImpl.java:5505) at org.jooq.impl.ParserImpl.parseField(ParserImpl.java:5428) at org.jooq.impl.ParserImpl.parseSelectList(ParserImpl.java:5351) at org.jooq.impl.ParserImpl.parseQueryPrimary(ParserImpl.java:1288) at org.jooq.impl.ParserImpl.parseQueryTerm(ParserImpl.java:1211) at org.jooq.impl.ParserImpl.parseQueryExpressionBody(ParserImpl.java:1182) at org.jooq.impl.ParserImpl.parseSelect(ParserImpl.java:1050) at org.jooq.impl.ParserImpl.parseSelect(ParserImpl.java:1042) at org.jooq.impl.ParserImpl.parseQuery(ParserImpl.java:919) at org.jooq.impl.ParserImpl.parseResultQuery(ParserImpl.java:660) at org.jooq.impl.ParserImpl.parseResultQuery(ParserImpl.java:654)` ### Versions - jOOQ: 3.13 - Java: 11 - Database (include vendor): MYSQL 5.7 - OS: Ubuntu 18.04 - JDBC Driver: com.mysql.jdbc.Driver
1.0
JOOQ parseResultQuery failing to parse on SQL query containing date_add - ### Expected behavior On passing a MYSQL query with a **date_add**, it is supposed to execute as expected. ### Actual behavior It's throwing a Parser Exception ### Steps to reproduce the problem `String sqlQuery = " select users.id, users.created, ifnull(date_add(users.created, interval 2 hour), '0000-00-00 00:00:00') as TEST from users"` `ResultQuery jooqQuery = DSL.using(dslContext.configuration()).parser().parseResultQuery(sqlQuery);` **Result**: `org.jooq.impl.ParserException: Unknown function: [1:184] ...ers.created as User Created, ifnull(date_add([*]users.created, interval 2 hour), '0000-00-00 00:00:00') as TEST from user... at org.jooq.impl.ParserContext.exception(ParserImpl.java:11047) at org.jooq.impl.ParserImpl.parseUnaryOps(ParserImpl.java:5736) at org.jooq.impl.ParserImpl.parseExp(ParserImpl.java:5703) at org.jooq.impl.ParserImpl.parseFactor(ParserImpl.java:5680) at org.jooq.impl.ParserImpl.parseSum(ParserImpl.java:5632) at org.jooq.impl.ParserImpl.parseNumericOp(ParserImpl.java:5617) at org.jooq.impl.ParserImpl.parseCollated(ParserImpl.java:5598) at org.jooq.impl.ParserImpl.parseConcat(ParserImpl.java:5588) at org.jooq.impl.ParserImpl.parsePredicate(ParserImpl.java:4618) at org.jooq.impl.ParserImpl.parseNot(ParserImpl.java:4584) at org.jooq.impl.ParserImpl.parseAnd(ParserImpl.java:4574) at org.jooq.impl.ParserImpl.parseOr(ParserImpl.java:4565) at org.jooq.impl.ParserImpl.parseField(ParserImpl.java:5505) at org.jooq.impl.ParserImpl.parseField(ParserImpl.java:5428) at org.jooq.impl.ParserImpl.parseFieldIfnullIf(ParserImpl.java:8198) at org.jooq.impl.ParserImpl.parseTerm(ParserImpl.java:6008) at org.jooq.impl.ParserImpl.parseUnaryOps(ParserImpl.java:5723) at org.jooq.impl.ParserImpl.parseExp(ParserImpl.java:5703) at org.jooq.impl.ParserImpl.parseFactor(ParserImpl.java:5680) at org.jooq.impl.ParserImpl.parseSum(ParserImpl.java:5632) at org.jooq.impl.ParserImpl.parseNumericOp(ParserImpl.java:5617) at org.jooq.impl.ParserImpl.parseCollated(ParserImpl.java:5598) at org.jooq.impl.ParserImpl.parseConcat(ParserImpl.java:5588) at org.jooq.impl.ParserImpl.parsePredicate(ParserImpl.java:4618) at org.jooq.impl.ParserImpl.parseNot(ParserImpl.java:4584) at org.jooq.impl.ParserImpl.parseAnd(ParserImpl.java:4574) at org.jooq.impl.ParserImpl.parseOr(ParserImpl.java:4565) at org.jooq.impl.ParserImpl.parseField(ParserImpl.java:5505) at org.jooq.impl.ParserImpl.parseField(ParserImpl.java:5428) at org.jooq.impl.ParserImpl.parseSelectList(ParserImpl.java:5351) at org.jooq.impl.ParserImpl.parseQueryPrimary(ParserImpl.java:1288) at org.jooq.impl.ParserImpl.parseQueryTerm(ParserImpl.java:1211) at org.jooq.impl.ParserImpl.parseQueryExpressionBody(ParserImpl.java:1182) at org.jooq.impl.ParserImpl.parseSelect(ParserImpl.java:1050) at org.jooq.impl.ParserImpl.parseSelect(ParserImpl.java:1042) at org.jooq.impl.ParserImpl.parseQuery(ParserImpl.java:919) at org.jooq.impl.ParserImpl.parseResultQuery(ParserImpl.java:660) at org.jooq.impl.ParserImpl.parseResultQuery(ParserImpl.java:654)` ### Versions - jOOQ: 3.13 - Java: 11 - Database (include vendor): MYSQL 5.7 - OS: Ubuntu 18.04 - JDBC Driver: com.mysql.jdbc.Driver
non_main
jooq parseresultquery failing to parse on sql query containing date add expected behavior on passing a mysql query with a date add it is supposed to execute as expected actual behavior it s throwing a parser exception steps to reproduce the problem string sqlquery select users id users created ifnull date add users created interval hour as test from users resultquery jooqquery dsl using dslcontext configuration parser parseresultquery sqlquery result org jooq impl parserexception unknown function ers created as user created ifnull date add users created interval hour as test from user at org jooq impl parsercontext exception parserimpl java at org jooq impl parserimpl parseunaryops parserimpl java at org jooq impl parserimpl parseexp parserimpl java at org jooq impl parserimpl parsefactor parserimpl java at org jooq impl parserimpl parsesum parserimpl java at org jooq impl parserimpl parsenumericop parserimpl java at org jooq impl parserimpl parsecollated parserimpl java at org jooq impl parserimpl parseconcat parserimpl java at org jooq impl parserimpl parsepredicate parserimpl java at org jooq impl parserimpl parsenot parserimpl java at org jooq impl parserimpl parseand parserimpl java at org jooq impl parserimpl parseor parserimpl java at org jooq impl parserimpl parsefield parserimpl java at org jooq impl parserimpl parsefield parserimpl java at org jooq impl parserimpl parsefieldifnullif parserimpl java at org jooq impl parserimpl parseterm parserimpl java at org jooq impl parserimpl parseunaryops parserimpl java at org jooq impl parserimpl parseexp parserimpl java at org jooq impl parserimpl parsefactor parserimpl java at org jooq impl parserimpl parsesum parserimpl java at org jooq impl parserimpl parsenumericop parserimpl java at org jooq impl parserimpl parsecollated parserimpl java at org jooq impl parserimpl parseconcat parserimpl java at org jooq impl parserimpl parsepredicate parserimpl java at org jooq impl parserimpl parsenot parserimpl java at org jooq impl parserimpl parseand parserimpl java at org jooq impl parserimpl parseor parserimpl java at org jooq impl parserimpl parsefield parserimpl java at org jooq impl parserimpl parsefield parserimpl java at org jooq impl parserimpl parseselectlist parserimpl java at org jooq impl parserimpl parsequeryprimary parserimpl java at org jooq impl parserimpl parsequeryterm parserimpl java at org jooq impl parserimpl parsequeryexpressionbody parserimpl java at org jooq impl parserimpl parseselect parserimpl java at org jooq impl parserimpl parseselect parserimpl java at org jooq impl parserimpl parsequery parserimpl java at org jooq impl parserimpl parseresultquery parserimpl java at org jooq impl parserimpl parseresultquery parserimpl java versions jooq java database include vendor mysql os ubuntu jdbc driver com mysql jdbc driver
0
179,236
13,852,140,942
IssuesEvent
2020-10-15 05:53:24
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
opened
ccl/backupccl: TestRestoreAsOfSystemTime failed
C-test-failure O-robot branch-master
[(ccl/backupccl).TestRestoreAsOfSystemTime failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2366127&tab=buildLog) on [master@80e7127197f76ef35c1f6ec3984c4d49d4afde7f](https://github.com/cockroachdb/cockroach/commits/80e7127197f76ef35c1f6ec3984c4d49d4afde7f): ``` === RUN TestRestoreAsOfSystemTime * * WARNING: disk slowness detected: unable to sync log files within 10s * ERROR: exit status 255 1 runs completed, 1 failures, over 36m32s context canceled ``` <details><summary>More</summary><p> Parameters: - TAGS= - GOFLAGS=-race -parallel=2 ``` make stressrace TESTS=TestRestoreAsOfSystemTime PKG=./pkg/ccl/backupccl TESTTIMEOUT=5m STRESSFLAGS='-timeout 5m' 2>&1 ``` Related: - #55359 ccl/backupccl: TestRestoreAsOfSystemTime failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-20.2](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-20.2) [See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2ATestRestoreAsOfSystemTime.%2A&sort=title&restgroup=false&display=lastcommented+project) <sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
1.0
ccl/backupccl: TestRestoreAsOfSystemTime failed - [(ccl/backupccl).TestRestoreAsOfSystemTime failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2366127&tab=buildLog) on [master@80e7127197f76ef35c1f6ec3984c4d49d4afde7f](https://github.com/cockroachdb/cockroach/commits/80e7127197f76ef35c1f6ec3984c4d49d4afde7f): ``` === RUN TestRestoreAsOfSystemTime * * WARNING: disk slowness detected: unable to sync log files within 10s * ERROR: exit status 255 1 runs completed, 1 failures, over 36m32s context canceled ``` <details><summary>More</summary><p> Parameters: - TAGS= - GOFLAGS=-race -parallel=2 ``` make stressrace TESTS=TestRestoreAsOfSystemTime PKG=./pkg/ccl/backupccl TESTTIMEOUT=5m STRESSFLAGS='-timeout 5m' 2>&1 ``` Related: - #55359 ccl/backupccl: TestRestoreAsOfSystemTime failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-20.2](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-20.2) [See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2ATestRestoreAsOfSystemTime.%2A&sort=title&restgroup=false&display=lastcommented+project) <sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
non_main
ccl backupccl testrestoreasofsystemtime failed on run testrestoreasofsystemtime warning disk slowness detected unable to sync log files within error exit status runs completed failures over context canceled more parameters tags goflags race parallel make stressrace tests testrestoreasofsystemtime pkg pkg ccl backupccl testtimeout stressflags timeout related ccl backupccl testrestoreasofsystemtime failed powered by
0
1,747
6,574,941,788
IssuesEvent
2017-09-11 14:33:52
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
vsphere_guest: index out of range exception while reconfiguring disk size
affects_2.1 bug_report cloud vmware waiting_on_maintainer
<!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible_module_vsphere_guest ##### ANSIBLE VERSION ``` ansible 2.1.2.0 ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ##### OS / ENVIRONMENT <!--- --> Centos7 ##### SUMMARY got index out of range exception while configuring vm with vsphere_guest ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> <!--- Paste example playbooks or commands between quotes below --> ``` --- - hosts: localhost gather_facts: false connection: local roles: - vm_create ... --- # tasks file for vm_create - name: check for dependency python-pip yum: name="{{item}}" state=latest with_items: - python-pip - name: check for dependencies pip: name="{{item}}" state=latest with_items: - pysphere - pyvmomi - name: create vm from template vsphere_guest: vcenter_hostname: "{{vcenter_hostname}}" username: "{{ vcenter_user }}" password: "{{ vcenter_pass }}" guest: "test_01" from_template: yes template_src: "{{ vm_template }}" cluster: "{{ cluster }}" resource_pool: "{{ resource_pool }}" power_on_after_clone: "no" tags: - create - name: reconfigure vm vsphere_guest: vcenter_hostname: "{{ vcenter_hostname }}" username: "{{ vcenter_user }}" password: "{{ vcenter_pass }}" guest: "test_01" state: reconfigured vm_extra_config: notes: "created with ansible vsphere" vm_disk: disk1: size_gb: "{{ disk_main }}" type: thin datastore: "{{ datastore }}" disk2: size_gb: "{{ disk_var }}" type: thin datastore: "{{ datastore }}" disk3: size_gb: "{{ disk_opt }}" type: thin datastore: "{{ datastore }}" disk4: size_gb: "{{ disk_home }}" type: thin datastore: "{{ datastore }}" vm_nic: nic1: type: "vmxnet3" network: "VM Network" network_type: "standard" vm_hardware: memory_mb: "{{ memory }}" num_cpus: "{{ cpucount }}" osid: "{{ osid }}" scsi: paravirtual esxi: datacenter: "{{ datacenter }}" hostname: "{{ esxi_host }}" ... ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> normal playthrough with reconfigured disk-sizes ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> creating vm from template works fine, but reconfiguring fails with exception <!--- Paste verbatim command output between quotes below --> ``` An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File "/tmp/ansible_BSLEdg/ansible_module_vsphere_guest.py", line 1879, in <module> main() File "/tmp/ansible_BSLEdg/ansible_module_vsphere_guest.py", line 1806, in main force=force File "/tmp/ansible_BSLEdg/ansible_module_vsphere_guest.py", line 842, in reconfigure_vm module, vm_disk, changes) File "/tmp/ansible_BSLEdg/ansible_module_vsphere_guest.py", line 773, in update_disks hdd_id = vm._devices[dev_key]['label'].split()[2] IndexError: list index out of range ```
True
vsphere_guest: index out of range exception while reconfiguring disk size - <!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ansible_module_vsphere_guest ##### ANSIBLE VERSION ``` ansible 2.1.2.0 ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ##### OS / ENVIRONMENT <!--- --> Centos7 ##### SUMMARY got index out of range exception while configuring vm with vsphere_guest ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> <!--- Paste example playbooks or commands between quotes below --> ``` --- - hosts: localhost gather_facts: false connection: local roles: - vm_create ... --- # tasks file for vm_create - name: check for dependency python-pip yum: name="{{item}}" state=latest with_items: - python-pip - name: check for dependencies pip: name="{{item}}" state=latest with_items: - pysphere - pyvmomi - name: create vm from template vsphere_guest: vcenter_hostname: "{{vcenter_hostname}}" username: "{{ vcenter_user }}" password: "{{ vcenter_pass }}" guest: "test_01" from_template: yes template_src: "{{ vm_template }}" cluster: "{{ cluster }}" resource_pool: "{{ resource_pool }}" power_on_after_clone: "no" tags: - create - name: reconfigure vm vsphere_guest: vcenter_hostname: "{{ vcenter_hostname }}" username: "{{ vcenter_user }}" password: "{{ vcenter_pass }}" guest: "test_01" state: reconfigured vm_extra_config: notes: "created with ansible vsphere" vm_disk: disk1: size_gb: "{{ disk_main }}" type: thin datastore: "{{ datastore }}" disk2: size_gb: "{{ disk_var }}" type: thin datastore: "{{ datastore }}" disk3: size_gb: "{{ disk_opt }}" type: thin datastore: "{{ datastore }}" disk4: size_gb: "{{ disk_home }}" type: thin datastore: "{{ datastore }}" vm_nic: nic1: type: "vmxnet3" network: "VM Network" network_type: "standard" vm_hardware: memory_mb: "{{ memory }}" num_cpus: "{{ cpucount }}" osid: "{{ osid }}" scsi: paravirtual esxi: datacenter: "{{ datacenter }}" hostname: "{{ esxi_host }}" ... ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> normal playthrough with reconfigured disk-sizes ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> creating vm from template works fine, but reconfiguring fails with exception <!--- Paste verbatim command output between quotes below --> ``` An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File "/tmp/ansible_BSLEdg/ansible_module_vsphere_guest.py", line 1879, in <module> main() File "/tmp/ansible_BSLEdg/ansible_module_vsphere_guest.py", line 1806, in main force=force File "/tmp/ansible_BSLEdg/ansible_module_vsphere_guest.py", line 842, in reconfigure_vm module, vm_disk, changes) File "/tmp/ansible_BSLEdg/ansible_module_vsphere_guest.py", line 773, in update_disks hdd_id = vm._devices[dev_key]['label'].split()[2] IndexError: list index out of range ```
main
vsphere guest index out of range exception while reconfiguring disk size issue type bug report component name ansible module vsphere guest ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment summary got index out of range exception while configuring vm with vsphere guest steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used hosts localhost gather facts false connection local roles vm create tasks file for vm create name check for dependency python pip yum name item state latest with items python pip name check for dependencies pip name item state latest with items pysphere pyvmomi name create vm from template vsphere guest vcenter hostname vcenter hostname username vcenter user password vcenter pass guest test from template yes template src vm template cluster cluster resource pool resource pool power on after clone no tags create name reconfigure vm vsphere guest vcenter hostname vcenter hostname username vcenter user password vcenter pass guest test state reconfigured vm extra config notes created with ansible vsphere vm disk size gb disk main type thin datastore datastore size gb disk var type thin datastore datastore size gb disk opt type thin datastore datastore size gb disk home type thin datastore datastore vm nic type network vm network network type standard vm hardware memory mb memory num cpus cpucount osid osid scsi paravirtual esxi datacenter datacenter hostname esxi host expected results normal playthrough with reconfigured disk sizes actual results creating vm from template works fine but reconfiguring fails with exception an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible bsledg ansible module vsphere guest py line in main file tmp ansible bsledg ansible module vsphere guest py line in main force force file tmp ansible bsledg ansible module vsphere guest py line in reconfigure vm module vm disk changes file tmp ansible bsledg ansible module vsphere guest py line in update disks hdd id vm devices split indexerror list index out of range
1
56,611
15,214,786,941
IssuesEvent
2021-02-17 13:39:36
department-of-veterans-affairs/va.gov-cms
https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms
opened
CLP required segments missing asterisk
Campaign landing page Defect
**Describe the defect** **To Reproduce** Steps to reproduce the behavior: 1. Go to /node/add/campaign_landing_page 2. Required segments are missing asterisk. **Expected** Required segments are missing should have an asterisk. **Screenshots** ![image](https://user-images.githubusercontent.com/643678/108212099-7c554f80-70fb-11eb-8702-b5bc23fcb9b2.png)
1.0
CLP required segments missing asterisk - **Describe the defect** **To Reproduce** Steps to reproduce the behavior: 1. Go to /node/add/campaign_landing_page 2. Required segments are missing asterisk. **Expected** Required segments are missing should have an asterisk. **Screenshots** ![image](https://user-images.githubusercontent.com/643678/108212099-7c554f80-70fb-11eb-8702-b5bc23fcb9b2.png)
non_main
clp required segments missing asterisk describe the defect to reproduce steps to reproduce the behavior go to node add campaign landing page required segments are missing asterisk expected required segments are missing should have an asterisk screenshots
0
37,269
15,223,734,588
IssuesEvent
2021-02-18 03:24:38
Azure/azure-powershell
https://api.github.com/repos/Azure/azure-powershell
closed
Unable to deserialize response for get-AzDataFactoryV2
Data Factory Service Attention customer-reported question
<!-- - Make sure you are able to reproduce this issue on the latest released version of Az - https://www.powershellgallery.com/packages/Az - Please search the existing issues to see if there has been a similar issue filed - For issue related to importing a module, please refer to our troubleshooting guide: - https://github.com/Azure/azure-powershell/blob/master/documentation/troubleshoot-module-load.md --> ## Description Hi, @NowinskiK discovered this bug here: https://github.com/SQLPlayer/azure.datafactory.tools/issues/85 When I run `Get-AzDataFactoryV2LinkedService` against my ADF, I get an error due to the linked service below. If you need any more information, please let me know! Thanks ``` { "name": "REST_AUTHBASIC_GEN", "type": "Microsoft.DataFactory/factories/linkedservices", "properties": { "parameters": { "baseUrl": { "type": "string", "defaultValue": "*********" }, "authSecret": { "type": "string", "defaultValue": "none" } }, "annotations": [], "type": "RestService", "typeProperties": { "url": "@{linkedService().baseUrl}", "enableServerCertificateValidation": true, "authenticationType": "Basic", "userName": "*********", "password": "@{linkedService().authSecret}" } } } ``` ## Environment data <!-- Please run $PSVersionTable and paste the output in the below code block If running the Docker container image, indicate the tag of the image used and the version of Docker engine--> ``` 2021-02-09T12:37:01.4005246Z Name Value 2021-02-09T12:37:01.4019914Z ---- ----- 2021-02-09T12:37:01.4036660Z PSVersion 5.1.14393.3866 2021-02-09T12:37:01.4037395Z PSEdition Desktop 2021-02-09T12:37:01.4038159Z PSCompatibleVersions {1.0, 2.0, 3.0, 4.0...} 2021-02-09T12:37:01.4040107Z BuildVersion 10.0.14393.3866 2021-02-09T12:37:01.4040758Z CLRVersion 4.0.30319.42000 2021-02-09T12:37:01.4041403Z WSManStackVersion 3.0 2021-02-09T12:37:01.4042100Z PSRemotingProtocolVersion 2.3 2021-02-09T12:37:01.4042734Z SerializationVersion 1.1.0.1 ``` ## Module versions <!-- Please run (Get-Module -ListAvailable) and paste the output in the below code block --> I am running this from DevOps, Microsoft hosted agent, the only thing I install is: ```powershell: Install-Module -Name azure.datafactory.tools -Scope CurrentUser -Force Import-Module -Name azure.datafactory.tools ``` ## Debug output <!-- Set $DebugPreference='Continue' before running the repro and paste the resulting debug stream in the below code block ATTENTION: Be sure to remove any sensitive information that may be in the logs --> ``` 2021-02-08T18:28:58.5176788Z Azure Data Factory (instance) loaded. 2021-02-08T18:28:59.7374802Z DataSets: 20 object(s) loaded. 2021-02-08T18:29:00.8439176Z IntegrationRuntimes: 2 object(s) loaded. 2021-02-08T18:29:01.1717909Z ##[debug]Error record: 2021-02-08T18:29:01.2492297Z ##[debug]Get-AzDataFactoryV2LinkedService : Unable to deserialize the response. 2021-02-08T18:29:01.2504029Z ##[debug]At C:\Users\VssAdministrator\Documents\WindowsPowerShell\Modules\azure.datafactory.tools\0.50.0\public\Get-AdfFromService.ps1:44 char:27 2021-02-08T18:29:01.2516885Z ##[debug]+ ... dServices = Get-AzDataFactoryV2LinkedService -ResourceGroupName "$Res ... 
2021-02-08T18:29:01.2528934Z ##[debug]+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2021-02-08T18:29:01.2540544Z ##[debug] + CategoryInfo : CloseError: (:) [Get-AzDataFactoryV2LinkedService], SerializationException 2021-02-08T18:29:01.2552972Z ##[debug] + FullyQualifiedErrorId : Microsoft.Azure.Commands.DataFactoryV2.GetAzureDataFactoryLinkedServiceCommand 2021-02-08T18:29:01.2563554Z ##[debug] 2021-02-08T18:29:01.2580886Z ##[debug]Script stack trace: 2021-02-08T18:29:01.2614328Z ##[debug]at Get-AdfFromService, C:\Users\VssAdministrator\Documents\WindowsPowerShell\Modules\azure.datafactory.tools\0.50.0\public\Get-AdfFromService.ps1: line 44 2021-02-08T18:29:01.2627206Z ##[debug]at <ScriptBlock>, D:\a\_temp\397591c2-b806-460c-9108-abec7135dcd6.ps1: line 39 2021-02-08T18:29:01.2640417Z ##[debug]at <ScriptBlock>, <No file>: line 1 2021-02-08T18:29:01.2657880Z ##[debug]Exception: 2021-02-08T18:29:01.2745660Z ##[debug]Microsoft.Rest.SerializationException: Unable to deserialize the response. ---> Newtonsoft.Json.JsonReaderException: Error reading JObject from JsonReader. Current JsonReader item is not an object: String. Path 'typeProperties.password', line 1, position 2808. 2021-02-08T18:29:01.2756374Z ##[debug] at Newtonsoft.Json.Linq.JObject.Load(JsonReader reader, JsonLoadSettings settings) 2021-02-08T18:29:01.2767925Z ##[debug] at Microsoft.Rest.Serialization.PolymorphicDeserializeJsonConverter`1.ReadJson(JsonReader reader, Type objectType, Object existingValue, JsonSerializer serializer) 2021-02-08T18:29:01.2781510Z ##[debug] at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.DeserializeConvertable(JsonConverter converter, JsonReader reader, Type objectType, Object existingValue) 2021-02-08T18:29:01.2792862Z ##[debug] at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.Deserialize(JsonReader reader, Type objectType, Boolean checkAdditionalContent) 2021-02-08T18:29:01.2802888Z ##[debug] at Newtonsoft.Json.Linq.JToken.ToObject(Type objectType, JsonSerializer jsonSerializer) 2021-02-08T18:29:01.2813976Z ##[debug] at Microsoft.Rest.Serialization.PolymorphicDeserializeJsonConverter`1.ReadJson(JsonReader reader, Type objectType, Object existingValue, JsonSerializer serializer) 2021-02-08T18:29:01.2830678Z ##[debug] at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.DeserializeConvertable(JsonConverter converter, JsonReader reader, Type objectType, Object existingValue) 2021-02-08T18:29:01.2860423Z ##[debug] at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.SetPropertyValue(JsonProperty property, JsonConverter propertyConverter, JsonContainerContract containerContract, JsonProperty containerProperty, JsonReader reader, Object target) 2021-02-08T18:29:01.2870877Z ##[debug] at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.PopulateObject(Object newObject, JsonReader reader, JsonObjectContract contract, JsonProperty member, String id) 2021-02-08T18:29:01.2881562Z ##[debug] at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateObject(JsonReader reader, Type objectType, JsonContract contract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerMember, Object existingValue) 2021-02-08T18:29:01.2892034Z ##[debug] at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.PopulateList(IList list, JsonReader reader, JsonArrayContract contract, JsonProperty containerProperty, String id) 2021-02-08T18:29:01.2911488Z ##[debug] at 
Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateList(JsonReader reader, Type objectType, JsonContract contract, JsonProperty member, Object existingValue, String id) 2021-02-08T18:29:01.2936270Z ##[debug] at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.SetPropertyValue(JsonProperty property, JsonConverter propertyConverter, JsonContainerContract containerContract, JsonProperty containerProperty, JsonReader reader, Object target) 2021-02-08T18:29:01.2949225Z ##[debug] at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.PopulateObject(Object newObject, JsonReader reader, JsonObjectContract contract, JsonProperty member, String id) 2021-02-08T18:29:01.2962256Z ##[debug] at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateObject(JsonReader reader, Type objectType, JsonContract contract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerMember, Object existingValue) 2021-02-08T18:29:01.2972620Z ##[debug] at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.Deserialize(JsonReader reader, Type objectType, Boolean checkAdditionalContent) 2021-02-08T18:29:01.2986614Z ##[debug] at Newtonsoft.Json.JsonSerializer.DeserializeInternal(JsonReader reader, Type objectType) 2021-02-08T18:29:01.2998535Z ##[debug] at Microsoft.Rest.Serialization.SafeJsonConvert.DeserializeObject[T](String json, JsonSerializerSettings settings) 2021-02-08T18:29:01.3009068Z ##[debug] at Microsoft.Azure.Management.DataFactory.LinkedServicesOperations.<ListByFactoryWithHttpMessagesAsync>d__5.MoveNext() 2021-02-08T18:29:01.3019700Z ##[debug] --- End of inner exception stack trace --- 2021-02-08T18:29:01.3030327Z ##[debug] at Microsoft.Azure.Management.DataFactory.LinkedServicesOperations.<ListByFactoryWithHttpMessagesAsync>d__5.MoveNext() 2021-02-08T18:29:01.3040510Z ##[debug]--- End of stack trace from previous location where exception was thrown --- 2021-02-08T18:29:01.3051684Z ##[debug] at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() 2021-02-08T18:29:01.3062202Z ##[debug] at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) 2021-02-08T18:29:01.3074609Z ##[debug] at Microsoft.Azure.Management.DataFactory.LinkedServicesOperationsExtensions.<ListByFactoryAsync>d__1.MoveNext() 2021-02-08T18:29:01.3085232Z ##[debug]--- End of stack trace from previous location where exception was thrown --- 2021-02-08T18:29:01.3097779Z ##[debug] at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() 2021-02-08T18:29:01.3107934Z ##[debug] at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) 2021-02-08T18:29:01.3119011Z ##[debug] at Microsoft.Azure.Management.DataFactory.LinkedServicesOperationsExtensions.ListByFactory(ILinkedServicesOperations operations, String resourceGroupName, String factoryName) 2021-02-08T18:29:01.3129323Z ##[debug] at Microsoft.Azure.Commands.DataFactoryV2.DataFactoryClient.ListLinkedServices(AdfEntityFilterOptions filterOptions) 2021-02-08T18:29:01.3140276Z ##[debug] at Microsoft.Azure.Commands.DataFactoryV2.DataFactoryClient.FilterPSLinkedServices(AdfEntityFilterOptions filterOptions) 2021-02-08T18:29:01.3152830Z ##[debug] at Microsoft.Azure.Commands.DataFactoryV2.GetAzureDataFactoryLinkedServiceCommand.ExecuteCmdlet() 2021-02-08T18:29:01.3161391Z ##[debug] at Microsoft.WindowsAzure.Commands.Utilities.Common.AzurePSCmdlet.ProcessRecord() 2021-02-08T18:29:01.3391470Z ##[error]Unable to deserialize the response. 
2021-02-08T18:29:01.3405939Z ##[debug]Processed: ##vso[task.logissue type=error]Unable to deserialize the response. 2021-02-08T18:29:01.4281449Z ##[debug]Exit code: 1 2021-02-08T18:29:01.4755596Z ##[debug]Leaving Invoke-VstsTool. 2021-02-08T18:29:01.4756553Z ##[error]PowerShell exited with code '1'. 2021-02-08T18:29:01.4757537Z ##[debug]Processed: ##vso[task.logissue type=error]PowerShell exited with code '1'. 2021-02-08T18:29:01.4790415Z ##[debug]Processed: ##vso[task.complete result=Failed]Error detected 2021-02-08T18:29:01.4808687Z ##[debug]Loading module from path 'D:\a\_tasks\AzurePowerShell_72a1931b-effb-4d2e-8fd8-f8472a07cb62\5.179.0\ps_modules\VstsAzureHelpers_\VstsAzureHelpers_.psm1'. 2021-02-08T18:29:01.5217853Z ##[debug]$OVERRIDING $global:DebugPreference from 'Continue' to 'SilentlyContinue'. 2021-02-08T18:29:01.5636401Z ##[debug]Loading resource strings from: D:\a\_tasks\AzurePowerShell_72a1931b-effb-4d2e-8fd8-f8472a07cb62\5.179.0\ps_modules\VstsAzureHelpers_/module.json 2021-02-08T18:29:01.5980430Z ##[debug]Loaded 13 strings. 2021-02-08T18:29:01.5981026Z ##[debug]SYSTEM_CULTURE: 'en-US' 2021-02-08T18:29:01.6022620Z ##[debug]Loading resource strings from: D:\a\_tasks\AzurePowerShell_72a1931b-effb-4d2e-8fd8-f8472a07cb62\5.179.0\ps_modules\VstsAzureHelpers_\Strings\resources.resjson\en-US\resources.resjson 2021-02-08T18:29:01.6311941Z ##[debug]Loaded 13 strings. ``` ## Error output <!-- Please run Resolve-AzError and paste the output in the below code block ATTENTION: Be sure to remove any sensitive information that may be in the logs --> ``` 2021-02-09T12:51:48.9094113Z WARNING: Upcoming breaking changes in the cmdlet 'Resolve-AzError' : 2021-02-09T12:51:48.9094619Z 2021-02-09T12:51:48.9095217Z The `Resolve-Error` alias will be removed in a future release. Please change any scripts that use this alias to use 2021-02-09T12:51:48.9096337Z `Resolve-AzError` instead. 2021-02-09T12:51:48.9096645Z 2021-02-09T12:51:48.9097272Z Note : Go to https://aka.ms/azps-changewarnings for steps to suppress this breaking change warning, and other 2021-02-09T12:51:48.9098046Z information on breaking changes in Azure PowerShell. 2021-02-09T12:51:48.9554248Z 2021-02-09T12:51:48.9564390Z 2021-02-09T12:51:48.9565441Z HistoryId: 1 2021-02-09T12:51:48.9565914Z 2021-02-09T12:51:48.9566594Z 2021-02-09T12:51:48.9616029Z Message : Unable to deserialize the response. 
2021-02-09T12:51:48.9638723Z StackTrace : at Microsoft.Azure.Management.DataFactory.LinkedServicesOperations.<ListByFactoryWithHttpMessagesAs 2021-02-09T12:51:48.9639612Z ync>d__5.MoveNext() 2021-02-09T12:51:48.9640302Z --- End of stack trace from previous location where exception was thrown --- 2021-02-09T12:51:48.9640927Z at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() 2021-02-09T12:51:48.9641569Z at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) 2021-02-09T12:51:48.9657480Z at Microsoft.Azure.Management.DataFactory.LinkedServicesOperationsExtensions.<ListByFactoryAsync>d_ 2021-02-09T12:51:48.9658150Z _1.MoveNext() 2021-02-09T12:51:48.9659595Z --- End of stack trace from previous location where exception was thrown --- 2021-02-09T12:51:48.9660244Z at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() 2021-02-09T12:51:48.9661522Z at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) 2021-02-09T12:51:48.9676475Z at Microsoft.Azure.Management.DataFactory.LinkedServicesOperationsExtensions.ListByFactory(ILinkedS 2021-02-09T12:51:48.9677481Z ervicesOperations operations, String resourceGroupName, String factoryName) 2021-02-09T12:51:48.9678008Z at 2021-02-09T12:51:48.9678488Z Microsoft.Azure.Commands.DataFactoryV2.DataFactoryClient.ListLinkedServices(AdfEntityFilterOptions 2021-02-09T12:51:48.9679654Z filterOptions) 2021-02-09T12:51:48.9698301Z at Microsoft.Azure.Commands.DataFactoryV2.DataFactoryClient.FilterPSLinkedServices(AdfEntityFilterO 2021-02-09T12:51:48.9698864Z ptions filterOptions) 2021-02-09T12:51:48.9699948Z at Microsoft.Azure.Commands.DataFactoryV2.GetAzureDataFactoryLinkedServiceCommand.ExecuteCmdlet() 2021-02-09T12:51:48.9700965Z at Microsoft.WindowsAzure.Commands.Utilities.Common.AzurePSCmdlet.ProcessRecord() 2021-02-09T12:51:48.9701494Z Exception : Microsoft.Rest.SerializationException 2021-02-09T12:51:48.9701922Z InvocationInfo : {Get-AzDataFactoryV2LinkedService} 2021-02-09T12:51:48.9702497Z Line : $adf.LinkedServices = Get-AzDataFactoryV2LinkedService -ResourceGroupName "$ResourceGroupName" 2021-02-09T12:51:48.9703807Z -DataFactoryName "$FactoryName" | ToArray 2021-02-09T12:51:48.9704194Z 2021-02-09T12:51:48.9725050Z Position : At C:\Users\VssAdministrator\Documents\WindowsPowerShell\Modules\azure.datafactory.tools\0.50.0\public 2021-02-09T12:51:48.9725683Z \Get-AdfFromService.ps1:44 char:27 2021-02-09T12:51:48.9726650Z + ... dServices = Get-AzDataFactoryV2LinkedService -ResourceGroupName "$Res ... 2021-02-09T12:51:48.9727611Z + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2021-02-09T12:51:48.9728404Z HistoryId : 1 2021-02-09T12:51:48.9728867Z 2021-02-09T12:51:48.9761844Z Message : Error reading JObject from JsonReader. Current JsonReader item is not an object: String. Path 2021-02-09T12:51:48.9762602Z 'typeProperties.password', line 1, position 2808. 
2021-02-09T12:51:48.9763157Z StackTrace : at Newtonsoft.Json.Linq.JObject.Load(JsonReader reader, JsonLoadSettings settings) 2021-02-09T12:51:48.9763827Z at Microsoft.Rest.Serialization.PolymorphicDeserializeJsonConverter`1.ReadJson(JsonReader reader, 2021-02-09T12:51:48.9765487Z Type objectType, Object existingValue, JsonSerializer serializer) 2021-02-09T12:51:48.9766163Z at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.DeserializeConvertable(JsonConverter 2021-02-09T12:51:48.9767263Z converter, JsonReader reader, Type objectType, Object existingValue) 2021-02-09T12:51:48.9767915Z at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.Deserialize(JsonReader reader, Type 2021-02-09T12:51:48.9769016Z objectType, Boolean checkAdditionalContent) 2021-02-09T12:51:48.9769592Z at Newtonsoft.Json.Linq.JToken.ToObject(Type objectType, JsonSerializer jsonSerializer) 2021-02-09T12:51:48.9770967Z at Microsoft.Rest.Serialization.PolymorphicDeserializeJsonConverter`1.ReadJson(JsonReader reader, 2021-02-09T12:51:48.9771583Z Type objectType, Object existingValue, JsonSerializer serializer) 2021-02-09T12:51:48.9773008Z at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.DeserializeConvertable(JsonConverter 2021-02-09T12:51:48.9773594Z converter, JsonReader reader, Type objectType, Object existingValue) 2021-02-09T12:51:48.9774162Z at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.SetPropertyValue(JsonProperty 2021-02-09T12:51:48.9774923Z property, JsonConverter propertyConverter, JsonContainerContract containerContract, JsonProperty 2021-02-09T12:51:48.9775475Z containerProperty, JsonReader reader, Object target) 2021-02-09T12:51:48.9776073Z at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.PopulateObject(Object newObject, 2021-02-09T12:51:48.9777108Z JsonReader reader, JsonObjectContract contract, JsonProperty member, String id) 2021-02-09T12:51:48.9777710Z at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateObject(JsonReader reader, Type 2021-02-09T12:51:48.9779400Z objectType, JsonContract contract, JsonProperty member, JsonContainerContract containerContract, 2021-02-09T12:51:48.9780116Z JsonProperty containerMember, Object existingValue) 2021-02-09T12:51:48.9780660Z at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.PopulateList(IList list, JsonReader 2021-02-09T12:51:48.9781268Z reader, JsonArrayContract contract, JsonProperty containerProperty, String id) 2021-02-09T12:51:48.9781857Z at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateList(JsonReader reader, Type 2021-02-09T12:51:48.9782463Z objectType, JsonContract contract, JsonProperty member, Object existingValue, String id) 2021-02-09T12:51:48.9783049Z at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.SetPropertyValue(JsonProperty 2021-02-09T12:51:48.9783660Z property, JsonConverter propertyConverter, JsonContainerContract containerContract, JsonProperty 2021-02-09T12:51:48.9784214Z containerProperty, JsonReader reader, Object target) 2021-02-09T12:51:48.9784838Z at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.PopulateObject(Object newObject, 2021-02-09T12:51:48.9786199Z JsonReader reader, JsonObjectContract contract, JsonProperty member, String id) 2021-02-09T12:51:48.9787306Z at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateObject(JsonReader reader, Type 2021-02-09T12:51:48.9787994Z objectType, JsonContract contract, JsonProperty member, JsonContainerContract containerContract, 
2021-02-09T12:51:48.9788564Z JsonProperty containerMember, Object existingValue) 2021-02-09T12:51:48.9789129Z at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.Deserialize(JsonReader reader, Type 2021-02-09T12:51:48.9790112Z objectType, Boolean checkAdditionalContent) 2021-02-09T12:51:48.9790627Z at Newtonsoft.Json.JsonSerializer.DeserializeInternal(JsonReader reader, Type objectType) 2021-02-09T12:51:48.9791202Z at Microsoft.Rest.Serialization.SafeJsonConvert.DeserializeObject[T](String json, 2021-02-09T12:51:48.9791688Z JsonSerializerSettings settings) 2021-02-09T12:51:48.9794958Z at Microsoft.Azure.Management.DataFactory.LinkedServicesOperations.<ListByFactoryWithHttpMessagesAs 2021-02-09T12:51:48.9795499Z ync>d__5.MoveNext() 2021-02-09T12:51:48.9796331Z Exception : Newtonsoft.Json.JsonReaderException 2021-02-09T12:51:48.9796757Z InvocationInfo : {Get-AzDataFactoryV2LinkedService} 2021-02-09T12:51:48.9797557Z Line : $adf.LinkedServices = Get-AzDataFactoryV2LinkedService -ResourceGroupName "$ResourceGroupName" 2021-02-09T12:51:48.9798064Z -DataFactoryName "$FactoryName" | ToArray 2021-02-09T12:51:48.9799159Z 2021-02-09T12:51:48.9815793Z Position : At C:\Users\VssAdministrator\Documents\WindowsPowerShell\Modules\azure.datafactory.tools\0.50.0\public 2021-02-09T12:51:48.9816368Z \Get-AdfFromService.ps1:44 char:27 2021-02-09T12:51:48.9817622Z + ... dServices = Get-AzDataFactoryV2LinkedService -ResourceGroupName "$Res ... 2021-02-09T12:51:48.9818480Z + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2021-02-09T12:51:48.9819192Z HistoryId : 1 2021-02-09T12:51:48.9819621Z 2021-02-09T12:51:48.9845016Z Message : Cannot find a variable with the name 'ADF_FOLDERS'. 2021-02-09T12:51:48.9845483Z StackTrace : 2021-02-09T12:51:48.9846654Z Exception : System.Management.Automation.ItemNotFoundException 2021-02-09T12:51:48.9847045Z InvocationInfo : {Get-Variable} 2021-02-09T12:51:48.9847478Z Line : if (!(Get-Variable ADF_FOLDERS -ErrorAction:SilentlyContinue)) { 2021-02-09T12:51:48.9847891Z 2021-02-09T12:51:48.9865388Z Position : At C:\Users\VssAdministrator\Documents\WindowsPowerShell\Modules\azure.datafactory.tools\0.50.0\privat 2021-02-09T12:51:48.9866003Z e\AdfObject.class.ps1:75 char:7 2021-02-09T12:51:48.9866994Z + if (!(Get-Variable ADF_FOLDERS -ErrorAction:SilentlyContinue)) { 2021-02-09T12:51:48.9867893Z + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2021-02-09T12:51:48.9868602Z HistoryId : 1 ```
1.0
Unable to deserialize response for get-AzDataFactoryV2 - <!-- - Make sure you are able to reproduce this issue on the latest released version of Az - https://www.powershellgallery.com/packages/Az - Please search the existing issues to see if there has been a similar issue filed - For issue related to importing a module, please refer to our troubleshooting guide: - https://github.com/Azure/azure-powershell/blob/master/documentation/troubleshoot-module-load.md --> ## Description Hi, @NowinskiK discovered this bug here: https://github.com/SQLPlayer/azure.datafactory.tools/issues/85 When I run `Get-AzDataFactoryV2LinkedService` against my ADF, I get an error due to the linked service below. If you need any more information, please let me know! Thanks ``` { "name": "REST_AUTHBASIC_GEN", "type": "Microsoft.DataFactory/factories/linkedservices", "properties": { "parameters": { "baseUrl": { "type": "string", "defaultValue": "*********" }, "authSecret": { "type": "string", "defaultValue": "none" } }, "annotations": [], "type": "RestService", "typeProperties": { "url": "@{linkedService().baseUrl}", "enableServerCertificateValidation": true, "authenticationType": "Basic", "userName": "*********", "password": "@{linkedService().authSecret}" } } } ``` ## Environment data <!-- Please run $PSVersionTable and paste the output in the below code block If running the Docker container image, indicate the tag of the image used and the version of Docker engine--> ``` 2021-02-09T12:37:01.4005246Z Name Value 2021-02-09T12:37:01.4019914Z ---- ----- 2021-02-09T12:37:01.4036660Z PSVersion 5.1.14393.3866 2021-02-09T12:37:01.4037395Z PSEdition Desktop 2021-02-09T12:37:01.4038159Z PSCompatibleVersions {1.0, 2.0, 3.0, 4.0...} 2021-02-09T12:37:01.4040107Z BuildVersion 10.0.14393.3866 2021-02-09T12:37:01.4040758Z CLRVersion 4.0.30319.42000 2021-02-09T12:37:01.4041403Z WSManStackVersion 3.0 2021-02-09T12:37:01.4042100Z PSRemotingProtocolVersion 2.3 2021-02-09T12:37:01.4042734Z SerializationVersion 1.1.0.1 ``` ## Module versions <!-- Please run (Get-Module -ListAvailable) and paste the output in the below code block --> I am running this from DevOps, Microsoft hosted agent, the only thing I install is: ```powershell: Install-Module -Name azure.datafactory.tools -Scope CurrentUser -Force Import-Module -Name azure.datafactory.tools ``` ## Debug output <!-- Set $DebugPreference='Continue' before running the repro and paste the resulting debug stream in the below code block ATTENTION: Be sure to remove any sensitive information that may be in the logs --> ``` 2021-02-08T18:28:58.5176788Z Azure Data Factory (instance) loaded. 2021-02-08T18:28:59.7374802Z DataSets: 20 object(s) loaded. 2021-02-08T18:29:00.8439176Z IntegrationRuntimes: 2 object(s) loaded. 2021-02-08T18:29:01.1717909Z ##[debug]Error record: 2021-02-08T18:29:01.2492297Z ##[debug]Get-AzDataFactoryV2LinkedService : Unable to deserialize the response. 2021-02-08T18:29:01.2504029Z ##[debug]At C:\Users\VssAdministrator\Documents\WindowsPowerShell\Modules\azure.datafactory.tools\0.50.0\public\Get-AdfFromService.ps1:44 char:27 2021-02-08T18:29:01.2516885Z ##[debug]+ ... dServices = Get-AzDataFactoryV2LinkedService -ResourceGroupName "$Res ... 
2021-02-08T18:29:01.2528934Z ##[debug]+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2021-02-08T18:29:01.2540544Z ##[debug] + CategoryInfo : CloseError: (:) [Get-AzDataFactoryV2LinkedService], SerializationException 2021-02-08T18:29:01.2552972Z ##[debug] + FullyQualifiedErrorId : Microsoft.Azure.Commands.DataFactoryV2.GetAzureDataFactoryLinkedServiceCommand 2021-02-08T18:29:01.2563554Z ##[debug] 2021-02-08T18:29:01.2580886Z ##[debug]Script stack trace: 2021-02-08T18:29:01.2614328Z ##[debug]at Get-AdfFromService, C:\Users\VssAdministrator\Documents\WindowsPowerShell\Modules\azure.datafactory.tools\0.50.0\public\Get-AdfFromService.ps1: line 44 2021-02-08T18:29:01.2627206Z ##[debug]at <ScriptBlock>, D:\a\_temp\397591c2-b806-460c-9108-abec7135dcd6.ps1: line 39 2021-02-08T18:29:01.2640417Z ##[debug]at <ScriptBlock>, <No file>: line 1 2021-02-08T18:29:01.2657880Z ##[debug]Exception: 2021-02-08T18:29:01.2745660Z ##[debug]Microsoft.Rest.SerializationException: Unable to deserialize the response. ---> Newtonsoft.Json.JsonReaderException: Error reading JObject from JsonReader. Current JsonReader item is not an object: String. Path 'typeProperties.password', line 1, position 2808. 2021-02-08T18:29:01.2756374Z ##[debug] at Newtonsoft.Json.Linq.JObject.Load(JsonReader reader, JsonLoadSettings settings) 2021-02-08T18:29:01.2767925Z ##[debug] at Microsoft.Rest.Serialization.PolymorphicDeserializeJsonConverter`1.ReadJson(JsonReader reader, Type objectType, Object existingValue, JsonSerializer serializer) 2021-02-08T18:29:01.2781510Z ##[debug] at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.DeserializeConvertable(JsonConverter converter, JsonReader reader, Type objectType, Object existingValue) 2021-02-08T18:29:01.2792862Z ##[debug] at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.Deserialize(JsonReader reader, Type objectType, Boolean checkAdditionalContent) 2021-02-08T18:29:01.2802888Z ##[debug] at Newtonsoft.Json.Linq.JToken.ToObject(Type objectType, JsonSerializer jsonSerializer) 2021-02-08T18:29:01.2813976Z ##[debug] at Microsoft.Rest.Serialization.PolymorphicDeserializeJsonConverter`1.ReadJson(JsonReader reader, Type objectType, Object existingValue, JsonSerializer serializer) 2021-02-08T18:29:01.2830678Z ##[debug] at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.DeserializeConvertable(JsonConverter converter, JsonReader reader, Type objectType, Object existingValue) 2021-02-08T18:29:01.2860423Z ##[debug] at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.SetPropertyValue(JsonProperty property, JsonConverter propertyConverter, JsonContainerContract containerContract, JsonProperty containerProperty, JsonReader reader, Object target) 2021-02-08T18:29:01.2870877Z ##[debug] at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.PopulateObject(Object newObject, JsonReader reader, JsonObjectContract contract, JsonProperty member, String id) 2021-02-08T18:29:01.2881562Z ##[debug] at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateObject(JsonReader reader, Type objectType, JsonContract contract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerMember, Object existingValue) 2021-02-08T18:29:01.2892034Z ##[debug] at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.PopulateList(IList list, JsonReader reader, JsonArrayContract contract, JsonProperty containerProperty, String id) 2021-02-08T18:29:01.2911488Z ##[debug] at 
Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateList(JsonReader reader, Type objectType, JsonContract contract, JsonProperty member, Object existingValue, String id) 2021-02-08T18:29:01.2936270Z ##[debug] at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.SetPropertyValue(JsonProperty property, JsonConverter propertyConverter, JsonContainerContract containerContract, JsonProperty containerProperty, JsonReader reader, Object target) 2021-02-08T18:29:01.2949225Z ##[debug] at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.PopulateObject(Object newObject, JsonReader reader, JsonObjectContract contract, JsonProperty member, String id) 2021-02-08T18:29:01.2962256Z ##[debug] at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateObject(JsonReader reader, Type objectType, JsonContract contract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerMember, Object existingValue) 2021-02-08T18:29:01.2972620Z ##[debug] at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.Deserialize(JsonReader reader, Type objectType, Boolean checkAdditionalContent) 2021-02-08T18:29:01.2986614Z ##[debug] at Newtonsoft.Json.JsonSerializer.DeserializeInternal(JsonReader reader, Type objectType) 2021-02-08T18:29:01.2998535Z ##[debug] at Microsoft.Rest.Serialization.SafeJsonConvert.DeserializeObject[T](String json, JsonSerializerSettings settings) 2021-02-08T18:29:01.3009068Z ##[debug] at Microsoft.Azure.Management.DataFactory.LinkedServicesOperations.<ListByFactoryWithHttpMessagesAsync>d__5.MoveNext() 2021-02-08T18:29:01.3019700Z ##[debug] --- End of inner exception stack trace --- 2021-02-08T18:29:01.3030327Z ##[debug] at Microsoft.Azure.Management.DataFactory.LinkedServicesOperations.<ListByFactoryWithHttpMessagesAsync>d__5.MoveNext() 2021-02-08T18:29:01.3040510Z ##[debug]--- End of stack trace from previous location where exception was thrown --- 2021-02-08T18:29:01.3051684Z ##[debug] at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() 2021-02-08T18:29:01.3062202Z ##[debug] at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) 2021-02-08T18:29:01.3074609Z ##[debug] at Microsoft.Azure.Management.DataFactory.LinkedServicesOperationsExtensions.<ListByFactoryAsync>d__1.MoveNext() 2021-02-08T18:29:01.3085232Z ##[debug]--- End of stack trace from previous location where exception was thrown --- 2021-02-08T18:29:01.3097779Z ##[debug] at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() 2021-02-08T18:29:01.3107934Z ##[debug] at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) 2021-02-08T18:29:01.3119011Z ##[debug] at Microsoft.Azure.Management.DataFactory.LinkedServicesOperationsExtensions.ListByFactory(ILinkedServicesOperations operations, String resourceGroupName, String factoryName) 2021-02-08T18:29:01.3129323Z ##[debug] at Microsoft.Azure.Commands.DataFactoryV2.DataFactoryClient.ListLinkedServices(AdfEntityFilterOptions filterOptions) 2021-02-08T18:29:01.3140276Z ##[debug] at Microsoft.Azure.Commands.DataFactoryV2.DataFactoryClient.FilterPSLinkedServices(AdfEntityFilterOptions filterOptions) 2021-02-08T18:29:01.3152830Z ##[debug] at Microsoft.Azure.Commands.DataFactoryV2.GetAzureDataFactoryLinkedServiceCommand.ExecuteCmdlet() 2021-02-08T18:29:01.3161391Z ##[debug] at Microsoft.WindowsAzure.Commands.Utilities.Common.AzurePSCmdlet.ProcessRecord() 2021-02-08T18:29:01.3391470Z ##[error]Unable to deserialize the response. 
2021-02-08T18:29:01.3405939Z ##[debug]Processed: ##vso[task.logissue type=error]Unable to deserialize the response. 2021-02-08T18:29:01.4281449Z ##[debug]Exit code: 1 2021-02-08T18:29:01.4755596Z ##[debug]Leaving Invoke-VstsTool. 2021-02-08T18:29:01.4756553Z ##[error]PowerShell exited with code '1'. 2021-02-08T18:29:01.4757537Z ##[debug]Processed: ##vso[task.logissue type=error]PowerShell exited with code '1'. 2021-02-08T18:29:01.4790415Z ##[debug]Processed: ##vso[task.complete result=Failed]Error detected 2021-02-08T18:29:01.4808687Z ##[debug]Loading module from path 'D:\a\_tasks\AzurePowerShell_72a1931b-effb-4d2e-8fd8-f8472a07cb62\5.179.0\ps_modules\VstsAzureHelpers_\VstsAzureHelpers_.psm1'. 2021-02-08T18:29:01.5217853Z ##[debug]$OVERRIDING $global:DebugPreference from 'Continue' to 'SilentlyContinue'. 2021-02-08T18:29:01.5636401Z ##[debug]Loading resource strings from: D:\a\_tasks\AzurePowerShell_72a1931b-effb-4d2e-8fd8-f8472a07cb62\5.179.0\ps_modules\VstsAzureHelpers_/module.json 2021-02-08T18:29:01.5980430Z ##[debug]Loaded 13 strings. 2021-02-08T18:29:01.5981026Z ##[debug]SYSTEM_CULTURE: 'en-US' 2021-02-08T18:29:01.6022620Z ##[debug]Loading resource strings from: D:\a\_tasks\AzurePowerShell_72a1931b-effb-4d2e-8fd8-f8472a07cb62\5.179.0\ps_modules\VstsAzureHelpers_\Strings\resources.resjson\en-US\resources.resjson 2021-02-08T18:29:01.6311941Z ##[debug]Loaded 13 strings. ``` ## Error output <!-- Please run Resolve-AzError and paste the output in the below code block ATTENTION: Be sure to remove any sensitive information that may be in the logs --> ``` 2021-02-09T12:51:48.9094113Z WARNING: Upcoming breaking changes in the cmdlet 'Resolve-AzError' : 2021-02-09T12:51:48.9094619Z 2021-02-09T12:51:48.9095217Z The `Resolve-Error` alias will be removed in a future release. Please change any scripts that use this alias to use 2021-02-09T12:51:48.9096337Z `Resolve-AzError` instead. 2021-02-09T12:51:48.9096645Z 2021-02-09T12:51:48.9097272Z Note : Go to https://aka.ms/azps-changewarnings for steps to suppress this breaking change warning, and other 2021-02-09T12:51:48.9098046Z information on breaking changes in Azure PowerShell. 2021-02-09T12:51:48.9554248Z 2021-02-09T12:51:48.9564390Z 2021-02-09T12:51:48.9565441Z HistoryId: 1 2021-02-09T12:51:48.9565914Z 2021-02-09T12:51:48.9566594Z 2021-02-09T12:51:48.9616029Z Message : Unable to deserialize the response. 
2021-02-09T12:51:48.9638723Z StackTrace : at Microsoft.Azure.Management.DataFactory.LinkedServicesOperations.<ListByFactoryWithHttpMessagesAs 2021-02-09T12:51:48.9639612Z ync>d__5.MoveNext() 2021-02-09T12:51:48.9640302Z --- End of stack trace from previous location where exception was thrown --- 2021-02-09T12:51:48.9640927Z at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() 2021-02-09T12:51:48.9641569Z at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) 2021-02-09T12:51:48.9657480Z at Microsoft.Azure.Management.DataFactory.LinkedServicesOperationsExtensions.<ListByFactoryAsync>d_ 2021-02-09T12:51:48.9658150Z _1.MoveNext() 2021-02-09T12:51:48.9659595Z --- End of stack trace from previous location where exception was thrown --- 2021-02-09T12:51:48.9660244Z at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() 2021-02-09T12:51:48.9661522Z at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) 2021-02-09T12:51:48.9676475Z at Microsoft.Azure.Management.DataFactory.LinkedServicesOperationsExtensions.ListByFactory(ILinkedS 2021-02-09T12:51:48.9677481Z ervicesOperations operations, String resourceGroupName, String factoryName) 2021-02-09T12:51:48.9678008Z at 2021-02-09T12:51:48.9678488Z Microsoft.Azure.Commands.DataFactoryV2.DataFactoryClient.ListLinkedServices(AdfEntityFilterOptions 2021-02-09T12:51:48.9679654Z filterOptions) 2021-02-09T12:51:48.9698301Z at Microsoft.Azure.Commands.DataFactoryV2.DataFactoryClient.FilterPSLinkedServices(AdfEntityFilterO 2021-02-09T12:51:48.9698864Z ptions filterOptions) 2021-02-09T12:51:48.9699948Z at Microsoft.Azure.Commands.DataFactoryV2.GetAzureDataFactoryLinkedServiceCommand.ExecuteCmdlet() 2021-02-09T12:51:48.9700965Z at Microsoft.WindowsAzure.Commands.Utilities.Common.AzurePSCmdlet.ProcessRecord() 2021-02-09T12:51:48.9701494Z Exception : Microsoft.Rest.SerializationException 2021-02-09T12:51:48.9701922Z InvocationInfo : {Get-AzDataFactoryV2LinkedService} 2021-02-09T12:51:48.9702497Z Line : $adf.LinkedServices = Get-AzDataFactoryV2LinkedService -ResourceGroupName "$ResourceGroupName" 2021-02-09T12:51:48.9703807Z -DataFactoryName "$FactoryName" | ToArray 2021-02-09T12:51:48.9704194Z 2021-02-09T12:51:48.9725050Z Position : At C:\Users\VssAdministrator\Documents\WindowsPowerShell\Modules\azure.datafactory.tools\0.50.0\public 2021-02-09T12:51:48.9725683Z \Get-AdfFromService.ps1:44 char:27 2021-02-09T12:51:48.9726650Z + ... dServices = Get-AzDataFactoryV2LinkedService -ResourceGroupName "$Res ... 2021-02-09T12:51:48.9727611Z + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2021-02-09T12:51:48.9728404Z HistoryId : 1 2021-02-09T12:51:48.9728867Z 2021-02-09T12:51:48.9761844Z Message : Error reading JObject from JsonReader. Current JsonReader item is not an object: String. Path 2021-02-09T12:51:48.9762602Z 'typeProperties.password', line 1, position 2808. 
2021-02-09T12:51:48.9763157Z StackTrace : at Newtonsoft.Json.Linq.JObject.Load(JsonReader reader, JsonLoadSettings settings) 2021-02-09T12:51:48.9763827Z at Microsoft.Rest.Serialization.PolymorphicDeserializeJsonConverter`1.ReadJson(JsonReader reader, 2021-02-09T12:51:48.9765487Z Type objectType, Object existingValue, JsonSerializer serializer) 2021-02-09T12:51:48.9766163Z at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.DeserializeConvertable(JsonConverter 2021-02-09T12:51:48.9767263Z converter, JsonReader reader, Type objectType, Object existingValue) 2021-02-09T12:51:48.9767915Z at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.Deserialize(JsonReader reader, Type 2021-02-09T12:51:48.9769016Z objectType, Boolean checkAdditionalContent) 2021-02-09T12:51:48.9769592Z at Newtonsoft.Json.Linq.JToken.ToObject(Type objectType, JsonSerializer jsonSerializer) 2021-02-09T12:51:48.9770967Z at Microsoft.Rest.Serialization.PolymorphicDeserializeJsonConverter`1.ReadJson(JsonReader reader, 2021-02-09T12:51:48.9771583Z Type objectType, Object existingValue, JsonSerializer serializer) 2021-02-09T12:51:48.9773008Z at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.DeserializeConvertable(JsonConverter 2021-02-09T12:51:48.9773594Z converter, JsonReader reader, Type objectType, Object existingValue) 2021-02-09T12:51:48.9774162Z at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.SetPropertyValue(JsonProperty 2021-02-09T12:51:48.9774923Z property, JsonConverter propertyConverter, JsonContainerContract containerContract, JsonProperty 2021-02-09T12:51:48.9775475Z containerProperty, JsonReader reader, Object target) 2021-02-09T12:51:48.9776073Z at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.PopulateObject(Object newObject, 2021-02-09T12:51:48.9777108Z JsonReader reader, JsonObjectContract contract, JsonProperty member, String id) 2021-02-09T12:51:48.9777710Z at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateObject(JsonReader reader, Type 2021-02-09T12:51:48.9779400Z objectType, JsonContract contract, JsonProperty member, JsonContainerContract containerContract, 2021-02-09T12:51:48.9780116Z JsonProperty containerMember, Object existingValue) 2021-02-09T12:51:48.9780660Z at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.PopulateList(IList list, JsonReader 2021-02-09T12:51:48.9781268Z reader, JsonArrayContract contract, JsonProperty containerProperty, String id) 2021-02-09T12:51:48.9781857Z at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateList(JsonReader reader, Type 2021-02-09T12:51:48.9782463Z objectType, JsonContract contract, JsonProperty member, Object existingValue, String id) 2021-02-09T12:51:48.9783049Z at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.SetPropertyValue(JsonProperty 2021-02-09T12:51:48.9783660Z property, JsonConverter propertyConverter, JsonContainerContract containerContract, JsonProperty 2021-02-09T12:51:48.9784214Z containerProperty, JsonReader reader, Object target) 2021-02-09T12:51:48.9784838Z at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.PopulateObject(Object newObject, 2021-02-09T12:51:48.9786199Z JsonReader reader, JsonObjectContract contract, JsonProperty member, String id) 2021-02-09T12:51:48.9787306Z at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateObject(JsonReader reader, Type 2021-02-09T12:51:48.9787994Z objectType, JsonContract contract, JsonProperty member, JsonContainerContract containerContract, 
2021-02-09T12:51:48.9788564Z JsonProperty containerMember, Object existingValue) 2021-02-09T12:51:48.9789129Z at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.Deserialize(JsonReader reader, Type 2021-02-09T12:51:48.9790112Z objectType, Boolean checkAdditionalContent) 2021-02-09T12:51:48.9790627Z at Newtonsoft.Json.JsonSerializer.DeserializeInternal(JsonReader reader, Type objectType) 2021-02-09T12:51:48.9791202Z at Microsoft.Rest.Serialization.SafeJsonConvert.DeserializeObject[T](String json, 2021-02-09T12:51:48.9791688Z JsonSerializerSettings settings) 2021-02-09T12:51:48.9794958Z at Microsoft.Azure.Management.DataFactory.LinkedServicesOperations.<ListByFactoryWithHttpMessagesAs 2021-02-09T12:51:48.9795499Z ync>d__5.MoveNext() 2021-02-09T12:51:48.9796331Z Exception : Newtonsoft.Json.JsonReaderException 2021-02-09T12:51:48.9796757Z InvocationInfo : {Get-AzDataFactoryV2LinkedService} 2021-02-09T12:51:48.9797557Z Line : $adf.LinkedServices = Get-AzDataFactoryV2LinkedService -ResourceGroupName "$ResourceGroupName" 2021-02-09T12:51:48.9798064Z -DataFactoryName "$FactoryName" | ToArray 2021-02-09T12:51:48.9799159Z 2021-02-09T12:51:48.9815793Z Position : At C:\Users\VssAdministrator\Documents\WindowsPowerShell\Modules\azure.datafactory.tools\0.50.0\public 2021-02-09T12:51:48.9816368Z \Get-AdfFromService.ps1:44 char:27 2021-02-09T12:51:48.9817622Z + ... dServices = Get-AzDataFactoryV2LinkedService -ResourceGroupName "$Res ... 2021-02-09T12:51:48.9818480Z + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2021-02-09T12:51:48.9819192Z HistoryId : 1 2021-02-09T12:51:48.9819621Z 2021-02-09T12:51:48.9845016Z Message : Cannot find a variable with the name 'ADF_FOLDERS'. 2021-02-09T12:51:48.9845483Z StackTrace : 2021-02-09T12:51:48.9846654Z Exception : System.Management.Automation.ItemNotFoundException 2021-02-09T12:51:48.9847045Z InvocationInfo : {Get-Variable} 2021-02-09T12:51:48.9847478Z Line : if (!(Get-Variable ADF_FOLDERS -ErrorAction:SilentlyContinue)) { 2021-02-09T12:51:48.9847891Z 2021-02-09T12:51:48.9865388Z Position : At C:\Users\VssAdministrator\Documents\WindowsPowerShell\Modules\azure.datafactory.tools\0.50.0\privat 2021-02-09T12:51:48.9866003Z e\AdfObject.class.ps1:75 char:7 2021-02-09T12:51:48.9866994Z + if (!(Get-Variable ADF_FOLDERS -ErrorAction:SilentlyContinue)) { 2021-02-09T12:51:48.9867893Z + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2021-02-09T12:51:48.9868602Z HistoryId : 1 ```
non_main
unable to deserialize response for get make sure you are able to reproduce this issue on the latest released version of az please search the existing issues to see if there has been a similar issue filed for issue related to importing a module please refer to our troubleshooting guide description hi nowinskik discovered this bug here when i run get against my adf i get an error due to the linked service below if you need any more information please let me know thanks name rest authbasic gen type microsoft datafactory factories linkedservices properties parameters baseurl type string defaultvalue authsecret type string defaultvalue none annotations type restservice typeproperties url linkedservice baseurl enableservercertificatevalidation true authenticationtype basic username password linkedservice authsecret environment data please run psversiontable and paste the output in the below code block if running the docker container image indicate the tag of the image used and the version of docker engine name value psversion psedition desktop pscompatibleversions buildversion clrversion wsmanstackversion psremotingprotocolversion serializationversion module versions i am running this from devops microsoft hosted agent the only thing i install is powershell install module name azure datafactory tools scope currentuser force import module name azure datafactory tools debug output set debugpreference continue before running the repro and paste the resulting debug stream in the below code block attention be sure to remove any sensitive information that may be in the logs azure data factory instance loaded datasets object s loaded integrationruntimes object s loaded error record get unable to deserialize the response at c users vssadministrator documents windowspowershell modules azure datafactory tools public get adffromservice char dservices get resourcegroupname res categoryinfo closeerror serializationexception fullyqualifiederrorid microsoft azure commands getazuredatafactorylinkedservicecommand script stack trace at get adffromservice c users vssadministrator documents windowspowershell modules azure datafactory tools public get adffromservice line at d a temp line at line exception microsoft rest serializationexception unable to deserialize the response newtonsoft json jsonreaderexception error reading jobject from jsonreader current jsonreader item is not an object string path typeproperties password line position at newtonsoft json linq jobject load jsonreader reader jsonloadsettings settings at microsoft rest serialization polymorphicdeserializejsonconverter readjson jsonreader reader type objecttype object existingvalue jsonserializer serializer at newtonsoft json serialization jsonserializerinternalreader deserializeconvertable jsonconverter converter jsonreader reader type objecttype object existingvalue at newtonsoft json serialization jsonserializerinternalreader deserialize jsonreader reader type objecttype boolean checkadditionalcontent at newtonsoft json linq jtoken toobject type objecttype jsonserializer jsonserializer at microsoft rest serialization polymorphicdeserializejsonconverter readjson jsonreader reader type objecttype object existingvalue jsonserializer serializer at newtonsoft json serialization jsonserializerinternalreader deserializeconvertable jsonconverter converter jsonreader reader type objecttype object existingvalue at newtonsoft json serialization jsonserializerinternalreader setpropertyvalue jsonproperty property jsonconverter propertyconverter 
jsoncontainercontract containercontract jsonproperty containerproperty jsonreader reader object target at newtonsoft json serialization jsonserializerinternalreader populateobject object newobject jsonreader reader jsonobjectcontract contract jsonproperty member string id at newtonsoft json serialization jsonserializerinternalreader createobject jsonreader reader type objecttype jsoncontract contract jsonproperty member jsoncontainercontract containercontract jsonproperty containermember object existingvalue at newtonsoft json serialization jsonserializerinternalreader populatelist ilist list jsonreader reader jsonarraycontract contract jsonproperty containerproperty string id at newtonsoft json serialization jsonserializerinternalreader createlist jsonreader reader type objecttype jsoncontract contract jsonproperty member object existingvalue string id at newtonsoft json serialization jsonserializerinternalreader setpropertyvalue jsonproperty property jsonconverter propertyconverter jsoncontainercontract containercontract jsonproperty containerproperty jsonreader reader object target at newtonsoft json serialization jsonserializerinternalreader populateobject object newobject jsonreader reader jsonobjectcontract contract jsonproperty member string id at newtonsoft json serialization jsonserializerinternalreader createobject jsonreader reader type objecttype jsoncontract contract jsonproperty member jsoncontainercontract containercontract jsonproperty containermember object existingvalue at newtonsoft json serialization jsonserializerinternalreader deserialize jsonreader reader type objecttype boolean checkadditionalcontent at newtonsoft json jsonserializer deserializeinternal jsonreader reader type objecttype at microsoft rest serialization safejsonconvert deserializeobject string json jsonserializersettings settings at microsoft azure management datafactory linkedservicesoperations d movenext end of inner exception stack trace at microsoft azure management datafactory linkedservicesoperations d movenext end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at microsoft azure management datafactory linkedservicesoperationsextensions d movenext end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at microsoft azure management datafactory linkedservicesoperationsextensions listbyfactory ilinkedservicesoperations operations string resourcegroupname string factoryname at microsoft azure commands datafactoryclient listlinkedservices adfentityfilteroptions filteroptions at microsoft azure commands datafactoryclient filterpslinkedservices adfentityfilteroptions filteroptions at microsoft azure commands getazuredatafactorylinkedservicecommand executecmdlet at microsoft windowsazure commands utilities common azurepscmdlet processrecord unable to deserialize the response processed vso unable to deserialize the response exit code leaving invoke vststool powershell exited with code processed vso powershell exited with code processed vso error detected loading module from path d a tasks azurepowershell effb ps modules vstsazurehelpers vstsazurehelpers overriding global debugpreference from continue to silentlycontinue loading resource strings from d a 
tasks azurepowershell effb ps modules vstsazurehelpers module json loaded strings system culture en us loading resource strings from d a tasks azurepowershell effb ps modules vstsazurehelpers strings resources resjson en us resources resjson loaded strings error output please run resolve azerror and paste the output in the below code block attention be sure to remove any sensitive information that may be in the logs warning upcoming breaking changes in the cmdlet resolve azerror the resolve error alias will be removed in a future release please change any scripts that use this alias to use resolve azerror instead note go to for steps to suppress this breaking change warning and other information on breaking changes in azure powershell historyid message unable to deserialize the response stacktrace at microsoft azure management datafactory linkedservicesoperations listbyfactorywithhttpmessagesas ync d movenext end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at microsoft azure management datafactory linkedservicesoperationsextensions d movenext end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at microsoft azure management datafactory linkedservicesoperationsextensions listbyfactory ilinkeds ervicesoperations operations string resourcegroupname string factoryname at microsoft azure commands datafactoryclient listlinkedservices adfentityfilteroptions filteroptions at microsoft azure commands datafactoryclient filterpslinkedservices adfentityfiltero ptions filteroptions at microsoft azure commands getazuredatafactorylinkedservicecommand executecmdlet at microsoft windowsazure commands utilities common azurepscmdlet processrecord exception microsoft rest serializationexception invocationinfo get line adf linkedservices get resourcegroupname resourcegroupname datafactoryname factoryname toarray position at c users vssadministrator documents windowspowershell modules azure datafactory tools public get adffromservice char dservices get resourcegroupname res historyid message error reading jobject from jsonreader current jsonreader item is not an object string path typeproperties password line position stacktrace at newtonsoft json linq jobject load jsonreader reader jsonloadsettings settings at microsoft rest serialization polymorphicdeserializejsonconverter readjson jsonreader reader type objecttype object existingvalue jsonserializer serializer at newtonsoft json serialization jsonserializerinternalreader deserializeconvertable jsonconverter converter jsonreader reader type objecttype object existingvalue at newtonsoft json serialization jsonserializerinternalreader deserialize jsonreader reader type objecttype boolean checkadditionalcontent at newtonsoft json linq jtoken toobject type objecttype jsonserializer jsonserializer at microsoft rest serialization polymorphicdeserializejsonconverter readjson jsonreader reader type objecttype object existingvalue jsonserializer serializer at newtonsoft json serialization jsonserializerinternalreader deserializeconvertable jsonconverter converter jsonreader reader type objecttype object existingvalue at newtonsoft json serialization jsonserializerinternalreader setpropertyvalue jsonproperty property 
jsonconverter propertyconverter jsoncontainercontract containercontract jsonproperty containerproperty jsonreader reader object target at newtonsoft json serialization jsonserializerinternalreader populateobject object newobject jsonreader reader jsonobjectcontract contract jsonproperty member string id at newtonsoft json serialization jsonserializerinternalreader createobject jsonreader reader type objecttype jsoncontract contract jsonproperty member jsoncontainercontract containercontract jsonproperty containermember object existingvalue at newtonsoft json serialization jsonserializerinternalreader populatelist ilist list jsonreader reader jsonarraycontract contract jsonproperty containerproperty string id at newtonsoft json serialization jsonserializerinternalreader createlist jsonreader reader type objecttype jsoncontract contract jsonproperty member object existingvalue string id at newtonsoft json serialization jsonserializerinternalreader setpropertyvalue jsonproperty property jsonconverter propertyconverter jsoncontainercontract containercontract jsonproperty containerproperty jsonreader reader object target at newtonsoft json serialization jsonserializerinternalreader populateobject object newobject jsonreader reader jsonobjectcontract contract jsonproperty member string id at newtonsoft json serialization jsonserializerinternalreader createobject jsonreader reader type objecttype jsoncontract contract jsonproperty member jsoncontainercontract containercontract jsonproperty containermember object existingvalue at newtonsoft json serialization jsonserializerinternalreader deserialize jsonreader reader type objecttype boolean checkadditionalcontent at newtonsoft json jsonserializer deserializeinternal jsonreader reader type objecttype at microsoft rest serialization safejsonconvert deserializeobject string json jsonserializersettings settings at microsoft azure management datafactory linkedservicesoperations listbyfactorywithhttpmessagesas ync d movenext exception newtonsoft json jsonreaderexception invocationinfo get line adf linkedservices get resourcegroupname resourcegroupname datafactoryname factoryname toarray position at c users vssadministrator documents windowspowershell modules azure datafactory tools public get adffromservice char dservices get resourcegroupname res historyid message cannot find a variable with the name adf folders stacktrace exception system management automation itemnotfoundexception invocationinfo get variable line if get variable adf folders erroraction silentlycontinue position at c users vssadministrator documents windowspowershell modules azure datafactory tools privat e adfobject class char if get variable adf folders erroraction silentlycontinue historyid
0
302,741
22,840,585,792
IssuesEvent
2022-07-12 21:21:33
dagger/dagger
https://api.github.com/repos/dagger/dagger
closed
Missing a guide to get started from scratch
area/documentation kind/dx
### What is the issue? From the repository I was trying to create a new project by following https://github.com/dagger/dagger/blob/v0.2.6/docs/learn/1003-get-started.md; thanks to @jpadams for pointing to the updated articles. However, the current articles do not cover the instructions to create a project from an empty directory, as the deprecated article did. One option here would be to add a "your first project" guide or similar to the Getting Started or Guides section, with the same intent as the older learn/1003-get-started article but with the content updated. Another option would be to take advantage of the current "Migrate from Dagger 0.1" article and explain what is no longer there and how it was replaced; the downside of this approach is that someone new to dagger would ignore that article because they might be looking for a getting-started article rather than a migration one. If it is considered relevant, I can update the deprecated article to cover the new options available; I am just looking for a different point of view around this idea.
1.0
Missing a guide to get started from scratch - ### What is the issue? From the repository I was trying to create a new project by following https://github.com/dagger/dagger/blob/v0.2.6/docs/learn/1003-get-started.md; thanks to @jpadams for pointing to the updated articles. However, the current articles do not cover the instructions to create a project from an empty directory, as the deprecated article did. One option here would be to add a "your first project" guide or similar to the Getting Started or Guides section, with the same intent as the older learn/1003-get-started article but with the content updated. Another option would be to take advantage of the current "Migrate from Dagger 0.1" article and explain what is no longer there and how it was replaced; the downside of this approach is that someone new to dagger would ignore that article because they might be looking for a getting-started article rather than a migration one. If it is considered relevant, I can update the deprecated article to cover the new options available; I am just looking for a different point of view around this idea.
non_main
missing a guide to get started from scratch what is the issue from the repository i was trying to create a new project by following thanks to jpadams for pointing to the updated articles however the current articles does not cover the instructions to create a project from an empty directory as the deprecated article did one option here would be add a your first project or something to the getting started or guides section with the same intention from the older learn get started article but with the content updated other option would be take advantage of the current migrate from dagger article and explain what is no longer there and how it was replaced the downside of this approach is that someone new to dagger would ignore that article because they might be looking for a getting started article rather than a migration one if it is considered relevant i can update the deprecated article to cover the new options available just looking for a different point of view around this idea
0
274,877
23,874,644,959
IssuesEvent
2022-09-07 17:49:00
redpanda-data/redpanda
https://api.github.com/repos/redpanda-data/redpanda
closed
A producer times out on init_producer_id
kind/bug area/tests area/transactions
### Version & Environment redpanda_0.0.0~20220831git8f05aa9-1_amd64.deb ### What went wrong? Chaos tests time out https://github.com/redpanda-data/redpanda-jepsen/issues/16. A producer times out on init_producer_id. ### How to reproduce the issue? Run tx-money chaos scenario.
1.0
A producer times out on init_producer_id - ### Version & Environment redpanda_0.0.0~20220831git8f05aa9-1_amd64.deb ### What went wrong? Chaos tests time out https://github.com/redpanda-data/redpanda-jepsen/issues/16. A producer times out on init_producer_id. ### How to reproduce the issue? Run tx-money chaos scenario.
non_main
a producer times out on init producer id version environment redpanda deb what went wrong chaos tests time out a producer times out on init producer id how to reproduce the issue run tx money chaos scenario
0
52,194
13,211,405,555
IssuesEvent
2020-08-15 22:54:50
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
opened
[photospline] Modern C++ Issues (Trac #1831)
Incomplete Migration Migrated from Trac combo reconstruction defect
<details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1831">https://code.icecube.wisc.edu/projects/icecube/ticket/1831</a>, reported by olivasand owned by jvansanten</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:12:38", "_ts": "1550067158057333", "description": "A few of the bots complain about linking photospline after the cpp11 switch was thrown.\n\nhttp://builds.icecube.wisc.edu/builders/OS%20X%20El%20Capitan/builds/851/steps/compile/logs/stdio", "reporter": "olivas", "cc": "", "resolution": "fixed", "time": "2016-08-19T18:31:51", "component": "combo reconstruction", "summary": "[photospline] Modern C++ Issues", "priority": "blocker", "keywords": "", "milestone": "", "owner": "jvansanten", "type": "defect" } ``` </p> </details>
1.0
[photospline] Modern C++ Issues (Trac #1831) - <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1831">https://code.icecube.wisc.edu/projects/icecube/ticket/1831</a>, reported by olivasand owned by jvansanten</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:12:38", "_ts": "1550067158057333", "description": "A few of the bots complain about linking photospline after the cpp11 switch was thrown.\n\nhttp://builds.icecube.wisc.edu/builders/OS%20X%20El%20Capitan/builds/851/steps/compile/logs/stdio", "reporter": "olivas", "cc": "", "resolution": "fixed", "time": "2016-08-19T18:31:51", "component": "combo reconstruction", "summary": "[photospline] Modern C++ Issues", "priority": "blocker", "keywords": "", "milestone": "", "owner": "jvansanten", "type": "defect" } ``` </p> </details>
non_main
modern c issues trac migrated from json status closed changetime ts description a few of the bots complain about linking photospline after the switch was thrown n n reporter olivas cc resolution fixed time component combo reconstruction summary modern c issues priority blocker keywords milestone owner jvansanten type defect
0
2,684
9,300,829,688
IssuesEvent
2019-03-23 16:53:36
RalfKoban/MiKo-Analyzers
https://api.github.com/repos/RalfKoban/MiKo-Analyzers
closed
Parameters shall not be reserved for future usage
Area: analyzer Area: maintainability feature next
Parameters whose description contains "Reserved for future usage.", "reserved" or "future" should not be used. See "DO NOT use reserved parameters. " in https://docs.microsoft.com/en-us/dotnet/standard/design-guidelines/parameter-design
True
Parameters shall not be reserved for future usage - Parameters whose description contains "Reserved for future usage.", "reserved" or "future" should not be used. See "DO NOT use reserved parameters. " in https://docs.microsoft.com/en-us/dotnet/standard/design-guidelines/parameter-design
main
parameters shall not be reserved for future usage parameters whose description contains reserved for future usage reserved or future should not be used see do not use reserved parameters in
1
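The rule in this record is essentially a text check over parameter documentation. A rough sketch of that heuristic in Python (the real MiKo rule is a Roslyn analyzer over C# XML doc comments; the phrase list and matching below are assumptions, not its implementation):

```python
# Heuristic sketch of the "no reserved-for-future-usage parameters" rule:
# scan parameter descriptions for the phrases the record calls out.
import re

FLAGGED = re.compile(
    r"\breserved for future usage\b|\breserved\b|\bfuture\b",
    re.IGNORECASE,
)

def violating_parameters(param_docs: dict[str, str]) -> list[str]:
    """Return parameter names whose description suggests a reserved slot."""
    return [name for name, doc in param_docs.items() if FLAGGED.search(doc)]

# Example usage with hypothetical documentation strings:
docs = {
    "value": "The value to write.",
    "options": "Reserved for future usage.",
}
print(violating_parameters(docs))  # -> ['options']
```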
3,564
14,269,538,521
IssuesEvent
2020-11-21 01:54:50
carbon-design-system/carbon
https://api.github.com/repos/carbon-design-system/carbon
closed
Cannot put Tooltip on OverflowMenu custom trigger icon
component: overflow-menu proposal: open status: waiting for maintainer response 💬 type: bug 🐛
## What package(s) are you using? - [ ] `carbon-components` - [x ] `carbon-components-react` ## Detailed description > Describe in detail the issue you're having. I cannot put a Tooltip on an OverflowMenu icon. > Is this issue related to a specific component? OverflowMenu and TooltipIcon > What did you expect to happen? What happened instead? What would you like to > see changed? I expected this to happen without errors: <img width="339" alt="Screen Shot 2020-11-12 at 1 51 56 AM" src="https://user-images.githubusercontent.com/15684622/98906284-cc809e00-248a-11eb-9275-0f0cce21e261.png"> Instead, I got this error in the console: ``` Warning: validateDOMNesting(...): <button> cannot appear as a descendant of <button>. ``` I would love to have the ability to put a Tooltip on an OverflowMenu button. > What browser are you working in? Chrome >My Code ``` <OverflowMenu style={{height: '3rem', maxHeight: '3rem', width: '3rem', background: '#f4f4f4'}} renderIcon={() => <TooltipIcon direction={"top"} align={"center"} style={{borderBottom: '0'}} tooltipText={"Status"}> <EventSchedule20 /> </TooltipIcon>} > {menuItems} </OverflowMenu> ``` Thank you very much for your time! Hopefully there's a workaround or a solution I'm missing.
True
Cannot put Tooltip on OverflowMenu custom trigger icon - ## What package(s) are you using? - [ ] `carbon-components` - [x ] `carbon-components-react` ## Detailed description > Describe in detail the issue you're having. I cannot put a Tooltip on an OverflowMenu icon. > Is this issue related to a specific component? OverflowMenu and TooltipIcon > What did you expect to happen? What happened instead? What would you like to > see changed? I expected this to happen without errors: <img width="339" alt="Screen Shot 2020-11-12 at 1 51 56 AM" src="https://user-images.githubusercontent.com/15684622/98906284-cc809e00-248a-11eb-9275-0f0cce21e261.png"> Instead, I got this error in the console: ``` Warning: validateDOMNesting(...): <button> cannot appear as a descendant of <button>. ``` I would love to have the ability to put a Tooltip on an OverflowMenu button. > What browser are you working in? Chrome >My Code ``` <OverflowMenu style={{height: '3rem', maxHeight: '3rem', width: '3rem', background: '#f4f4f4'}} renderIcon={() => <TooltipIcon direction={"top"} align={"center"} style={{borderBottom: '0'}} tooltipText={"Status"}> <EventSchedule20 /> </TooltipIcon>} > {menuItems} </OverflowMenu> ``` Thank you very much for your time! Hopefully there's a workaround or a solution I'm missing.
main
cannot put tooltip on overflowmenu custom trigger icon what package s are you using carbon components carbon components react detailed description describe in detail the issue you re having i cannot put a tooltip on an overflowmenu icon is this issue related to a specific component overflowmenu and tooltipicon what did you expect to happen what happened instead what would you like to see changed i expected this to happen without errors img width alt screen shot at am src instead i got this error in the console warning validatedomnesting cannot appear as a descendant of i would love to have the ability to put a tooltip on an overflowmenu button what browser are you working in chrome my code overflowmenu style height maxheight width background rendericon menuitems thank you very much for your time hopefully there s a workaround or a solution i m missing
1
619
4,112,681,570
IssuesEvent
2016-06-07 11:29:18
caskroom/homebrew-cask
https://api.github.com/repos/caskroom/homebrew-cask
closed
Bug report: Cannot fully uninstall Avast
awaiting maintainer feedback bug cask
### Description of issue It appears there's something wrong with the way Cask runs [Avast's](https://github.com/caskroom/homebrew-cask/blob/42437e5fcd93b1a3c95376794e6d5eee2b096160/Casks/avast.rb) `uninstall.sh`. Running: `sudo /Library/Application\ Support/Avast/hub/uninstall.sh` manually seems to do the trick, but it leaves some leftovers in `/opt/homebrew-cask/Caskroom/avast/` that include `com.avast.uninstall.app`. ### Output of `brew cask zap avast --verbose` ``` brew cask zap avast --verbose ==> Implied "brew cask uninstall avast" ==> Running uninstall process for avast; your password may be necessary ==> Running uninstall script /Library/Application Support/Avast/hub/uninstall.sh Error: Command failed to execute! ==> Failed command: ["/usr/bin/sudo", "-E", "--", "#<Pathname:/Library/Application Support/Avast/hub/uninstall.sh>"] ==> Output of failed command: ==> Exit status of failed command: #<Process::Status: pid 42237 exit 255> Error: Kernel.exit ``` ### Output of `brew doctor` ``` brew doctor Your system is ready to brew. ``` ### Output of `brew cask doctor` ``` brew cask doctor ==> OS X Release: 10.10 ==> OS X Release with Patchlevel: 10.10.5 ==> Hardware Architecture: intel-64 ==> Ruby Version: 2.0.0-p481 ==> Ruby Path: /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/bin/ruby ==> Homebrew Version: Homebrew 0.9.9 (git revision 2cd8; last commit 2016-06-05) Homebrew/homebrew-core (git revision 9fbd; last commit 2016-06-06) ==> Homebrew Executable Path: /usr/local/bin/brew ==> Homebrew Cellar Path: /usr/local/Cellar ==> Homebrew Repository Path: /usr/local ==> Homebrew Origin: https://github.com/Homebrew/brew.git ==> Homebrew-cask Version: 0.60.0 (git revision da17c; last commit 7 hours ago) ==> Homebrew-cask Install Location: <NONE> ==> Homebrew-cask Staging Location: /opt/homebrew-cask/Caskroom ==> Homebrew-cask Cached Downloads: /Users/designorant/Library/Caches/Homebrew /Users/designorant/Library/Caches/Homebrew/Casks 2 files, 167.5M (warning: run "brew cask cleanup") ==> Homebrew-cask Default Tap Path: /usr/local/Library/Taps/caskroom/homebrew-cask ==> Homebrew-cask Alternate Cask Taps: /usr/local/Library/Taps/caskroom/homebrew-versions ==> Homebrew-cask Default Tap Cask Count: 3198 ==> Contents of $LOAD_PATH: /usr/local/Library/Taps/caskroom/homebrew-cask/lib /usr/local/Library/Homebrew /Library/Ruby/Site/2.0.0 /Library/Ruby/Site/2.0.0/x86_64-darwin14 /Library/Ruby/Site/2.0.0/universal-darwin14 /Library/Ruby/Site /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0 /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0/x86_64-darwin14 /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0/universal-darwin14 /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0 /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/x86_64-darwin14 /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/universal-darwin14 ==> Contents of $RUBYLIB Environment Variable: <NONE> ==> Contents of $RUBYOPT Environment Variable: <NONE> ==> Contents of $RUBYPATH Environment Variable: <NONE> ==> Contents of $RBENV_VERSION Environment Variable: <NONE> ==> Contents of $CHRUBY_VERSION Environment Variable: <NONE> ==> Contents of $GEM_HOME Environment Variable: <NONE> ==> Contents of $GEM_PATH Environment Variable: <NONE> ==> Contents of $BUNDLE_PATH Environment 
Variable: <NONE> ==> Contents of $PATH Environment Variable: PATH="/usr/local/opt/rbenv/shims:/usr/local/opt/rbenv/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/Library/Taps/caskroom/homebrew-cask/cmd:/usr/local/Library/Taps/homebrew/homebrew-services/cmd:/usr/local/Library/ENV/scm" ==> Contents of $SHELL Environment Variable: SHELL="/bin/zsh" ==> Contents of Locale Environment Variables: LANG="en_GB.UTF-8" LC_CTYPE="en_GB.UTF-8" ==> Running As Privileged User: No ```
True
Bug report: Cannot fully uninstall Avast - ### Description of issue It appears there's something wrong with the way Cask runs [Avast's](https://github.com/caskroom/homebrew-cask/blob/42437e5fcd93b1a3c95376794e6d5eee2b096160/Casks/avast.rb) `uninstall.sh`. Running: `sudo /Library/Application\ Support/Avast/hub/uninstall.sh` manually seems to do the trick, but it leaves some leftovers in `/opt/homebrew-cask/Caskroom/avast/` that include `com.avast.uninstall.app`. ### Output of `brew cask zap avast --verbose` ``` brew cask zap avast --verbose ==> Implied "brew cask uninstall avast" ==> Running uninstall process for avast; your password may be necessary ==> Running uninstall script /Library/Application Support/Avast/hub/uninstall.sh Error: Command failed to execute! ==> Failed command: ["/usr/bin/sudo", "-E", "--", "#<Pathname:/Library/Application Support/Avast/hub/uninstall.sh>"] ==> Output of failed command: ==> Exit status of failed command: #<Process::Status: pid 42237 exit 255> Error: Kernel.exit ``` ### Output of `brew doctor` ``` brew doctor Your system is ready to brew. ``` ### Output of `brew cask doctor` ``` brew cask doctor ==> OS X Release: 10.10 ==> OS X Release with Patchlevel: 10.10.5 ==> Hardware Architecture: intel-64 ==> Ruby Version: 2.0.0-p481 ==> Ruby Path: /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/bin/ruby ==> Homebrew Version: Homebrew 0.9.9 (git revision 2cd8; last commit 2016-06-05) Homebrew/homebrew-core (git revision 9fbd; last commit 2016-06-06) ==> Homebrew Executable Path: /usr/local/bin/brew ==> Homebrew Cellar Path: /usr/local/Cellar ==> Homebrew Repository Path: /usr/local ==> Homebrew Origin: https://github.com/Homebrew/brew.git ==> Homebrew-cask Version: 0.60.0 (git revision da17c; last commit 7 hours ago) ==> Homebrew-cask Install Location: <NONE> ==> Homebrew-cask Staging Location: /opt/homebrew-cask/Caskroom ==> Homebrew-cask Cached Downloads: /Users/designorant/Library/Caches/Homebrew /Users/designorant/Library/Caches/Homebrew/Casks 2 files, 167.5M (warning: run "brew cask cleanup") ==> Homebrew-cask Default Tap Path: /usr/local/Library/Taps/caskroom/homebrew-cask ==> Homebrew-cask Alternate Cask Taps: /usr/local/Library/Taps/caskroom/homebrew-versions ==> Homebrew-cask Default Tap Cask Count: 3198 ==> Contents of $LOAD_PATH: /usr/local/Library/Taps/caskroom/homebrew-cask/lib /usr/local/Library/Homebrew /Library/Ruby/Site/2.0.0 /Library/Ruby/Site/2.0.0/x86_64-darwin14 /Library/Ruby/Site/2.0.0/universal-darwin14 /Library/Ruby/Site /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0 /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0/x86_64-darwin14 /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0/universal-darwin14 /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0 /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/x86_64-darwin14 /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/universal-darwin14 ==> Contents of $RUBYLIB Environment Variable: <NONE> ==> Contents of $RUBYOPT Environment Variable: <NONE> ==> Contents of $RUBYPATH Environment Variable: <NONE> ==> Contents of $RBENV_VERSION Environment Variable: <NONE> ==> Contents of $CHRUBY_VERSION Environment Variable: <NONE> ==> Contents of $GEM_HOME Environment Variable: <NONE> ==> Contents of $GEM_PATH Environment Variable: <NONE> ==> 
Contents of $BUNDLE_PATH Environment Variable: <NONE> ==> Contents of $PATH Environment Variable: PATH="/usr/local/opt/rbenv/shims:/usr/local/opt/rbenv/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/Library/Taps/caskroom/homebrew-cask/cmd:/usr/local/Library/Taps/homebrew/homebrew-services/cmd:/usr/local/Library/ENV/scm" ==> Contents of $SHELL Environment Variable: SHELL="/bin/zsh" ==> Contents of Locale Environment Variables: LANG="en_GB.UTF-8" LC_CTYPE="en_GB.UTF-8" ==> Running As Privileged User: No ```
main
bug report cannot fully uninstall avast description of issue it appears there s something wrong with the way cask runs uninstall sh running sudo library application support avast hub uninstall sh manually seems to do the trick but it leaves some leftovers in opt homebrew cask caskroom avast that include com avast uninstall app output of brew cask zap avast verbose brew cask zap avast verbose implied brew cask uninstall avast running uninstall process for avast your password may be necessary running uninstall script library application support avast hub uninstall sh error command failed to execute failed command output of failed command exit status of failed command error kernel exit output of brew doctor brew doctor your system is ready to brew output of brew cask doctor brew cask doctor os x release os x release with patchlevel hardware architecture intel ruby version ruby path system library frameworks ruby framework versions usr bin ruby homebrew version homebrew git revision last commit homebrew homebrew core git revision last commit homebrew executable path usr local bin brew homebrew cellar path usr local cellar homebrew repository path usr local homebrew origin homebrew cask version git revision last commit hours ago homebrew cask install location homebrew cask staging location opt homebrew cask caskroom homebrew cask cached downloads users designorant library caches homebrew users designorant library caches homebrew casks files warning run brew cask cleanup homebrew cask default tap path usr local library taps caskroom homebrew cask homebrew cask alternate cask taps usr local library taps caskroom homebrew versions homebrew cask default tap cask count contents of load path usr local library taps caskroom homebrew cask lib usr local library homebrew library ruby site library ruby site library ruby site universal library ruby site system library frameworks ruby framework versions usr lib ruby vendor ruby system library frameworks ruby framework versions usr lib ruby vendor ruby system library frameworks ruby framework versions usr lib ruby vendor ruby universal system library frameworks ruby framework versions usr lib ruby vendor ruby system library frameworks ruby framework versions usr lib ruby system library frameworks ruby framework versions usr lib ruby system library frameworks ruby framework versions usr lib ruby universal contents of rubylib environment variable contents of rubyopt environment variable contents of rubypath environment variable contents of rbenv version environment variable contents of chruby version environment variable contents of gem home environment variable contents of gem path environment variable contents of bundle path environment variable contents of path environment variable path usr local opt rbenv shims usr local opt rbenv bin usr local bin usr bin bin usr sbin sbin usr local library taps caskroom homebrew cask cmd usr local library taps homebrew homebrew services cmd usr local library env scm contents of shell environment variable shell bin zsh contents of locale environment variables lang en gb utf lc ctype en gb utf running as privileged user no
1
1,028
4,822,175,623
IssuesEvent
2016-11-05 18:25:15
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
pip does not working with virtualenv
affects_2.2 bug_report waiting_on_maintainer
<!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> pip ##### ANSIBLE VERSION <!--- Paste verbatim output from โ€œansible --versionโ€ between quotes below --> ``` 2.2.0.0 ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> None ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say โ€œN/Aโ€ for anything that is not platform-specific. --> Debian Jessie ##### SUMMARY <!--- Explain the problem briefly --> PIP does not work with pyvenv-3.5 after update to ansible 2.2 ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> <!--- Paste example playbooks or commands between quotes below --> Task ``` - name: Update setuptools and pip become: true become_user: "www-data" pip: name={{ item }} state=latest virtualenv=/www/env/site virtualenv_command=pyvenv-3.5 with_items: - pip - setuptools - wheel ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> Successful update of pip and others in /www/env/site virtualenv ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> Fails on update system pip2 <!--- Paste verbatim command output between quotes below --> ``` failed: [test] (item=setuptools) => {"cmd": "/usr/local/bin/pip2 install -U setuptools", "failed": true, "item": "setuptools", "msg": "stdout: Collecting setuptools\n Using cached setuptools-28.7.1-py2.py3-none-any.whl\nInstalling collec ted packages: setuptools\n Found existing installation: setuptools 25.1.6\n Uninstalling setuptools-25.1.6:\n\n:stderr: Exception:\nTraceback (most recent call last):\n File \"/usr/local/lib/python2.7/dist-packages/pip-8.1.2-py2.7.eg g/pip/basecommand.py\", line 215, in main\n status = self.run(options, args)\n File \"/usr/local/lib/python2.7/dist-packages/pip-8.1.2-py2.7.egg/pip/commands/install.py\", line 317, in run\n prefix=options.prefix_path,\n File \"/u sr/local/lib/python2.7/dist-packages/pip-8.1.2-py2.7.egg/pip/req/req_set.py\", line 736, in install\n requirement.uninstall(auto_confirm=True)\n File \"/usr/local/lib/python2.7/dist-packages/pip-8.1.2-py2.7.egg/pip/req/req_install.py\ ", line 742, in uninstall\n paths_to_remove.remove(auto_confirm)\n File \"/usr/local/lib/python2.7/dist-packages/pip-8.1.2-py2.7.egg/pip/req/req_uninstall.py\", line 115, in remove\n renames(path, new_path)\n File \"/usr/local/lib /python2.7/dist-packages/pip-8.1.2-py2.7.egg/pip/utils/__init__.py\", line 267, in renames\n shutil.move(old, new)\n File \"/usr/lib/python2.7/shutil.py\", line 303, in move\n os.unlink(src)\nOSError: [Errno 13] Permission denied: '/usr/local/bin/easy_install'\n"} ```
True
pip does not working with virtualenv - <!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> pip ##### ANSIBLE VERSION <!--- Paste verbatim output from โ€œansible --versionโ€ between quotes below --> ``` 2.2.0.0 ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> None ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say โ€œN/Aโ€ for anything that is not platform-specific. --> Debian Jessie ##### SUMMARY <!--- Explain the problem briefly --> PIP does not work with pyvenv-3.5 after update to ansible 2.2 ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> <!--- Paste example playbooks or commands between quotes below --> Task ``` - name: Update setuptools and pip become: true become_user: "www-data" pip: name={{ item }} state=latest virtualenv=/www/env/site virtualenv_command=pyvenv-3.5 with_items: - pip - setuptools - wheel ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> Successful update of pip and others in /www/env/site virtualenv ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> Fails on update system pip2 <!--- Paste verbatim command output between quotes below --> ``` failed: [test] (item=setuptools) => {"cmd": "/usr/local/bin/pip2 install -U setuptools", "failed": true, "item": "setuptools", "msg": "stdout: Collecting setuptools\n Using cached setuptools-28.7.1-py2.py3-none-any.whl\nInstalling collec ted packages: setuptools\n Found existing installation: setuptools 25.1.6\n Uninstalling setuptools-25.1.6:\n\n:stderr: Exception:\nTraceback (most recent call last):\n File \"/usr/local/lib/python2.7/dist-packages/pip-8.1.2-py2.7.eg g/pip/basecommand.py\", line 215, in main\n status = self.run(options, args)\n File \"/usr/local/lib/python2.7/dist-packages/pip-8.1.2-py2.7.egg/pip/commands/install.py\", line 317, in run\n prefix=options.prefix_path,\n File \"/u sr/local/lib/python2.7/dist-packages/pip-8.1.2-py2.7.egg/pip/req/req_set.py\", line 736, in install\n requirement.uninstall(auto_confirm=True)\n File \"/usr/local/lib/python2.7/dist-packages/pip-8.1.2-py2.7.egg/pip/req/req_install.py\ ", line 742, in uninstall\n paths_to_remove.remove(auto_confirm)\n File \"/usr/local/lib/python2.7/dist-packages/pip-8.1.2-py2.7.egg/pip/req/req_uninstall.py\", line 115, in remove\n renames(path, new_path)\n File \"/usr/local/lib /python2.7/dist-packages/pip-8.1.2-py2.7.egg/pip/utils/__init__.py\", line 267, in renames\n shutil.move(old, new)\n File \"/usr/lib/python2.7/shutil.py\", line 303, in move\n os.unlink(src)\nOSError: [Errno 13] Permission denied: '/usr/local/bin/easy_install'\n"} ```
main
pip does not working with virtualenv issue type bug report component name pip ansible version configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables none os environment mention the os you are running ansible from and the os you are managing or say โ€œn aโ€ for anything that is not platform specific debian jessie summary pip does not work with pyvenv after update to ansible steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used task name update setuptools and pip become true become user www data pip name item state latest virtualenv www env site virtualenv command pyvenv with items pip setuptools wheel expected results successful update of pip and others in www env site virtualenv actual results fails on update system failed item setuptools cmd usr local bin install u setuptools failed true item setuptools msg stdout collecting setuptools n using cached setuptools none any whl ninstalling collec ted packages setuptools n found existing installation setuptools n uninstalling setuptools n n stderr exception ntraceback most recent call last n file usr local lib dist packages pip eg g pip basecommand py line in main n status self run options args n file usr local lib dist packages pip egg pip commands install py line in run n prefix options prefix path n file u sr local lib dist packages pip egg pip req req set py line in install n requirement uninstall auto confirm true n file usr local lib dist packages pip egg pip req req install py line in uninstall n paths to remove remove auto confirm n file usr local lib dist packages pip egg pip req req uninstall py line in remove n renames path new path n file usr local lib dist packages pip egg pip utils init py line in renames n shutil move old new n file usr lib shutil py line in move n os unlink src noserror permission denied usr local bin easy install n
1
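The failure in this record comes down to executable resolution: the task ended up invoking the system /usr/local/bin/pip2 instead of the pip inside /www/env/site. A small sketch of the expected lookup order, with illustrative paths and fallback behaviour that are assumptions for the sketch rather than Ansible's actual pip module code:

```python
# Sketch of how a pip task is expected to pick its executable when a
# virtualenv is supplied: prefer <virtualenv>/bin/pip, never the system pip.
import os
from typing import Optional

def resolve_pip(virtualenv: Optional[str]) -> str:
    if virtualenv:
        candidate = os.path.join(virtualenv, "bin", "pip")
        if os.path.exists(candidate):
            return candidate            # e.g. /www/env/site/bin/pip
        raise FileNotFoundError(f"no pip inside virtualenv {virtualenv!r}")
    return "/usr/local/bin/pip2"        # system fallback (the path hit in the report)

# With the playbook above, the expected resolution is the venv's pip:
# resolve_pip("/www/env/site") -> "/www/env/site/bin/pip"
```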
232,049
18,841,695,225
IssuesEvent
2021-11-11 10:18:26
cortexproject/cortex
https://api.github.com/repos/cortexproject/cortex
closed
Flaky TestDeleteSeriesAllIndexBackends
type/flaky-test stale
**Describe the bug** The integration test `TestDeleteSeriesAllIndexBackends` looks flaky: ``` === RUN TestDeleteSeriesAllIndexBackends 17:26:51 Starting cassandra 17:26:52 Ports for container: e2e-cortex-test-cassandra Mapping: map[9042:32936] 17:26:52 Starting dynamodb 17:26:53 cassandra: CompilerOracle: dontinline org/apache/cassandra/db/Columns$Serializer.deserializeLargeSubset (Lorg/apache/cassandra/io/util/DataInputPlus;Lorg/apache/cassandra/db/Columns;I)Lorg/apache/cassandra/db/Columns; 17:26:53 cassandra: CompilerOracle: dontinline org/apache/cassandra/db/Columns$Serializer.serializeLargeSubset (Ljava/util/Collection;ILorg/apache/cassandra/db/Columns;ILorg/apache/cassandra/io/util/DataOutputPlus;)V 17:26:53 cassandra: CompilerOracle: dontinline org/apache/cassandra/db/Columns$Serializer.serializeLargeSubsetSize (Ljava/util/Collection;ILorg/apache/cassandra/db/Columns;I)I 17:26:53 cassandra: CompilerOracle: dontinline org/apache/cassandra/db/commitlog/AbstractCommitLogSegmentManager.advanceAllocatingFrom (Lorg/apache/cassandra/db/commitlog/CommitLogSegment;)V 17:26:53 cassandra: CompilerOracle: dontinline org/apache/cassandra/db/transform/BaseIterator.tryGetMoreContents ()Z 17:26:53 cassandra: CompilerOracle: dontinline org/apache/cassandra/db/transform/StoppingTransformation.stop ()V 17:26:53 cassandra: CompilerOracle: dontinline org/apache/cassandra/db/transform/StoppingTransformation.stopInPartition ()V 17:26:53 cassandra: CompilerOracle: dontinline org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.doFlush (I)V 17:26:53 cassandra: CompilerOracle: dontinline org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.writeExcessSlow ()V 17:26:53 cassandra: CompilerOracle: dontinline org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.writeSlow (JI)V 17:26:53 cassandra: CompilerOracle: dontinline org/apache/cassandra/io/util/RebufferingInputStream.readPrimitiveSlowly (I)J 17:26:53 cassandra: CompilerOracle: inline org/apache/cassandra/db/rows/UnfilteredSerializer.serializeRowBody (Lorg/apache/cassandra/db/rows/Row;ILorg/apache/cassandra/db/SerializationHeader;Lorg/apache/cassandra/io/util/DataOutputPlus;)V 17:26:53 cassandra: CompilerOracle: inline org/apache/cassandra/io/util/Memory.checkBounds (JJ)V 17:26:53 cassandra: CompilerOracle: inline org/apache/cassandra/io/util/SafeMemory.checkBounds (JJ)V 17:26:53 cassandra: CompilerOracle: inline org/apache/cassandra/utils/AsymmetricOrdering.selectBoundary (Lorg/apache/cassandra/utils/AsymmetricOrdering/Op;II)I 17:26:53 cassandra: CompilerOracle: inline org/apache/cassandra/utils/AsymmetricOrdering.strictnessOfLessThan (Lorg/apache/cassandra/utils/AsymmetricOrdering/Op;)I 17:26:53 cassandra: CompilerOracle: inline org/apache/cassandra/utils/BloomFilter.indexes (Lorg/apache/cassandra/utils/IFilter/FilterKey;)[J 17:26:53 cassandra: CompilerOracle: inline org/apache/cassandra/utils/BloomFilter.setIndexes (JJIJ[J)V 17:26:53 cassandra: CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compare (Ljava/nio/ByteBuffer;[B)I 17:26:53 cassandra: CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compare ([BLjava/nio/ByteBuffer;)I 17:26:53 cassandra: CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compareUnsigned (Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)I 17:26:53 cassandra: CompilerOracle: inline org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo (Ljava/lang/Object;JILjava/lang/Object;JI)I 17:26:53 cassandra: CompilerOracle: inline 
org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo (Ljava/lang/Object;JILjava/nio/ByteBuffer;)I 17:26:53 cassandra: CompilerOracle: inline org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo (Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)I 17:26:53 cassandra: CompilerOracle: inline org/apache/cassandra/utils/vint/VIntCoding.encodeVInt (JI)[B 17:26:54 Ports for container: e2e-cortex-test-dynamodb Mapping: map[8000:32937] 17:26:54 Starting bigtable 17:26:54 dynamodb: Initializing DynamoDB Local with the following configuration: 17:26:54 dynamodb: Port: 8000 17:26:54 dynamodb: InMemory: true 17:26:54 dynamodb: DbPath: null 17:26:54 dynamodb: SharedDb: true 17:26:54 dynamodb: shouldDelayTransientStatuses: false 17:26:54 dynamodb: CorsParams: * 17:26:55 cassandra: INFO [main] 2020-10-28 17:26:55,542 YamlConfigurationLoader.java:89 - Configuration location: file:/etc/cassandra/cassandra.yaml 17:26:55 bigtable: Bigtable emulator running on [::]:9035 17:26:56 Ports for container: e2e-cortex-test-bigtable Mapping: map[9035:32938] 17:26:56 Starting consul 17:26:56 cassandra: INFO [main] 2020-10-28 17:26:56,870 Config.java:481 - Node configuration:[allocate_tokens_for_keyspace=null; authenticator=AllowAllAuthenticator; authorizer=AllowAllAuthorizer; auto_bootstrap=true; auto_snapshot=true; back_pressure_enabled=false; back_pressure_strategy=org.apache.cassandra.net.RateBasedBackPressure{high_ratio=0.9, factor=5, flow=FAST}; batch_size_fail_threshold_in_kb=50; batch_size_warn_threshold_in_kb=5; batchlog_replay_throttle_in_kb=1024; broadcast_address=192.168.48.2; broadcast_rpc_address=192.168.48.2; buffer_pool_use_heap_if_exhausted=true; cas_contention_timeout_in_ms=1000; cdc_enabled=false; cdc_free_space_check_interval_ms=250; cdc_raw_directory=null; cdc_total_space_in_mb=0; client_encryption_options=<REDACTED>; cluster_name=Test Cluster; column_index_cache_size_in_kb=2; column_index_size_in_kb=64; commit_failure_policy=stop; commitlog_compression=null; commitlog_directory=/var/lib/cassandra/commitlog; commitlog_max_compression_buffers_in_pool=3; commitlog_periodic_queue_size=-1; commitlog_segment_size_in_mb=32; commitlog_sync=periodic; commitlog_sync_batch_window_in_ms=NaN; commitlog_sync_period_in_ms=10000; commitlog_total_space_in_mb=null; compaction_large_partition_warning_threshold_mb=100; compaction_throughput_mb_per_sec=16; concurrent_compactors=null; concurrent_counter_writes=32; concurrent_materialized_view_writes=32; concurrent_reads=32; concurrent_replicates=null; concurrent_writes=32; counter_cache_keys_to_save=2147483647; counter_cache_save_period=7200; counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; credentials_cache_max_entries=1000; credentials_update_interval_in_ms=-1; credentials_validity_in_ms=2000; cross_node_timeout=false; data_file_directories=[Ljava.lang.String;@37e547da; disk_access_mode=auto; disk_failure_policy=stop; disk_optimization_estimate_percentile=0.95; disk_optimization_page_cross_chance=0.1; disk_optimization_strategy=ssd; dynamic_snitch=true; dynamic_snitch_badness_threshold=0.1; dynamic_snitch_reset_interval_in_ms=600000; dynamic_snitch_update_interval_in_ms=100; enable_scripted_user_defined_functions=false; enable_user_defined_functions=false; enable_user_defined_functions_threads=true; encryption_options=null; endpoint_snitch=SimpleSnitch; file_cache_size_in_mb=null; gc_log_threshold_in_ms=200; gc_warn_threshold_in_ms=1000; hinted_handoff_disabled_datacenters=[]; hinted_handoff_enabled=true; 
hinted_handoff_throttle_in_kb=1024; hints_compression=null; hints_directory=null; hints_flush_period_in_ms=10000; incremental_backups=false; index_interval=null; index_summary_capacity_in_mb=null; index_summary_resize_interval_in_minutes=60; initial_token=0; inter_dc_stream_throughput_outbound_megabits_per_sec=200; inter_dc_tcp_nodelay=false; internode_authenticator=null; internode_compression=dc; internode_recv_buff_size_in_bytes=0; internode_send_buff_size_in_bytes=0; key_cache_keys_to_save=2147483647; key_cache_save_period=14400; key_cache_size_in_mb=null; listen_address=192.168.48.2; listen_interface=null; listen_interface_prefer_ipv6=false; listen_on_broadcast_address=false; max_hint_window_in_ms=10800000; max_hints_delivery_threads=2; max_hints_file_size_in_mb=128; max_mutation_size_in_kb=null; max_streaming_retries=3; max_value_size_in_mb=256; memtable_allocation_type=heap_buffers; memtable_cleanup_threshold=null; memtable_flush_writers=0; memtable_heap_space_in_mb=null; memtable_offheap_space_in_mb=null; min_free_space_per_drive_in_mb=50; native_transport_max_concurrent_connections=-1; native_transport_max_concurrent_connections_per_ip=-1; native_transport_max_frame_size_in_mb=256; native_transport_max_threads=128; native_transport_port=9042; native_transport_port_ssl=null; num_tokens=1; otc_backlog_expiration_interval_ms=200; otc_coalescing_enough_coalesced_messages=8; otc_coalescing_strategy=DISABLED; otc_coalescing_window_us=200; partitioner=org.apache.cassandra.dht.Murmur3Partitioner; permissions_cache_max_entries=1000; permissions_update_interval_in_ms=-1; permissions_validity_in_ms=2000; phi_convict_threshold=8.0; prepared_statements_cache_size_mb=null; range_request_timeout_in_ms=10000; read_request_timeout_in_ms=5000; request_scheduler=org.apache.cassandra.scheduler.NoScheduler; request_scheduler_id=null; request_scheduler_options=null; request_timeout_in_ms=10000; role_manager=CassandraRoleManager; roles_cache_max_entries=1000; roles_update_interval_in_ms=-1; roles_validity_in_ms=2000; row_cache_class_name=org.apache.cassandra.cache.OHCProvider; row_cache_keys_to_save=2147483647; row_cache_save_period=0; row_cache_size_in_mb=0; rpc_address=0.0.0.0; rpc_interface=null; rpc_interface_prefer_ipv6=false; rpc_keepalive=true; rpc_listen_backlog=50; rpc_max_threads=2147483647; rpc_min_threads=16; rpc_port=9160; rpc_recv_buff_size_in_bytes=null; rpc_send_buff_size_in_bytes=null; rpc_server_type=sync; saved_caches_directory=/var/lib/cassandra/saved_caches; seed_provider=org.apache.cassandra.locator.SimpleSeedProvider{seeds=192.168.48.2}; server_encryption_options=<REDACTED>; slow_query_log_timeout_in_ms=500; snapshot_before_compaction=false; ssl_storage_port=7001; sstable_preemptive_open_interval_in_mb=50; start_native_transport=true; start_rpc=false; storage_port=7000; stream_throughput_outbound_megabits_per_sec=200; streaming_keep_alive_period_in_secs=300; streaming_socket_timeout_in_ms=86400000; thrift_framed_transport_size_in_mb=15; thrift_max_message_length_in_mb=16; thrift_prepared_statements_cache_size_mb=null; tombstone_failure_threshold=100000; tombstone_warn_threshold=1000; tracetype_query_ttl=86400; tracetype_repair_ttl=604800; transparent_data_encryption_options=org.apache.cassandra.config.TransparentDataEncryptionOptions@2b6856dd; trickle_fsync=false; trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=60000; unlogged_batch_across_partitions_warn_threshold=10; user_defined_function_fail_timeout=1500; user_defined_function_warn_timeout=500; 
user_function_timeout_policy=die; windows_timer_interval=1; write_request_timeout_in_ms=2000] 17:26:56 cassandra: INFO [main] 2020-10-28 17:26:56,871 DatabaseDescriptor.java:366 - DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap 17:26:56 cassandra: INFO [main] 2020-10-28 17:26:56,872 DatabaseDescriptor.java:420 - Global memtable on-heap threshold is enabled at 462MB 17:26:56 cassandra: INFO [main] 2020-10-28 17:26:56,872 DatabaseDescriptor.java:424 - Global memtable off-heap threshold is enabled at 462MB 17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,151 RateBasedBackPressure.java:123 - Initialized back-pressure with high ratio: 0.9, factor: 5, flow: FAST, window size: 2000. 17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,152 DatabaseDescriptor.java:710 - Back-pressure is disabled with strategy org.apache.cassandra.net.RateBasedBackPressure{high_ratio=0.9, factor=5, flow=FAST}. 17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,373 JMXServerUtils.java:249 - Configured JMX server at: service:jmx:rmi://127.0.0.1/jndi/rmi://127.0.0.1:7199/jmxrmi 17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,382 CassandraDaemon.java:471 - Hostname: cassandra 17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,382 CassandraDaemon.java:478 - JVM vendor/version: OpenJDK 64-Bit Server VM/1.8.0_131 17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,384 CassandraDaemon.java:479 - Heap size: 1.805GiB/1.805GiB 17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,385 CassandraDaemon.java:484 - Code Cache Non-heap memory: init = 2555904(2496K) used = 4501952(4396K) committed = 4521984(4416K) max = 251658240(245760K) 17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,386 CassandraDaemon.java:484 - Metaspace Non-heap memory: init = 0(0K) used = 17549904(17138K) committed = 18219008(17792K) max = -1(-1K) 17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,387 CassandraDaemon.java:484 - Compressed Class Space Non-heap memory: init = 0(0K) used = 2090424(2041K) committed = 2228224(2176K) max = 1073741824(1048576K) 17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,388 CassandraDaemon.java:484 - Par Eden Space Heap memory: init = 167772160(163840K) used = 94023544(91819K) committed = 167772160(163840K) max = 167772160(163840K) 17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,389 CassandraDaemon.java:484 - Par Survivor Space Heap memory: init = 20971520(20480K) used = 0(0K) committed = 20971520(20480K) max = 20971520(20480K) 17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,390 CassandraDaemon.java:484 - CMS Old Gen Heap memory: init = 1749024768(1708032K) used = 0(0K) committed = 1749024768(1708032K) max = 1749024768(1708032K) 17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,391 CassandraDaemon.java:486 - Classpath: 
/etc/cassandra:/usr/share/cassandra/lib/HdrHistogram-2.1.9.jar:/usr/share/cassandra/lib/ST4-4.0.8.jar:/usr/share/cassandra/lib/airline-0.6.jar:/usr/share/cassandra/lib/antlr-runtime-3.5.2.jar:/usr/share/cassandra/lib/asm-5.0.4.jar:/usr/share/cassandra/lib/caffeine-2.2.6.jar:/usr/share/cassandra/lib/cassandra-driver-core-3.0.1-shaded.jar:/usr/share/cassandra/lib/commons-cli-1.1.jar:/usr/share/cassandra/lib/commons-codec-1.9.jar:/usr/share/cassandra/lib/commons-lang3-3.1.jar:/usr/share/cassandra/lib/commons-math3-3.2.jar:/usr/share/cassandra/lib/compress-lzf-0.8.4.jar:/usr/share/cassandra/lib/concurrent-trees-2.4.0.jar:/usr/share/cassandra/lib/concurrentlinkedhashmap-lru-1.4.jar:/usr/share/cassandra/lib/disruptor-3.0.1.jar:/usr/share/cassandra/lib/ecj-4.4.2.jar:/usr/share/cassandra/lib/guava-18.0.jar:/usr/share/cassandra/lib/high-scale-lib-1.0.6.jar:/usr/share/cassandra/lib/hppc-0.5.4.jar:/usr/share/cassandra/lib/jackson-core-asl-1.9.2.jar:/usr/share/cassandra/lib/jackson-mapper-asl-1.9.2.jar:/usr/share/cassandra/lib/jamm-0.3.0.jar:/usr/share/cassandra/lib/javax.inject.jar:/usr/share/cassandra/lib/jbcrypt-0.3m.jar:/usr/share/cassandra/lib/jcl-over-slf4j-1.7.7.jar:/usr/share/cassandra/lib/jctools-core-1.2.1.jar:/usr/share/cassandra/lib/jflex-1.6.0.jar:/usr/share/cassandra/lib/jna-4.4.0.jar:/usr/share/cassandra/lib/joda-time-2.4.jar:/usr/share/cassandra/lib/json-simple-1.1.jar:/usr/share/cassandra/lib/jstackjunit-0.0.1.jar:/usr/share/cassandra/lib/libthrift-0.9.2.jar:/usr/share/cassandra/lib/log4j-over-slf4j-1.7.7.jar:/usr/share/cassandra/lib/logback-classic-1.1.3.jar:/usr/share/cassandra/lib/logback-core-1.1.3.jar:/usr/share/cassandra/lib/lz4-1.3.0.jar:/usr/share/cassandra/lib/metrics-core-3.1.0.jar:/usr/share/cassandra/lib/metrics-jvm-3.1.0.jar:/usr/share/cassandra/lib/metrics-logback-3.1.0.jar:/usr/share/cassandra/lib/netty-all-4.0.44.Final.jar:/usr/share/cassandra/lib/ohc-core-0.4.4.jar:/usr/share/cassandra/lib/ohc-core-j8-0.4.4.jar:/usr/share/cassandra/lib/reporter-config-base-3.0.3.jar:/usr/share/cassandra/lib/reporter-config3-3.0.3.jar:/usr/share/cassandra/lib/sigar-1.6.4.jar:/usr/share/cassandra/lib/slf4j-api-1.7.7.jar:/usr/share/cassandra/lib/snakeyaml-1.11.jar:/usr/share/cassandra/lib/snappy-java-1.1.1.7.jar:/usr/share/cassandra/lib/snowball-stemmer-1.3.0.581.1.jar:/usr/share/cassandra/lib/stream-2.5.2.jar:/usr/share/cassandra/lib/thrift-server-0.3.7.jar:/usr/share/cassandra/apache-cassandra-3.11.0.jar:/usr/share/cassandra/apache-cassandra-thrift-3.11.0.jar:/usr/share/cassandra/apache-cassandra.jar:/usr/share/cassandra/stress.jar::/usr/share/cassandra/lib/jamm-0.3.0.jar 17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,392 CassandraDaemon.java:488 - JVM Arguments: [-Xloggc:/var/log/cassandra/gc.log, -ea, -XX:+UseThreadPriorities, -XX:ThreadPriorityPolicy=42, -XX:+HeapDumpOnOutOfMemoryError, -Xss256k, -XX:StringTableSize=1000003, -XX:+AlwaysPreTouch, -XX:-UseBiasedLocking, -XX:+UseTLAB, -XX:+ResizeTLAB, -XX:+UseNUMA, -XX:+PerfDisableSharedMem, -Djava.net.preferIPv4Stack=true, -XX:+UseParNewGC, -XX:+UseConcMarkSweepGC, -XX:+CMSParallelRemarkEnabled, -XX:SurvivorRatio=8, -XX:MaxTenuringThreshold=1, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:CMSWaitDuration=10000, -XX:+CMSParallelInitialMarkEnabled, -XX:+CMSEdenChunksRecordAlways, -XX:+CMSClassUnloadingEnabled, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintHeapAtGC, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -XX:+PrintPromotionFailure, -XX:+UseGCLogFileRotation, 
-XX:NumberOfGCLogFiles=10, -XX:GCLogFileSize=10M, -Xms1868M, -Xmx1868M, -Xmn200M, -XX:CompileCommandFile=/etc/cassandra/hotspot_compiler, -javaagent:/usr/share/cassandra/lib/jamm-0.3.0.jar, -Dcassandra.jmx.local.port=7199, -Dcom.sun.management.jmxremote.authenticate=false, -Dcom.sun.management.jmxremote.password.file=/etc/cassandra/jmxremote.password, -Djava.library.path=/usr/share/cassandra/lib/sigar-bin, -Dcassandra.initial_token=0, -Dcassandra.skip_wait_for_gossip_to_settle=0, -Dcassandra.libjemalloc=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1, -Dlogback.configurationFile=logback.xml, -Dcassandra.logdir=/var/log/cassandra, -Dcassandra.storagedir=/var/lib/cassandra, -Dcassandra-foreground=yes] 17:26:57 cassandra: WARN [main] 2020-10-28 17:26:57,578 NativeLibrary.java:187 - Unable to lock JVM memory (ENOMEM). This can result in part of the JVM being swapped out, especially with mmapped I/O enabled. Increase RLIMIT_MEMLOCK or run Cassandra as root. 17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,579 StartupChecks.java:131 - jemalloc seems to be preloaded from /usr/lib/x86_64-linux-gnu/libjemalloc.so.1 17:26:57 cassandra: WARN [main] 2020-10-28 17:26:57,580 StartupChecks.java:160 - JMX is not enabled to receive remote connections. Please see cassandra-env.sh for more info. 17:26:57 cassandra: WARN [main] 2020-10-28 17:26:57,580 StartupChecks.java:197 - OpenJDK is not recommended. Please upgrade to the newest Oracle Java release 17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,582 SigarLibrary.java:44 - Initializing SIGAR library 17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,620 SigarLibrary.java:180 - Checked OS settings and found them configured for optimal performance. 17:26:57 cassandra: WARN [main] 2020-10-28 17:26:57,632 StartupChecks.java:265 - Maximum number of memory map areas per process (vm.max_map_count) 65530 is too low, recommended value: 1048575, you can change it with sysctl. 17:26:57 cassandra: WARN [main] 2020-10-28 17:26:57,687 StartupChecks.java:286 - Directory /var/lib/cassandra/data doesn't exist 17:26:57 cassandra: WARN [main] 2020-10-28 17:26:57,701 StartupChecks.java:286 - Directory /var/lib/cassandra/commitlog doesn't exist 17:26:57 cassandra: WARN [main] 2020-10-28 17:26:57,702 StartupChecks.java:286 - Directory /var/lib/cassandra/saved_caches doesn't exist 17:26:57 cassandra: WARN [main] 2020-10-28 17:26:57,709 StartupChecks.java:286 - Directory /var/lib/cassandra/hints doesn't exist 17:26:57 Ports for container: e2e-cortex-test-consul Mapping: map[8500:32939] 17:26:57 consul: ==> Starting Consul agent... 17:26:57 consul: Version: '1.8.4' 17:26:57 consul: Node ID: 'd7e32fc8-ba7c-3343-1de9-6a94a903d1fd' 17:26:57 consul: Node name: 'consul' 17:26:57 consul: Datacenter: 'dc1' (Segment: '<all>') 17:26:57 consul: Server: true (Bootstrap: false) 17:26:57 consul: Client Addr: [0.0.0.0] (HTTP: 8500, HTTPS: -1, gRPC: 8502, DNS: 8600) 17:26:57 consul: Cluster Addr: 127.0.0.1 (LAN: 8301, WAN: 8302) 17:26:57 consul: Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false, Auto-Encrypt-TLS: false 17:26:57 consul: ==> Log data will now stream in as it occurs: 17:26:57 consul: ==> Consul agent running! 
17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,880 QueryProcessor.java:115 - Initialized prepared statement caches with 10 MB (native) and 10 MB (Thrift) 17:26:58 cassandra: INFO [main] 2020-10-28 17:26:58,875 ColumnFamilyStore.java:406 - Initializing system.IndexInfo 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,085 ColumnFamilyStore.java:406 - Initializing system.batches 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,092 ColumnFamilyStore.java:406 - Initializing system.paxos 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,105 ColumnFamilyStore.java:406 - Initializing system.local 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,121 ColumnFamilyStore.java:406 - Initializing system.peers 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,138 ColumnFamilyStore.java:406 - Initializing system.peer_events 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,152 ColumnFamilyStore.java:406 - Initializing system.range_xfers 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,162 ColumnFamilyStore.java:406 - Initializing system.compaction_history 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,174 ColumnFamilyStore.java:406 - Initializing system.sstable_activity 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,187 ColumnFamilyStore.java:406 - Initializing system.size_estimates 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,197 ColumnFamilyStore.java:406 - Initializing system.available_ranges 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,211 ColumnFamilyStore.java:406 - Initializing system.transferred_ranges 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,223 ColumnFamilyStore.java:406 - Initializing system.views_builds_in_progress 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,233 ColumnFamilyStore.java:406 - Initializing system.built_views 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,243 ColumnFamilyStore.java:406 - Initializing system.hints 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,252 ColumnFamilyStore.java:406 - Initializing system.batchlog 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,259 ColumnFamilyStore.java:406 - Initializing system.prepared_statements 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,265 ColumnFamilyStore.java:406 - Initializing system.schema_keyspaces 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,273 ColumnFamilyStore.java:406 - Initializing system.schema_columnfamilies 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,281 ColumnFamilyStore.java:406 - Initializing system.schema_columns 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,290 ColumnFamilyStore.java:406 - Initializing system.schema_triggers 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,299 ColumnFamilyStore.java:406 - Initializing system.schema_usertypes 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,308 ColumnFamilyStore.java:406 - Initializing system.schema_functions 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,320 ColumnFamilyStore.java:406 - Initializing system.schema_aggregates 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,322 ViewManager.java:137 - Not submitting build tasks for views in keyspace system as storage service is not initialized 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,431 ApproximateTime.java:44 - Scheduling approximate time-check task with a precision of 10 milliseconds 17:27:02 cassandra: INFO [MemtableFlushWriter:1] 2020-10-28 17:27:02,344 CacheService.java:112 - Initializing key cache with capacity of 92 MBs. 
17:27:02 cassandra: INFO [MemtableFlushWriter:1] 2020-10-28 17:27:02,356 CacheService.java:134 - Initializing row cache with capacity of 0 MBs 17:27:02 cassandra: INFO [MemtableFlushWriter:1] 2020-10-28 17:27:02,359 CacheService.java:163 - Initializing counter cache with capacity of 46 MBs 17:27:02 cassandra: INFO [MemtableFlushWriter:1] 2020-10-28 17:27:02,361 CacheService.java:174 - Scheduling counter cache save to every 7200 seconds (going to save all keys). 17:27:02 cassandra: INFO [main] 2020-10-28 17:27:02,716 StorageService.java:599 - Populating token metadata from system tables 17:27:02 cassandra: INFO [main] 2020-10-28 17:27:02,771 BufferPool.java:230 - Global buffer pool is enabled, when pool is exhausted (max is 462.000MiB) it will allocate on heap 17:27:02 cassandra: INFO [main] 2020-10-28 17:27:02,952 StorageService.java:606 - Token metadata: 17:27:02 cassandra: INFO [main] 2020-10-28 17:27:02,984 ColumnFamilyStore.java:406 - Initializing system_schema.keyspaces 17:27:02 cassandra: INFO [main] 2020-10-28 17:27:02,994 ColumnFamilyStore.java:406 - Initializing system_schema.tables 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,010 ColumnFamilyStore.java:406 - Initializing system_schema.columns 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,033 ColumnFamilyStore.java:406 - Initializing system_schema.triggers 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,052 ColumnFamilyStore.java:406 - Initializing system_schema.dropped_columns 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,059 ColumnFamilyStore.java:406 - Initializing system_schema.views 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,067 ColumnFamilyStore.java:406 - Initializing system_schema.types 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,077 ColumnFamilyStore.java:406 - Initializing system_schema.functions 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,094 ColumnFamilyStore.java:406 - Initializing system_schema.aggregates 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,108 ColumnFamilyStore.java:406 - Initializing system_schema.indexes 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,115 ViewManager.java:137 - Not submitting build tasks for views in keyspace system_schema as storage service is not initialized 17:27:03 cassandra: INFO [pool-3-thread-1] 2020-10-28 17:27:03,180 AutoSavingCache.java:173 - Completed loading (13 ms; 4 keys) KeyCache cache 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,238 CommitLog.java:152 - No commitlog files found; skipping replay 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,238 StorageService.java:599 - Populating token metadata from system tables 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,292 StorageService.java:606 - Token metadata: 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,567 QueryProcessor.java:162 - Preloaded 0 prepared statements 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,569 StorageService.java:617 - Cassandra version: 3.11.0 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,569 StorageService.java:618 - Thrift API version: 20.1.0 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,570 StorageService.java:619 - CQL supported versions: 3.4.4 (default: 3.4.4) 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,570 StorageService.java:621 - Native protocol supported versions: 3/v3, 4/v4, 5/v5-beta (default: 4/v4) 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,650 IndexSummaryManager.java:85 - Initializing index summary manager with a memory pool size of 92 MB and a resize interval of 60 minutes 
17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,679 MessagingService.java:753 - Starting Messaging Service on /192.168.48.2:7000 (eth0) 17:27:03 cassandra: WARN [main] 2020-10-28 17:27:03,699 SystemKeyspace.java:1083 - No host ID found, created 809ce138-4196-424c-bdf8-b6b79111fe01 (Note: This should happen exactly once per node). 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,792 StorageService.java:706 - Loading persisted ring state 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,816 StorageService.java:819 - Starting up server gossip 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,975 StorageService.java:857 - This node will not auto bootstrap because it is configured to be a seed node. 17:27:04 cassandra: INFO [main] 2020-10-28 17:27:04,015 StorageService.java:984 - Saved tokens not found. Using configuration value: [0] 17:27:04 cassandra: INFO [main] 2020-10-28 17:27:04,040 MigrationManager.java:310 - Create new Keyspace: KeyspaceMetadata{name=system_traces, params=KeyspaceParams{durable_writes=true, replication=ReplicationParams{class=org.apache.cassandra.locator.SimpleStrategy, replication_factor=2}}, tables=[org.apache.cassandra.config.CFMetaData@e3b3b75[cfId=c5e99f16-8677-3914-b17e-960613512345,ksName=system_traces,cfName=sessions,flags=[COMPOUND],params=TableParams{comment=tracing sessions, read_repair_chance=0.0, dclocal_read_repair_chance=0.0, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=0, default_time_to_live=0, memtable_flush_period_in_ms=3600000, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@9a6cfe3, extensions={}, cdc=false},comparator=comparator(),partitionColumns=[[] | [client command coordinator duration request started_at parameters]],partitionKeyColumns=[session_id],clusteringColumns=[],keyValidator=org.apache.cassandra.db.marshal.UUIDType,columnMetadata=[client, command, session_id, coordinator, request, started_at, duration, parameters],droppedColumns={},triggers=[],indexes=[]], org.apache.cassandra.config.CFMetaData@54f732c4[cfId=8826e8e9-e16a-3728-8753-3bc1fc713c25,ksName=system_traces,cfName=events,flags=[COMPOUND],params=TableParams{comment=tracing events, read_repair_chance=0.0, dclocal_read_repair_chance=0.0, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=0, default_time_to_live=0, memtable_flush_period_in_ms=3600000, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@9a6cfe3, extensions={}, cdc=false},comparator=comparator(org.apache.cassandra.db.marshal.TimeUUIDType),partitionColumns=[[] | [activity source source_elapsed thread]],partitionKeyColumns=[session_id],clusteringColumns=[event_id],keyValidator=org.apache.cassandra.db.marshal.UUIDType,columnMetadata=[activity, event_id, session_id, source, thread, source_elapsed],droppedColumns={},triggers=[],indexes=[]]], views=[], functions=[], types=[]} 17:27:04 cassandra: INFO [MigrationStage:1] 2020-10-28 17:27:04,597 ViewManager.java:137 - Not submitting build tasks for views 
in keyspace system_traces as storage service is not initialized 17:27:04 cassandra: INFO [MigrationStage:1] 2020-10-28 17:27:04,658 ColumnFamilyStore.java:406 - Initializing system_traces.events 17:27:04 cassandra: INFO [MigrationStage:1] 2020-10-28 17:27:04,701 ColumnFamilyStore.java:406 - Initializing system_traces.sessions 17:27:04 cassandra: INFO [main] 2020-10-28 17:27:04,790 MigrationManager.java:310 - Create new Keyspace: KeyspaceMetadata{name=system_distributed, params=KeyspaceParams{durable_writes=true, replication=ReplicationParams{class=org.apache.cassandra.locator.SimpleStrategy, replication_factor=3}}, tables=[org.apache.cassandra.config.CFMetaData@62103d37[cfId=759fffad-624b-3181-80ee-fa9a52d1f627,ksName=system_distributed,cfName=repair_history,flags=[COMPOUND],params=TableParams{comment=Repair history, read_repair_chance=0.0, dclocal_read_repair_chance=0.0, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=864000, default_time_to_live=0, memtable_flush_period_in_ms=3600000, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@9a6cfe3, extensions={}, cdc=false},comparator=comparator(org.apache.cassandra.db.marshal.TimeUUIDType),partitionColumns=[[] | [coordinator exception_message exception_stacktrace finished_at parent_id range_begin range_end started_at status participants]],partitionKeyColumns=[keyspace_name, columnfamily_name],clusteringColumns=[id],keyValidator=org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type),columnMetadata=[status, id, coordinator, finished_at, participants, exception_stacktrace, parent_id, range_end, range_begin, exception_message, keyspace_name, started_at, columnfamily_name],droppedColumns={},triggers=[],indexes=[]], org.apache.cassandra.config.CFMetaData@53493c07[cfId=deabd734-b99d-3b9c-92e5-fd92eb5abf14,ksName=system_distributed,cfName=parent_repair_history,flags=[COMPOUND],params=TableParams{comment=Repair history, read_repair_chance=0.0, dclocal_read_repair_chance=0.0, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=864000, default_time_to_live=0, memtable_flush_period_in_ms=3600000, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@9a6cfe3, extensions={}, cdc=false},comparator=comparator(),partitionColumns=[[] | [exception_message exception_stacktrace finished_at keyspace_name started_at columnfamily_names options requested_ranges successful_ranges]],partitionKeyColumns=[parent_id],clusteringColumns=[],keyValidator=org.apache.cassandra.db.marshal.TimeUUIDType,columnMetadata=[requested_ranges, exception_message, keyspace_name, successful_ranges, started_at, finished_at, options, exception_stacktrace, parent_id, columnfamily_names],droppedColumns={},triggers=[],indexes=[]], 
org.apache.cassandra.config.CFMetaData@550fa2cc[cfId=5582b59f-8e4e-35e1-b913-3acada51eb04,ksName=system_distributed,cfName=view_build_status,flags=[COMPOUND],params=TableParams{comment=Materialized View build status, read_repair_chance=0.0, dclocal_read_repair_chance=0.0, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=864000, default_time_to_live=0, memtable_flush_period_in_ms=3600000, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@9a6cfe3, extensions={}, cdc=false},comparator=comparator(org.apache.cassandra.db.marshal.UUIDType),partitionColumns=[[] | [status]],partitionKeyColumns=[keyspace_name, view_name],clusteringColumns=[host_id],keyValidator=org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type),columnMetadata=[view_name, status, keyspace_name, host_id],droppedColumns={},triggers=[],indexes=[]]], views=[], functions=[], types=[]} 17:27:04 cassandra: INFO [MigrationStage:1] 2020-10-28 17:27:04,974 ViewManager.java:137 - Not submitting build tasks for views in keyspace system_distributed as storage service is not initialized 17:27:04 cassandra: INFO [MigrationStage:1] 2020-10-28 17:27:04,982 ColumnFamilyStore.java:406 - Initializing system_distributed.parent_repair_history 17:27:04 cassandra: INFO [MigrationStage:1] 2020-10-28 17:27:04,990 ColumnFamilyStore.java:406 - Initializing system_distributed.repair_history 17:27:04 cassandra: INFO [MigrationStage:1] 2020-10-28 17:27:04,998 ColumnFamilyStore.java:406 - Initializing system_distributed.view_build_status 17:27:05 cassandra: INFO [main] 2020-10-28 17:27:05,051 StorageService.java:1439 - JOINING: Finish joining ring 17:27:05 cassandra: INFO [main] 2020-10-28 17:27:05,191 MigrationManager.java:310 - Create new Keyspace: KeyspaceMetadata{name=system_auth, params=KeyspaceParams{durable_writes=true, replication=ReplicationParams{class=org.apache.cassandra.locator.SimpleStrategy, replication_factor=1}}, tables=[org.apache.cassandra.config.CFMetaData@50a18b03[cfId=5bc52802-de25-35ed-aeab-188eecebb090,ksName=system_auth,cfName=roles,flags=[COMPOUND],params=TableParams{comment=role definitions, read_repair_chance=0.0, dclocal_read_repair_chance=0.0, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=7776000, default_time_to_live=0, memtable_flush_period_in_ms=3600000, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@9a6cfe3, extensions={}, cdc=false},comparator=comparator(),partitionColumns=[[] | [can_login is_superuser salted_hash member_of]],partitionKeyColumns=[role],clusteringColumns=[],keyValidator=org.apache.cassandra.db.marshal.UTF8Type,columnMetadata=[salted_hash, member_of, role, can_login, is_superuser],droppedColumns={},triggers=[],indexes=[]], org.apache.cassandra.config.CFMetaData@4c3a5a64[cfId=0ecdaa87-f8fb-3e60-88d1-74fb36fe5c0d,ksName=system_auth,cfName=role_members,flags=[COMPOUND],params=TableParams{comment=role memberships lookup table, 
read_repair_chance=0.0, dclocal_read_repair_chance=0.0, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=7776000, default_time_to_live=0, memtable_flush_period_in_ms=3600000, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@9a6cfe3, extensions={}, cdc=false},comparator=comparator(org.apache.cassandra.db.marshal.UTF8Type),partitionColumns=[[] | []],partitionKeyColumns=[role],clusteringColumns=[member],keyValidator=org.apache.cassandra.db.marshal.UTF8Type,columnMetadata=[role, member],droppedColumns={},triggers=[],indexes=[]], org.apache.cassandra.config.CFMetaData@721639c8[cfId=3afbe79f-2194-31a7-add7-f5ab90d8ec9c,ksName=system_auth,cfName=role_permissions,flags=[COMPOUND],params=TableParams{comment=permissions granted to db roles, read_repair_chance=0.0, dclocal_read_repair_chance=0.0, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=7776000, default_time_to_live=0, memtable_flush_period_in_ms=3600000, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@9a6cfe3, extensions={}, cdc=false},comparator=comparator(org.apache.cassandra.db.marshal.UTF8Type),partitionColumns=[[] | [permissions]],partitionKeyColumns=[role],clusteringColumns=[resource],keyValidator=org.apache.cassandra.db.marshal.UTF8Type,columnMetadata=[role, resource, permissions],droppedColumns={},triggers=[],indexes=[]], org.apache.cassandra.config.CFMetaData@22f0a26c[cfId=5f2fbdad-91f1-3946-bd25-d5da3a5c35ec,ksName=system_auth,cfName=resource_role_permissons_index,flags=[COMPOUND],params=TableParams{comment=index of db roles with permissions granted on a resource, read_repair_chance=0.0, dclocal_read_repair_chance=0.0, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=7776000, default_time_to_live=0, memtable_flush_period_in_ms=3600000, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@9a6cfe3, extensions={}, cdc=false},comparator=comparator(org.apache.cassandra.db.marshal.UTF8Type),partitionColumns=[[] | []],partitionKeyColumns=[resource],clusteringColumns=[role],keyValidator=org.apache.cassandra.db.marshal.UTF8Type,columnMetadata=[resource, role],droppedColumns={},triggers=[],indexes=[]]], views=[], functions=[], types=[]} 17:27:05 cassandra: INFO [MigrationStage:1] 2020-10-28 17:27:05,320 ViewManager.java:137 - Not submitting build tasks for views in keyspace system_auth as storage service is not initialized 17:27:05 cassandra: INFO [MigrationStage:1] 2020-10-28 17:27:05,325 ColumnFamilyStore.java:406 - Initializing system_auth.resource_role_permissons_index 17:27:05 cassandra: INFO [MigrationStage:1] 2020-10-28 17:27:05,330 ColumnFamilyStore.java:406 - Initializing system_auth.role_members 17:27:05 cassandra: INFO 
[MigrationStage:1] 2020-10-28 17:27:05,336 ColumnFamilyStore.java:406 - Initializing system_auth.role_permissions 17:27:05 cassandra: INFO [MigrationStage:1] 2020-10-28 17:27:05,341 ColumnFamilyStore.java:406 - Initializing system_auth.roles 17:27:05 cassandra: INFO [main] 2020-10-28 17:27:05,481 NativeTransportService.java:70 - Netty using native Epoll event loop 17:27:05 cassandra: INFO [main] 2020-10-28 17:27:05,549 Server.java:155 - Using Netty Version: [netty-buffer=netty-buffer-4.0.44.Final.452812a, netty-codec=netty-codec-4.0.44.Final.452812a, netty-codec-haproxy=netty-codec-haproxy-4.0.44.Final.452812a, netty-codec-http=netty-codec-http-4.0.44.Final.452812a, netty-codec-socks=netty-codec-socks-4.0.44.Final.452812a, netty-common=netty-common-4.0.44.Final.452812a, netty-handler=netty-handler-4.0.44.Final.452812a, netty-tcnative=netty-tcnative-1.1.33.Fork26.142ecbb, netty-transport=netty-transport-4.0.44.Final.452812a, netty-transport-native-epoll=netty-transport-native-epoll-4.0.44.Final.452812a, netty-transport-rxtx=netty-transport-rxtx-4.0.44.Final.452812a, netty-transport-sctp=netty-transport-sctp-4.0.44.Final.452812a, netty-transport-udt=netty-transport-udt-4.0.44.Final.452812a] 17:27:05 cassandra: INFO [main] 2020-10-28 17:27:05,550 Server.java:156 - Starting listening for CQL clients on /0.0.0.0:9042 (unencrypted)... 17:27:05 cassandra: INFO [main] 2020-10-28 17:27:05,607 CassandraDaemon.java:527 - Not starting RPC server as requested. Use JMX (StorageService->startRPCServer()) or nodetool (enablethrift) to start it 17:27:07 Starting table-manager 17:27:08 Ports for container: e2e-cortex-test-table-manager Mapping: map[80:32941 9095:32940] 17:27:12 Stopping table-manager 17:27:13 Starting table-manager 17:27:14 Ports for container: e2e-cortex-test-table-manager Mapping: map[80:32943 9095:32942] 17:27:14 Stopping table-manager 17:27:14 Starting table-manager 17:27:15 cassandra: INFO [OptionalTasks:1] 2020-10-28 17:27:15,635 CassandraRoleManager.java:355 - Created default superuser role 'cassandra' 17:27:15 Ports for container: e2e-cortex-test-table-manager Mapping: map[80:32945 9095:32944] 17:27:15 table-manager: level=error ts=2020-10-28T17:27:15.884067436Z caller=connectionpool.go:523 module=gocql client=table-manager msg="failed to connect" address=192.168.48.2:9042 error="Keyspace 'tests' does not exist" 17:27:16 table-manager: level=error ts=2020-10-28T17:27:16.011569008Z caller=connectionpool.go:523 module=gocql client=table-manager msg="failed to connect" address=192.168.48.2:9042 error="Keyspace 'tests' does not exist" 17:27:16 cassandra: INFO [Native-Transport-Requests-3] 2020-10-28 17:27:16,118 MigrationManager.java:310 - Create new Keyspace: KeyspaceMetadata{name=tests, params=KeyspaceParams{durable_writes=true, replication=ReplicationParams{class=org.apache.cassandra.locator.SimpleStrategy, replication_factor=1}}, tables=[], views=[], functions=[], types=[]} 17:27:16 cassandra: INFO [Native-Transport-Requests-2] 2020-10-28 17:27:16,361 MigrationManager.java:355 - Create new table: org.apache.cassandra.config.CFMetaData@24d9f512[cfId=d3024790-1942-11eb-a22c-7f6011a90b8b,ksName=tests,cfName=cortex_2650,flags=[COMPOUND],params=TableParams{comment=, read_repair_chance=0.0, dclocal_read_repair_chance=0.1, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=864000, default_time_to_live=0, memtable_flush_period_in_ms=0, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, 
compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@9a6cfe3, extensions={}, cdc=false},comparator=comparator(org.apache.cassandra.db.marshal.BytesType),partitionColumns=[[] | [value]],partitionKeyColumns=[hash],clusteringColumns=[range],keyValidator=org.apache.cassandra.db.marshal.UTF8Type,columnMetadata=[hash, range, value],droppedColumns={},triggers=[],indexes=[]] 17:27:16 cassandra: INFO [MigrationStage:1] 2020-10-28 17:27:16,512 ColumnFamilyStore.java:406 - Initializing tests.cortex_2650 17:27:16 cassandra: INFO [Native-Transport-Requests-2] 2020-10-28 17:27:16,560 MigrationManager.java:355 - Create new table: org.apache.cassandra.config.CFMetaData@533764ba[cfId=d320a500-1942-11eb-a22c-7f6011a90b8b,ksName=tests,cfName=cortex_2651,flags=[COMPOUND],params=TableParams{comment=, read_repair_chance=0.0, dclocal_read_repair_chance=0.1, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=864000, default_time_to_live=0, memtable_flush_period_in_ms=0, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@9a6cfe3, extensions={}, cdc=false},comparator=comparator(org.apache.cassandra.db.marshal.BytesType),partitionColumns=[[] | [value]],partitionKeyColumns=[hash],clusteringColumns=[range],keyValidator=org.apache.cassandra.db.marshal.UTF8Type,columnMetadata=[hash, range, value],droppedColumns={},triggers=[],indexes=[]] 17:27:16 cassandra: INFO [MigrationStage:1] 2020-10-28 17:27:16,698 ColumnFamilyStore.java:406 - Initializing tests.cortex_2651 17:27:16 cassandra: INFO [Native-Transport-Requests-3] 2020-10-28 17:27:16,725 MigrationManager.java:355 - Create new table: org.apache.cassandra.config.CFMetaData@2884a214[cfId=d339ab40-1942-11eb-a22c-7f6011a90b8b,ksName=tests,cfName=cortex_chunks_2650,flags=[COMPOUND],params=TableParams{comment=, read_repair_chance=0.0, dclocal_read_repair_chance=0.1, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=864000, default_time_to_live=0, memtable_flush_period_in_ms=0, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@9a6cfe3, extensions={}, cdc=false},comparator=comparator(org.apache.cassandra.db.marshal.BytesType),partitionColumns=[[] | [value]],partitionKeyColumns=[hash],clusteringColumns=[range],keyValidator=org.apache.cassandra.db.marshal.UTF8Type,columnMetadata=[hash, range, value],droppedColumns={},triggers=[],indexes=[]] 17:27:16 cassandra: INFO [MigrationStage:1] 2020-10-28 17:27:16,862 ColumnFamilyStore.java:406 - Initializing tests.cortex_chunks_2650 17:27:16 cassandra: INFO [Native-Transport-Requests-2] 2020-10-28 17:27:16,907 MigrationManager.java:355 - Create new table: org.apache.cassandra.config.CFMetaData@71bd6bba[cfId=d35597b0-1942-11eb-a22c-7f6011a90b8b,ksName=tests,cfName=cortex_chunks_2651,flags=[COMPOUND],params=TableParams{comment=, read_repair_chance=0.0, dclocal_read_repair_chance=0.1, 
bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=864000, default_time_to_live=0, memtable_flush_period_in_ms=0, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@9a6cfe3, extensions={}, cdc=false},comparator=comparator(org.apache.cassandra.db.marshal.BytesType),partitionColumns=[[] | [value]],partitionKeyColumns=[hash],clusteringColumns=[range],keyValidator=org.apache.cassandra.db.marshal.UTF8Type,columnMetadata=[hash, range, value],droppedColumns={},triggers=[],indexes=[]] 17:27:17 cassandra: INFO [MigrationStage:1] 2020-10-28 17:27:17,078 ColumnFamilyStore.java:406 - Initializing tests.cortex_chunks_2651 17:27:17 Stopping table-manager 17:27:17 Starting distributor 17:27:18 Ports for container: e2e-cortex-test-distributor Mapping: map[80:32947 9095:32946] 17:27:18 Starting ingester 17:27:19 Ports for container: e2e-cortex-test-ingester Mapping: map[80:32949 9095:32948] 17:27:19 Starting querier 17:27:21 Ports for container: e2e-cortex-test-querier Mapping: map[80:32951 9095:32950] 17:27:31 Starting purger 17:27:32 Ports for container: e2e-cortex-test-purger Mapping: map[80:32953 9095:32952] 17:27:32 purger: level=warn ts=2020-10-28T17:27:32.152361349Z caller=experimental.go:19 msg="experimental feature in use" feature="Delete series API" 17:27:32 Stopping purger 17:27:33 Starting purger 17:27:34 Ports for container: e2e-cortex-test-purger Mapping: map[80:32955 9095:32954] 17:27:34 purger: level=warn ts=2020-10-28T17:27:34.232641854Z caller=experimental.go:19 msg="experimental feature in use" feature="Delete series API" 17:27:34 purger: level=error ts=2020-10-28T17:27:34.82559081Z caller=purger.go:240 user_id=user-1 request_id=70043089 msg="error removing delete plan" plan_no=1 err="open /shared/user-1:70043089: no such file or directory" chunks_delete_series_test.go:182: Error Trace: chunks_delete_series_test.go:182 Error: Received unexpected error: unable to find metrics [cortex_purger_delete_requests_processed_total] with expected values. Last error: <nil>. 
Last values: [2] Test: TestDeleteSeriesAllIndexBackends 17:27:57 Killing purger 17:27:57 Killing querier 17:27:57 querier: level=error ts=2020-10-28T17:27:57.74970605Z caller=client.go:229 msg="error getting path" key=collectors/ring err="Get \"http://e2e-cortex-test-consul:8500/v1/kv/collectors/ring?index=23&stale=&wait=10000ms\": context canceled" 17:27:58 Killing ingester 17:27:58 ingester: level=warn ts=2020-10-28T17:27:58.162027377Z caller=transfer.go:294 msg="transfer attempt failed" err="cannot find ingester to transfer chunks to: no pending ingesters" attempt=1 max_retries=10 17:27:58 Killing distributor 17:27:58 distributor: level=error ts=2020-10-28T17:27:58.598556915Z caller=client.go:229 msg="error getting path" key=collectors/ring err="Get \"http://e2e-cortex-test-consul:8500/v1/kv/collectors/ring?index=25&stale=&wait=10000ms\": context canceled" 17:27:58 Killing consul 17:27:58 consul: 2020-10-28T17:27:58.983Z [ERROR] agent.server: error performing anti-entropy sync of federation state: error="context canceled" 17:27:59 Killing bigtable 17:27:59 bigtable: done 17:27:59 Killing dynamodb 17:28:00 Killing cassandra 17:28:00 cassandra: INFO [StorageServiceShutdownHook] 2020-10-28 17:28:00,231 HintsService.java:220 - Paused hints dispatch 17:28:00 cassandra: INFO [StorageServiceShutdownHook] 2020-10-28 17:28:00,240 Server.java:176 - Stop listening for CQL clients 17:28:00 cassandra: INFO [StorageServiceShutdownHook] 2020-10-28 17:28:00,245 Gossiper.java:1530 - Announcing shutdown 17:28:00 cassandra: INFO [StorageServiceShutdownHook] 2020-10-28 17:28:00,251 StorageService.java:2255 - Node /192.168.48.2 state jump to shutdown --- FAIL: TestDeleteSeriesAllIndexBackends (69.63s) ``` **To Reproduce** It failed in this [CI execution](https://app.circleci.com/pipelines/github/cortexproject/cortex/9160/workflows/e9bd5e4d-28fe-4a5c-8ef3-759ac45c22ff/jobs/44679).
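For anyone trying to reproduce this outside CI, the failing assertion is essentially "wait until `cortex_purger_delete_requests_processed_total` reaches the expected total". Below is a minimal standalone sketch of that kind of check — scraping the purger's `/metrics` endpoint and polling the counter. The URL, port, timeout, and expected value are placeholders, and the real test uses the Cortex e2e framework's own metric helpers rather than this code:

```go
// Illustrative only; not the Cortex e2e framework's API.
// Scrapes a Prometheus /metrics endpoint and waits until the named
// counter (summed across all label sets) reaches an expected total.
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strconv"
	"strings"
	"time"
)

// sumCounter fetches url and sums every sample of the named metric
// in the Prometheus text exposition format.
func sumCounter(url, name string) (float64, error) {
	resp, err := http.Get(url)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()

	var total float64
	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		line := sc.Text()
		if !strings.HasPrefix(line, name) {
			continue // skip comments and other metrics
		}
		rest := line[len(name):]
		if rest != "" && rest[0] != '{' && rest[0] != ' ' {
			continue // avoid matching longer metric names with this prefix
		}
		fields := strings.Fields(line)
		v, err := strconv.ParseFloat(fields[len(fields)-1], 64)
		if err != nil {
			return 0, err
		}
		total += v
	}
	return total, sc.Err()
}

func main() {
	const (
		url      = "http://localhost:8080/metrics" // hypothetical purger HTTP port
		metric   = "cortex_purger_delete_requests_processed_total"
		expected = 1.0 // hypothetical expected total
	)
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		if got, err := sumCounter(url, metric); err == nil && got >= expected {
			fmt.Printf("%s reached %v\n", metric, got)
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Printf("timed out waiting for %s >= %v\n", metric, expected)
}
```

In the failing run above the counter apparently ended at 2 while the test expected a different value, so the interesting question is whether the purger processed the delete request twice (note the earlier "error removing delete plan ... no such file or directory" from the restarted purger) rather than whether the metric ever showed up.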
1.0
Flaky TestDeleteSeriesAllIndexBackends - **Describe the bug** The integration test `TestDeleteSeriesAllIndexBackends` looks flaky: ``` === RUN TestDeleteSeriesAllIndexBackends 17:26:51 Starting cassandra 17:26:52 Ports for container: e2e-cortex-test-cassandra Mapping: map[9042:32936] 17:26:52 Starting dynamodb 17:26:53 cassandra: CompilerOracle: dontinline org/apache/cassandra/db/Columns$Serializer.deserializeLargeSubset (Lorg/apache/cassandra/io/util/DataInputPlus;Lorg/apache/cassandra/db/Columns;I)Lorg/apache/cassandra/db/Columns; 17:26:53 cassandra: CompilerOracle: dontinline org/apache/cassandra/db/Columns$Serializer.serializeLargeSubset (Ljava/util/Collection;ILorg/apache/cassandra/db/Columns;ILorg/apache/cassandra/io/util/DataOutputPlus;)V 17:26:53 cassandra: CompilerOracle: dontinline org/apache/cassandra/db/Columns$Serializer.serializeLargeSubsetSize (Ljava/util/Collection;ILorg/apache/cassandra/db/Columns;I)I 17:26:53 cassandra: CompilerOracle: dontinline org/apache/cassandra/db/commitlog/AbstractCommitLogSegmentManager.advanceAllocatingFrom (Lorg/apache/cassandra/db/commitlog/CommitLogSegment;)V 17:26:53 cassandra: CompilerOracle: dontinline org/apache/cassandra/db/transform/BaseIterator.tryGetMoreContents ()Z 17:26:53 cassandra: CompilerOracle: dontinline org/apache/cassandra/db/transform/StoppingTransformation.stop ()V 17:26:53 cassandra: CompilerOracle: dontinline org/apache/cassandra/db/transform/StoppingTransformation.stopInPartition ()V 17:26:53 cassandra: CompilerOracle: dontinline org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.doFlush (I)V 17:26:53 cassandra: CompilerOracle: dontinline org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.writeExcessSlow ()V 17:26:53 cassandra: CompilerOracle: dontinline org/apache/cassandra/io/util/BufferedDataOutputStreamPlus.writeSlow (JI)V 17:26:53 cassandra: CompilerOracle: dontinline org/apache/cassandra/io/util/RebufferingInputStream.readPrimitiveSlowly (I)J 17:26:53 cassandra: CompilerOracle: inline org/apache/cassandra/db/rows/UnfilteredSerializer.serializeRowBody (Lorg/apache/cassandra/db/rows/Row;ILorg/apache/cassandra/db/SerializationHeader;Lorg/apache/cassandra/io/util/DataOutputPlus;)V 17:26:53 cassandra: CompilerOracle: inline org/apache/cassandra/io/util/Memory.checkBounds (JJ)V 17:26:53 cassandra: CompilerOracle: inline org/apache/cassandra/io/util/SafeMemory.checkBounds (JJ)V 17:26:53 cassandra: CompilerOracle: inline org/apache/cassandra/utils/AsymmetricOrdering.selectBoundary (Lorg/apache/cassandra/utils/AsymmetricOrdering/Op;II)I 17:26:53 cassandra: CompilerOracle: inline org/apache/cassandra/utils/AsymmetricOrdering.strictnessOfLessThan (Lorg/apache/cassandra/utils/AsymmetricOrdering/Op;)I 17:26:53 cassandra: CompilerOracle: inline org/apache/cassandra/utils/BloomFilter.indexes (Lorg/apache/cassandra/utils/IFilter/FilterKey;)[J 17:26:53 cassandra: CompilerOracle: inline org/apache/cassandra/utils/BloomFilter.setIndexes (JJIJ[J)V 17:26:53 cassandra: CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compare (Ljava/nio/ByteBuffer;[B)I 17:26:53 cassandra: CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compare ([BLjava/nio/ByteBuffer;)I 17:26:53 cassandra: CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compareUnsigned (Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)I 17:26:53 cassandra: CompilerOracle: inline org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo (Ljava/lang/Object;JILjava/lang/Object;JI)I 17:26:53 cassandra: CompilerOracle: 
inline org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo (Ljava/lang/Object;JILjava/nio/ByteBuffer;)I 17:26:53 cassandra: CompilerOracle: inline org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo (Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)I 17:26:53 cassandra: CompilerOracle: inline org/apache/cassandra/utils/vint/VIntCoding.encodeVInt (JI)[B 17:26:54 Ports for container: e2e-cortex-test-dynamodb Mapping: map[8000:32937] 17:26:54 Starting bigtable 17:26:54 dynamodb: Initializing DynamoDB Local with the following configuration: 17:26:54 dynamodb: Port: 8000 17:26:54 dynamodb: InMemory: true 17:26:54 dynamodb: DbPath: null 17:26:54 dynamodb: SharedDb: true 17:26:54 dynamodb: shouldDelayTransientStatuses: false 17:26:54 dynamodb: CorsParams: * 17:26:55 cassandra: INFO [main] 2020-10-28 17:26:55,542 YamlConfigurationLoader.java:89 - Configuration location: file:/etc/cassandra/cassandra.yaml 17:26:55 bigtable: Bigtable emulator running on [::]:9035 17:26:56 Ports for container: e2e-cortex-test-bigtable Mapping: map[9035:32938] 17:26:56 Starting consul 17:26:56 cassandra: INFO [main] 2020-10-28 17:26:56,870 Config.java:481 - Node configuration:[allocate_tokens_for_keyspace=null; authenticator=AllowAllAuthenticator; authorizer=AllowAllAuthorizer; auto_bootstrap=true; auto_snapshot=true; back_pressure_enabled=false; back_pressure_strategy=org.apache.cassandra.net.RateBasedBackPressure{high_ratio=0.9, factor=5, flow=FAST}; batch_size_fail_threshold_in_kb=50; batch_size_warn_threshold_in_kb=5; batchlog_replay_throttle_in_kb=1024; broadcast_address=192.168.48.2; broadcast_rpc_address=192.168.48.2; buffer_pool_use_heap_if_exhausted=true; cas_contention_timeout_in_ms=1000; cdc_enabled=false; cdc_free_space_check_interval_ms=250; cdc_raw_directory=null; cdc_total_space_in_mb=0; client_encryption_options=<REDACTED>; cluster_name=Test Cluster; column_index_cache_size_in_kb=2; column_index_size_in_kb=64; commit_failure_policy=stop; commitlog_compression=null; commitlog_directory=/var/lib/cassandra/commitlog; commitlog_max_compression_buffers_in_pool=3; commitlog_periodic_queue_size=-1; commitlog_segment_size_in_mb=32; commitlog_sync=periodic; commitlog_sync_batch_window_in_ms=NaN; commitlog_sync_period_in_ms=10000; commitlog_total_space_in_mb=null; compaction_large_partition_warning_threshold_mb=100; compaction_throughput_mb_per_sec=16; concurrent_compactors=null; concurrent_counter_writes=32; concurrent_materialized_view_writes=32; concurrent_reads=32; concurrent_replicates=null; concurrent_writes=32; counter_cache_keys_to_save=2147483647; counter_cache_save_period=7200; counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; credentials_cache_max_entries=1000; credentials_update_interval_in_ms=-1; credentials_validity_in_ms=2000; cross_node_timeout=false; data_file_directories=[Ljava.lang.String;@37e547da; disk_access_mode=auto; disk_failure_policy=stop; disk_optimization_estimate_percentile=0.95; disk_optimization_page_cross_chance=0.1; disk_optimization_strategy=ssd; dynamic_snitch=true; dynamic_snitch_badness_threshold=0.1; dynamic_snitch_reset_interval_in_ms=600000; dynamic_snitch_update_interval_in_ms=100; enable_scripted_user_defined_functions=false; enable_user_defined_functions=false; enable_user_defined_functions_threads=true; encryption_options=null; endpoint_snitch=SimpleSnitch; file_cache_size_in_mb=null; gc_log_threshold_in_ms=200; gc_warn_threshold_in_ms=1000; hinted_handoff_disabled_datacenters=[]; hinted_handoff_enabled=true; 
hinted_handoff_throttle_in_kb=1024; hints_compression=null; hints_directory=null; hints_flush_period_in_ms=10000; incremental_backups=false; index_interval=null; index_summary_capacity_in_mb=null; index_summary_resize_interval_in_minutes=60; initial_token=0; inter_dc_stream_throughput_outbound_megabits_per_sec=200; inter_dc_tcp_nodelay=false; internode_authenticator=null; internode_compression=dc; internode_recv_buff_size_in_bytes=0; internode_send_buff_size_in_bytes=0; key_cache_keys_to_save=2147483647; key_cache_save_period=14400; key_cache_size_in_mb=null; listen_address=192.168.48.2; listen_interface=null; listen_interface_prefer_ipv6=false; listen_on_broadcast_address=false; max_hint_window_in_ms=10800000; max_hints_delivery_threads=2; max_hints_file_size_in_mb=128; max_mutation_size_in_kb=null; max_streaming_retries=3; max_value_size_in_mb=256; memtable_allocation_type=heap_buffers; memtable_cleanup_threshold=null; memtable_flush_writers=0; memtable_heap_space_in_mb=null; memtable_offheap_space_in_mb=null; min_free_space_per_drive_in_mb=50; native_transport_max_concurrent_connections=-1; native_transport_max_concurrent_connections_per_ip=-1; native_transport_max_frame_size_in_mb=256; native_transport_max_threads=128; native_transport_port=9042; native_transport_port_ssl=null; num_tokens=1; otc_backlog_expiration_interval_ms=200; otc_coalescing_enough_coalesced_messages=8; otc_coalescing_strategy=DISABLED; otc_coalescing_window_us=200; partitioner=org.apache.cassandra.dht.Murmur3Partitioner; permissions_cache_max_entries=1000; permissions_update_interval_in_ms=-1; permissions_validity_in_ms=2000; phi_convict_threshold=8.0; prepared_statements_cache_size_mb=null; range_request_timeout_in_ms=10000; read_request_timeout_in_ms=5000; request_scheduler=org.apache.cassandra.scheduler.NoScheduler; request_scheduler_id=null; request_scheduler_options=null; request_timeout_in_ms=10000; role_manager=CassandraRoleManager; roles_cache_max_entries=1000; roles_update_interval_in_ms=-1; roles_validity_in_ms=2000; row_cache_class_name=org.apache.cassandra.cache.OHCProvider; row_cache_keys_to_save=2147483647; row_cache_save_period=0; row_cache_size_in_mb=0; rpc_address=0.0.0.0; rpc_interface=null; rpc_interface_prefer_ipv6=false; rpc_keepalive=true; rpc_listen_backlog=50; rpc_max_threads=2147483647; rpc_min_threads=16; rpc_port=9160; rpc_recv_buff_size_in_bytes=null; rpc_send_buff_size_in_bytes=null; rpc_server_type=sync; saved_caches_directory=/var/lib/cassandra/saved_caches; seed_provider=org.apache.cassandra.locator.SimpleSeedProvider{seeds=192.168.48.2}; server_encryption_options=<REDACTED>; slow_query_log_timeout_in_ms=500; snapshot_before_compaction=false; ssl_storage_port=7001; sstable_preemptive_open_interval_in_mb=50; start_native_transport=true; start_rpc=false; storage_port=7000; stream_throughput_outbound_megabits_per_sec=200; streaming_keep_alive_period_in_secs=300; streaming_socket_timeout_in_ms=86400000; thrift_framed_transport_size_in_mb=15; thrift_max_message_length_in_mb=16; thrift_prepared_statements_cache_size_mb=null; tombstone_failure_threshold=100000; tombstone_warn_threshold=1000; tracetype_query_ttl=86400; tracetype_repair_ttl=604800; transparent_data_encryption_options=org.apache.cassandra.config.TransparentDataEncryptionOptions@2b6856dd; trickle_fsync=false; trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=60000; unlogged_batch_across_partitions_warn_threshold=10; user_defined_function_fail_timeout=1500; user_defined_function_warn_timeout=500; 
user_function_timeout_policy=die; windows_timer_interval=1; write_request_timeout_in_ms=2000] 17:26:56 cassandra: INFO [main] 2020-10-28 17:26:56,871 DatabaseDescriptor.java:366 - DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap 17:26:56 cassandra: INFO [main] 2020-10-28 17:26:56,872 DatabaseDescriptor.java:420 - Global memtable on-heap threshold is enabled at 462MB 17:26:56 cassandra: INFO [main] 2020-10-28 17:26:56,872 DatabaseDescriptor.java:424 - Global memtable off-heap threshold is enabled at 462MB 17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,151 RateBasedBackPressure.java:123 - Initialized back-pressure with high ratio: 0.9, factor: 5, flow: FAST, window size: 2000. 17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,152 DatabaseDescriptor.java:710 - Back-pressure is disabled with strategy org.apache.cassandra.net.RateBasedBackPressure{high_ratio=0.9, factor=5, flow=FAST}. 17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,373 JMXServerUtils.java:249 - Configured JMX server at: service:jmx:rmi://127.0.0.1/jndi/rmi://127.0.0.1:7199/jmxrmi 17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,382 CassandraDaemon.java:471 - Hostname: cassandra 17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,382 CassandraDaemon.java:478 - JVM vendor/version: OpenJDK 64-Bit Server VM/1.8.0_131 17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,384 CassandraDaemon.java:479 - Heap size: 1.805GiB/1.805GiB 17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,385 CassandraDaemon.java:484 - Code Cache Non-heap memory: init = 2555904(2496K) used = 4501952(4396K) committed = 4521984(4416K) max = 251658240(245760K) 17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,386 CassandraDaemon.java:484 - Metaspace Non-heap memory: init = 0(0K) used = 17549904(17138K) committed = 18219008(17792K) max = -1(-1K) 17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,387 CassandraDaemon.java:484 - Compressed Class Space Non-heap memory: init = 0(0K) used = 2090424(2041K) committed = 2228224(2176K) max = 1073741824(1048576K) 17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,388 CassandraDaemon.java:484 - Par Eden Space Heap memory: init = 167772160(163840K) used = 94023544(91819K) committed = 167772160(163840K) max = 167772160(163840K) 17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,389 CassandraDaemon.java:484 - Par Survivor Space Heap memory: init = 20971520(20480K) used = 0(0K) committed = 20971520(20480K) max = 20971520(20480K) 17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,390 CassandraDaemon.java:484 - CMS Old Gen Heap memory: init = 1749024768(1708032K) used = 0(0K) committed = 1749024768(1708032K) max = 1749024768(1708032K) 17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,391 CassandraDaemon.java:486 - Classpath: 
/etc/cassandra:/usr/share/cassandra/lib/HdrHistogram-2.1.9.jar:/usr/share/cassandra/lib/ST4-4.0.8.jar:/usr/share/cassandra/lib/airline-0.6.jar:/usr/share/cassandra/lib/antlr-runtime-3.5.2.jar:/usr/share/cassandra/lib/asm-5.0.4.jar:/usr/share/cassandra/lib/caffeine-2.2.6.jar:/usr/share/cassandra/lib/cassandra-driver-core-3.0.1-shaded.jar:/usr/share/cassandra/lib/commons-cli-1.1.jar:/usr/share/cassandra/lib/commons-codec-1.9.jar:/usr/share/cassandra/lib/commons-lang3-3.1.jar:/usr/share/cassandra/lib/commons-math3-3.2.jar:/usr/share/cassandra/lib/compress-lzf-0.8.4.jar:/usr/share/cassandra/lib/concurrent-trees-2.4.0.jar:/usr/share/cassandra/lib/concurrentlinkedhashmap-lru-1.4.jar:/usr/share/cassandra/lib/disruptor-3.0.1.jar:/usr/share/cassandra/lib/ecj-4.4.2.jar:/usr/share/cassandra/lib/guava-18.0.jar:/usr/share/cassandra/lib/high-scale-lib-1.0.6.jar:/usr/share/cassandra/lib/hppc-0.5.4.jar:/usr/share/cassandra/lib/jackson-core-asl-1.9.2.jar:/usr/share/cassandra/lib/jackson-mapper-asl-1.9.2.jar:/usr/share/cassandra/lib/jamm-0.3.0.jar:/usr/share/cassandra/lib/javax.inject.jar:/usr/share/cassandra/lib/jbcrypt-0.3m.jar:/usr/share/cassandra/lib/jcl-over-slf4j-1.7.7.jar:/usr/share/cassandra/lib/jctools-core-1.2.1.jar:/usr/share/cassandra/lib/jflex-1.6.0.jar:/usr/share/cassandra/lib/jna-4.4.0.jar:/usr/share/cassandra/lib/joda-time-2.4.jar:/usr/share/cassandra/lib/json-simple-1.1.jar:/usr/share/cassandra/lib/jstackjunit-0.0.1.jar:/usr/share/cassandra/lib/libthrift-0.9.2.jar:/usr/share/cassandra/lib/log4j-over-slf4j-1.7.7.jar:/usr/share/cassandra/lib/logback-classic-1.1.3.jar:/usr/share/cassandra/lib/logback-core-1.1.3.jar:/usr/share/cassandra/lib/lz4-1.3.0.jar:/usr/share/cassandra/lib/metrics-core-3.1.0.jar:/usr/share/cassandra/lib/metrics-jvm-3.1.0.jar:/usr/share/cassandra/lib/metrics-logback-3.1.0.jar:/usr/share/cassandra/lib/netty-all-4.0.44.Final.jar:/usr/share/cassandra/lib/ohc-core-0.4.4.jar:/usr/share/cassandra/lib/ohc-core-j8-0.4.4.jar:/usr/share/cassandra/lib/reporter-config-base-3.0.3.jar:/usr/share/cassandra/lib/reporter-config3-3.0.3.jar:/usr/share/cassandra/lib/sigar-1.6.4.jar:/usr/share/cassandra/lib/slf4j-api-1.7.7.jar:/usr/share/cassandra/lib/snakeyaml-1.11.jar:/usr/share/cassandra/lib/snappy-java-1.1.1.7.jar:/usr/share/cassandra/lib/snowball-stemmer-1.3.0.581.1.jar:/usr/share/cassandra/lib/stream-2.5.2.jar:/usr/share/cassandra/lib/thrift-server-0.3.7.jar:/usr/share/cassandra/apache-cassandra-3.11.0.jar:/usr/share/cassandra/apache-cassandra-thrift-3.11.0.jar:/usr/share/cassandra/apache-cassandra.jar:/usr/share/cassandra/stress.jar::/usr/share/cassandra/lib/jamm-0.3.0.jar 17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,392 CassandraDaemon.java:488 - JVM Arguments: [-Xloggc:/var/log/cassandra/gc.log, -ea, -XX:+UseThreadPriorities, -XX:ThreadPriorityPolicy=42, -XX:+HeapDumpOnOutOfMemoryError, -Xss256k, -XX:StringTableSize=1000003, -XX:+AlwaysPreTouch, -XX:-UseBiasedLocking, -XX:+UseTLAB, -XX:+ResizeTLAB, -XX:+UseNUMA, -XX:+PerfDisableSharedMem, -Djava.net.preferIPv4Stack=true, -XX:+UseParNewGC, -XX:+UseConcMarkSweepGC, -XX:+CMSParallelRemarkEnabled, -XX:SurvivorRatio=8, -XX:MaxTenuringThreshold=1, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:CMSWaitDuration=10000, -XX:+CMSParallelInitialMarkEnabled, -XX:+CMSEdenChunksRecordAlways, -XX:+CMSClassUnloadingEnabled, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintHeapAtGC, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -XX:+PrintPromotionFailure, -XX:+UseGCLogFileRotation, 
-XX:NumberOfGCLogFiles=10, -XX:GCLogFileSize=10M, -Xms1868M, -Xmx1868M, -Xmn200M, -XX:CompileCommandFile=/etc/cassandra/hotspot_compiler, -javaagent:/usr/share/cassandra/lib/jamm-0.3.0.jar, -Dcassandra.jmx.local.port=7199, -Dcom.sun.management.jmxremote.authenticate=false, -Dcom.sun.management.jmxremote.password.file=/etc/cassandra/jmxremote.password, -Djava.library.path=/usr/share/cassandra/lib/sigar-bin, -Dcassandra.initial_token=0, -Dcassandra.skip_wait_for_gossip_to_settle=0, -Dcassandra.libjemalloc=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1, -Dlogback.configurationFile=logback.xml, -Dcassandra.logdir=/var/log/cassandra, -Dcassandra.storagedir=/var/lib/cassandra, -Dcassandra-foreground=yes] 17:26:57 cassandra: WARN [main] 2020-10-28 17:26:57,578 NativeLibrary.java:187 - Unable to lock JVM memory (ENOMEM). This can result in part of the JVM being swapped out, especially with mmapped I/O enabled. Increase RLIMIT_MEMLOCK or run Cassandra as root. 17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,579 StartupChecks.java:131 - jemalloc seems to be preloaded from /usr/lib/x86_64-linux-gnu/libjemalloc.so.1 17:26:57 cassandra: WARN [main] 2020-10-28 17:26:57,580 StartupChecks.java:160 - JMX is not enabled to receive remote connections. Please see cassandra-env.sh for more info. 17:26:57 cassandra: WARN [main] 2020-10-28 17:26:57,580 StartupChecks.java:197 - OpenJDK is not recommended. Please upgrade to the newest Oracle Java release 17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,582 SigarLibrary.java:44 - Initializing SIGAR library 17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,620 SigarLibrary.java:180 - Checked OS settings and found them configured for optimal performance. 17:26:57 cassandra: WARN [main] 2020-10-28 17:26:57,632 StartupChecks.java:265 - Maximum number of memory map areas per process (vm.max_map_count) 65530 is too low, recommended value: 1048575, you can change it with sysctl. 17:26:57 cassandra: WARN [main] 2020-10-28 17:26:57,687 StartupChecks.java:286 - Directory /var/lib/cassandra/data doesn't exist 17:26:57 cassandra: WARN [main] 2020-10-28 17:26:57,701 StartupChecks.java:286 - Directory /var/lib/cassandra/commitlog doesn't exist 17:26:57 cassandra: WARN [main] 2020-10-28 17:26:57,702 StartupChecks.java:286 - Directory /var/lib/cassandra/saved_caches doesn't exist 17:26:57 cassandra: WARN [main] 2020-10-28 17:26:57,709 StartupChecks.java:286 - Directory /var/lib/cassandra/hints doesn't exist 17:26:57 Ports for container: e2e-cortex-test-consul Mapping: map[8500:32939] 17:26:57 consul: ==> Starting Consul agent... 17:26:57 consul: Version: '1.8.4' 17:26:57 consul: Node ID: 'd7e32fc8-ba7c-3343-1de9-6a94a903d1fd' 17:26:57 consul: Node name: 'consul' 17:26:57 consul: Datacenter: 'dc1' (Segment: '<all>') 17:26:57 consul: Server: true (Bootstrap: false) 17:26:57 consul: Client Addr: [0.0.0.0] (HTTP: 8500, HTTPS: -1, gRPC: 8502, DNS: 8600) 17:26:57 consul: Cluster Addr: 127.0.0.1 (LAN: 8301, WAN: 8302) 17:26:57 consul: Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false, Auto-Encrypt-TLS: false 17:26:57 consul: ==> Log data will now stream in as it occurs: 17:26:57 consul: ==> Consul agent running! 
17:26:57 cassandra: INFO [main] 2020-10-28 17:26:57,880 QueryProcessor.java:115 - Initialized prepared statement caches with 10 MB (native) and 10 MB (Thrift) 17:26:58 cassandra: INFO [main] 2020-10-28 17:26:58,875 ColumnFamilyStore.java:406 - Initializing system.IndexInfo 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,085 ColumnFamilyStore.java:406 - Initializing system.batches 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,092 ColumnFamilyStore.java:406 - Initializing system.paxos 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,105 ColumnFamilyStore.java:406 - Initializing system.local 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,121 ColumnFamilyStore.java:406 - Initializing system.peers 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,138 ColumnFamilyStore.java:406 - Initializing system.peer_events 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,152 ColumnFamilyStore.java:406 - Initializing system.range_xfers 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,162 ColumnFamilyStore.java:406 - Initializing system.compaction_history 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,174 ColumnFamilyStore.java:406 - Initializing system.sstable_activity 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,187 ColumnFamilyStore.java:406 - Initializing system.size_estimates 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,197 ColumnFamilyStore.java:406 - Initializing system.available_ranges 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,211 ColumnFamilyStore.java:406 - Initializing system.transferred_ranges 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,223 ColumnFamilyStore.java:406 - Initializing system.views_builds_in_progress 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,233 ColumnFamilyStore.java:406 - Initializing system.built_views 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,243 ColumnFamilyStore.java:406 - Initializing system.hints 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,252 ColumnFamilyStore.java:406 - Initializing system.batchlog 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,259 ColumnFamilyStore.java:406 - Initializing system.prepared_statements 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,265 ColumnFamilyStore.java:406 - Initializing system.schema_keyspaces 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,273 ColumnFamilyStore.java:406 - Initializing system.schema_columnfamilies 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,281 ColumnFamilyStore.java:406 - Initializing system.schema_columns 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,290 ColumnFamilyStore.java:406 - Initializing system.schema_triggers 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,299 ColumnFamilyStore.java:406 - Initializing system.schema_usertypes 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,308 ColumnFamilyStore.java:406 - Initializing system.schema_functions 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,320 ColumnFamilyStore.java:406 - Initializing system.schema_aggregates 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,322 ViewManager.java:137 - Not submitting build tasks for views in keyspace system as storage service is not initialized 17:27:01 cassandra: INFO [main] 2020-10-28 17:27:01,431 ApproximateTime.java:44 - Scheduling approximate time-check task with a precision of 10 milliseconds 17:27:02 cassandra: INFO [MemtableFlushWriter:1] 2020-10-28 17:27:02,344 CacheService.java:112 - Initializing key cache with capacity of 92 MBs. 
17:27:02 cassandra: INFO [MemtableFlushWriter:1] 2020-10-28 17:27:02,356 CacheService.java:134 - Initializing row cache with capacity of 0 MBs 17:27:02 cassandra: INFO [MemtableFlushWriter:1] 2020-10-28 17:27:02,359 CacheService.java:163 - Initializing counter cache with capacity of 46 MBs 17:27:02 cassandra: INFO [MemtableFlushWriter:1] 2020-10-28 17:27:02,361 CacheService.java:174 - Scheduling counter cache save to every 7200 seconds (going to save all keys). 17:27:02 cassandra: INFO [main] 2020-10-28 17:27:02,716 StorageService.java:599 - Populating token metadata from system tables 17:27:02 cassandra: INFO [main] 2020-10-28 17:27:02,771 BufferPool.java:230 - Global buffer pool is enabled, when pool is exhausted (max is 462.000MiB) it will allocate on heap 17:27:02 cassandra: INFO [main] 2020-10-28 17:27:02,952 StorageService.java:606 - Token metadata: 17:27:02 cassandra: INFO [main] 2020-10-28 17:27:02,984 ColumnFamilyStore.java:406 - Initializing system_schema.keyspaces 17:27:02 cassandra: INFO [main] 2020-10-28 17:27:02,994 ColumnFamilyStore.java:406 - Initializing system_schema.tables 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,010 ColumnFamilyStore.java:406 - Initializing system_schema.columns 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,033 ColumnFamilyStore.java:406 - Initializing system_schema.triggers 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,052 ColumnFamilyStore.java:406 - Initializing system_schema.dropped_columns 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,059 ColumnFamilyStore.java:406 - Initializing system_schema.views 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,067 ColumnFamilyStore.java:406 - Initializing system_schema.types 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,077 ColumnFamilyStore.java:406 - Initializing system_schema.functions 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,094 ColumnFamilyStore.java:406 - Initializing system_schema.aggregates 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,108 ColumnFamilyStore.java:406 - Initializing system_schema.indexes 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,115 ViewManager.java:137 - Not submitting build tasks for views in keyspace system_schema as storage service is not initialized 17:27:03 cassandra: INFO [pool-3-thread-1] 2020-10-28 17:27:03,180 AutoSavingCache.java:173 - Completed loading (13 ms; 4 keys) KeyCache cache 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,238 CommitLog.java:152 - No commitlog files found; skipping replay 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,238 StorageService.java:599 - Populating token metadata from system tables 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,292 StorageService.java:606 - Token metadata: 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,567 QueryProcessor.java:162 - Preloaded 0 prepared statements 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,569 StorageService.java:617 - Cassandra version: 3.11.0 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,569 StorageService.java:618 - Thrift API version: 20.1.0 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,570 StorageService.java:619 - CQL supported versions: 3.4.4 (default: 3.4.4) 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,570 StorageService.java:621 - Native protocol supported versions: 3/v3, 4/v4, 5/v5-beta (default: 4/v4) 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,650 IndexSummaryManager.java:85 - Initializing index summary manager with a memory pool size of 92 MB and a resize interval of 60 minutes 
17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,679 MessagingService.java:753 - Starting Messaging Service on /192.168.48.2:7000 (eth0) 17:27:03 cassandra: WARN [main] 2020-10-28 17:27:03,699 SystemKeyspace.java:1083 - No host ID found, created 809ce138-4196-424c-bdf8-b6b79111fe01 (Note: This should happen exactly once per node). 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,792 StorageService.java:706 - Loading persisted ring state 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,816 StorageService.java:819 - Starting up server gossip 17:27:03 cassandra: INFO [main] 2020-10-28 17:27:03,975 StorageService.java:857 - This node will not auto bootstrap because it is configured to be a seed node. 17:27:04 cassandra: INFO [main] 2020-10-28 17:27:04,015 StorageService.java:984 - Saved tokens not found. Using configuration value: [0] 17:27:04 cassandra: INFO [main] 2020-10-28 17:27:04,040 MigrationManager.java:310 - Create new Keyspace: KeyspaceMetadata{name=system_traces, params=KeyspaceParams{durable_writes=true, replication=ReplicationParams{class=org.apache.cassandra.locator.SimpleStrategy, replication_factor=2}}, tables=[org.apache.cassandra.config.CFMetaData@e3b3b75[cfId=c5e99f16-8677-3914-b17e-960613512345,ksName=system_traces,cfName=sessions,flags=[COMPOUND],params=TableParams{comment=tracing sessions, read_repair_chance=0.0, dclocal_read_repair_chance=0.0, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=0, default_time_to_live=0, memtable_flush_period_in_ms=3600000, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@9a6cfe3, extensions={}, cdc=false},comparator=comparator(),partitionColumns=[[] | [client command coordinator duration request started_at parameters]],partitionKeyColumns=[session_id],clusteringColumns=[],keyValidator=org.apache.cassandra.db.marshal.UUIDType,columnMetadata=[client, command, session_id, coordinator, request, started_at, duration, parameters],droppedColumns={},triggers=[],indexes=[]], org.apache.cassandra.config.CFMetaData@54f732c4[cfId=8826e8e9-e16a-3728-8753-3bc1fc713c25,ksName=system_traces,cfName=events,flags=[COMPOUND],params=TableParams{comment=tracing events, read_repair_chance=0.0, dclocal_read_repair_chance=0.0, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=0, default_time_to_live=0, memtable_flush_period_in_ms=3600000, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@9a6cfe3, extensions={}, cdc=false},comparator=comparator(org.apache.cassandra.db.marshal.TimeUUIDType),partitionColumns=[[] | [activity source source_elapsed thread]],partitionKeyColumns=[session_id],clusteringColumns=[event_id],keyValidator=org.apache.cassandra.db.marshal.UUIDType,columnMetadata=[activity, event_id, session_id, source, thread, source_elapsed],droppedColumns={},triggers=[],indexes=[]]], views=[], functions=[], types=[]} 17:27:04 cassandra: INFO [MigrationStage:1] 2020-10-28 17:27:04,597 ViewManager.java:137 - Not submitting build tasks for views 
in keyspace system_traces as storage service is not initialized 17:27:04 cassandra: INFO [MigrationStage:1] 2020-10-28 17:27:04,658 ColumnFamilyStore.java:406 - Initializing system_traces.events 17:27:04 cassandra: INFO [MigrationStage:1] 2020-10-28 17:27:04,701 ColumnFamilyStore.java:406 - Initializing system_traces.sessions 17:27:04 cassandra: INFO [main] 2020-10-28 17:27:04,790 MigrationManager.java:310 - Create new Keyspace: KeyspaceMetadata{name=system_distributed, params=KeyspaceParams{durable_writes=true, replication=ReplicationParams{class=org.apache.cassandra.locator.SimpleStrategy, replication_factor=3}}, tables=[org.apache.cassandra.config.CFMetaData@62103d37[cfId=759fffad-624b-3181-80ee-fa9a52d1f627,ksName=system_distributed,cfName=repair_history,flags=[COMPOUND],params=TableParams{comment=Repair history, read_repair_chance=0.0, dclocal_read_repair_chance=0.0, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=864000, default_time_to_live=0, memtable_flush_period_in_ms=3600000, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@9a6cfe3, extensions={}, cdc=false},comparator=comparator(org.apache.cassandra.db.marshal.TimeUUIDType),partitionColumns=[[] | [coordinator exception_message exception_stacktrace finished_at parent_id range_begin range_end started_at status participants]],partitionKeyColumns=[keyspace_name, columnfamily_name],clusteringColumns=[id],keyValidator=org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type),columnMetadata=[status, id, coordinator, finished_at, participants, exception_stacktrace, parent_id, range_end, range_begin, exception_message, keyspace_name, started_at, columnfamily_name],droppedColumns={},triggers=[],indexes=[]], org.apache.cassandra.config.CFMetaData@53493c07[cfId=deabd734-b99d-3b9c-92e5-fd92eb5abf14,ksName=system_distributed,cfName=parent_repair_history,flags=[COMPOUND],params=TableParams{comment=Repair history, read_repair_chance=0.0, dclocal_read_repair_chance=0.0, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=864000, default_time_to_live=0, memtable_flush_period_in_ms=3600000, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@9a6cfe3, extensions={}, cdc=false},comparator=comparator(),partitionColumns=[[] | [exception_message exception_stacktrace finished_at keyspace_name started_at columnfamily_names options requested_ranges successful_ranges]],partitionKeyColumns=[parent_id],clusteringColumns=[],keyValidator=org.apache.cassandra.db.marshal.TimeUUIDType,columnMetadata=[requested_ranges, exception_message, keyspace_name, successful_ranges, started_at, finished_at, options, exception_stacktrace, parent_id, columnfamily_names],droppedColumns={},triggers=[],indexes=[]], 
org.apache.cassandra.config.CFMetaData@550fa2cc[cfId=5582b59f-8e4e-35e1-b913-3acada51eb04,ksName=system_distributed,cfName=view_build_status,flags=[COMPOUND],params=TableParams{comment=Materialized View build status, read_repair_chance=0.0, dclocal_read_repair_chance=0.0, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=864000, default_time_to_live=0, memtable_flush_period_in_ms=3600000, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@9a6cfe3, extensions={}, cdc=false},comparator=comparator(org.apache.cassandra.db.marshal.UUIDType),partitionColumns=[[] | [status]],partitionKeyColumns=[keyspace_name, view_name],clusteringColumns=[host_id],keyValidator=org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type),columnMetadata=[view_name, status, keyspace_name, host_id],droppedColumns={},triggers=[],indexes=[]]], views=[], functions=[], types=[]} 17:27:04 cassandra: INFO [MigrationStage:1] 2020-10-28 17:27:04,974 ViewManager.java:137 - Not submitting build tasks for views in keyspace system_distributed as storage service is not initialized 17:27:04 cassandra: INFO [MigrationStage:1] 2020-10-28 17:27:04,982 ColumnFamilyStore.java:406 - Initializing system_distributed.parent_repair_history 17:27:04 cassandra: INFO [MigrationStage:1] 2020-10-28 17:27:04,990 ColumnFamilyStore.java:406 - Initializing system_distributed.repair_history 17:27:04 cassandra: INFO [MigrationStage:1] 2020-10-28 17:27:04,998 ColumnFamilyStore.java:406 - Initializing system_distributed.view_build_status 17:27:05 cassandra: INFO [main] 2020-10-28 17:27:05,051 StorageService.java:1439 - JOINING: Finish joining ring 17:27:05 cassandra: INFO [main] 2020-10-28 17:27:05,191 MigrationManager.java:310 - Create new Keyspace: KeyspaceMetadata{name=system_auth, params=KeyspaceParams{durable_writes=true, replication=ReplicationParams{class=org.apache.cassandra.locator.SimpleStrategy, replication_factor=1}}, tables=[org.apache.cassandra.config.CFMetaData@50a18b03[cfId=5bc52802-de25-35ed-aeab-188eecebb090,ksName=system_auth,cfName=roles,flags=[COMPOUND],params=TableParams{comment=role definitions, read_repair_chance=0.0, dclocal_read_repair_chance=0.0, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=7776000, default_time_to_live=0, memtable_flush_period_in_ms=3600000, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@9a6cfe3, extensions={}, cdc=false},comparator=comparator(),partitionColumns=[[] | [can_login is_superuser salted_hash member_of]],partitionKeyColumns=[role],clusteringColumns=[],keyValidator=org.apache.cassandra.db.marshal.UTF8Type,columnMetadata=[salted_hash, member_of, role, can_login, is_superuser],droppedColumns={},triggers=[],indexes=[]], org.apache.cassandra.config.CFMetaData@4c3a5a64[cfId=0ecdaa87-f8fb-3e60-88d1-74fb36fe5c0d,ksName=system_auth,cfName=role_members,flags=[COMPOUND],params=TableParams{comment=role memberships lookup table, 
read_repair_chance=0.0, dclocal_read_repair_chance=0.0, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=7776000, default_time_to_live=0, memtable_flush_period_in_ms=3600000, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@9a6cfe3, extensions={}, cdc=false},comparator=comparator(org.apache.cassandra.db.marshal.UTF8Type),partitionColumns=[[] | []],partitionKeyColumns=[role],clusteringColumns=[member],keyValidator=org.apache.cassandra.db.marshal.UTF8Type,columnMetadata=[role, member],droppedColumns={},triggers=[],indexes=[]], org.apache.cassandra.config.CFMetaData@721639c8[cfId=3afbe79f-2194-31a7-add7-f5ab90d8ec9c,ksName=system_auth,cfName=role_permissions,flags=[COMPOUND],params=TableParams{comment=permissions granted to db roles, read_repair_chance=0.0, dclocal_read_repair_chance=0.0, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=7776000, default_time_to_live=0, memtable_flush_period_in_ms=3600000, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@9a6cfe3, extensions={}, cdc=false},comparator=comparator(org.apache.cassandra.db.marshal.UTF8Type),partitionColumns=[[] | [permissions]],partitionKeyColumns=[role],clusteringColumns=[resource],keyValidator=org.apache.cassandra.db.marshal.UTF8Type,columnMetadata=[role, resource, permissions],droppedColumns={},triggers=[],indexes=[]], org.apache.cassandra.config.CFMetaData@22f0a26c[cfId=5f2fbdad-91f1-3946-bd25-d5da3a5c35ec,ksName=system_auth,cfName=resource_role_permissons_index,flags=[COMPOUND],params=TableParams{comment=index of db roles with permissions granted on a resource, read_repair_chance=0.0, dclocal_read_repair_chance=0.0, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=7776000, default_time_to_live=0, memtable_flush_period_in_ms=3600000, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@9a6cfe3, extensions={}, cdc=false},comparator=comparator(org.apache.cassandra.db.marshal.UTF8Type),partitionColumns=[[] | []],partitionKeyColumns=[resource],clusteringColumns=[role],keyValidator=org.apache.cassandra.db.marshal.UTF8Type,columnMetadata=[resource, role],droppedColumns={},triggers=[],indexes=[]]], views=[], functions=[], types=[]} 17:27:05 cassandra: INFO [MigrationStage:1] 2020-10-28 17:27:05,320 ViewManager.java:137 - Not submitting build tasks for views in keyspace system_auth as storage service is not initialized 17:27:05 cassandra: INFO [MigrationStage:1] 2020-10-28 17:27:05,325 ColumnFamilyStore.java:406 - Initializing system_auth.resource_role_permissons_index 17:27:05 cassandra: INFO [MigrationStage:1] 2020-10-28 17:27:05,330 ColumnFamilyStore.java:406 - Initializing system_auth.role_members 17:27:05 cassandra: INFO 
[MigrationStage:1] 2020-10-28 17:27:05,336 ColumnFamilyStore.java:406 - Initializing system_auth.role_permissions 17:27:05 cassandra: INFO [MigrationStage:1] 2020-10-28 17:27:05,341 ColumnFamilyStore.java:406 - Initializing system_auth.roles 17:27:05 cassandra: INFO [main] 2020-10-28 17:27:05,481 NativeTransportService.java:70 - Netty using native Epoll event loop 17:27:05 cassandra: INFO [main] 2020-10-28 17:27:05,549 Server.java:155 - Using Netty Version: [netty-buffer=netty-buffer-4.0.44.Final.452812a, netty-codec=netty-codec-4.0.44.Final.452812a, netty-codec-haproxy=netty-codec-haproxy-4.0.44.Final.452812a, netty-codec-http=netty-codec-http-4.0.44.Final.452812a, netty-codec-socks=netty-codec-socks-4.0.44.Final.452812a, netty-common=netty-common-4.0.44.Final.452812a, netty-handler=netty-handler-4.0.44.Final.452812a, netty-tcnative=netty-tcnative-1.1.33.Fork26.142ecbb, netty-transport=netty-transport-4.0.44.Final.452812a, netty-transport-native-epoll=netty-transport-native-epoll-4.0.44.Final.452812a, netty-transport-rxtx=netty-transport-rxtx-4.0.44.Final.452812a, netty-transport-sctp=netty-transport-sctp-4.0.44.Final.452812a, netty-transport-udt=netty-transport-udt-4.0.44.Final.452812a] 17:27:05 cassandra: INFO [main] 2020-10-28 17:27:05,550 Server.java:156 - Starting listening for CQL clients on /0.0.0.0:9042 (unencrypted)... 17:27:05 cassandra: INFO [main] 2020-10-28 17:27:05,607 CassandraDaemon.java:527 - Not starting RPC server as requested. Use JMX (StorageService->startRPCServer()) or nodetool (enablethrift) to start it 17:27:07 Starting table-manager 17:27:08 Ports for container: e2e-cortex-test-table-manager Mapping: map[80:32941 9095:32940] 17:27:12 Stopping table-manager 17:27:13 Starting table-manager 17:27:14 Ports for container: e2e-cortex-test-table-manager Mapping: map[80:32943 9095:32942] 17:27:14 Stopping table-manager 17:27:14 Starting table-manager 17:27:15 cassandra: INFO [OptionalTasks:1] 2020-10-28 17:27:15,635 CassandraRoleManager.java:355 - Created default superuser role 'cassandra' 17:27:15 Ports for container: e2e-cortex-test-table-manager Mapping: map[80:32945 9095:32944] 17:27:15 table-manager: level=error ts=2020-10-28T17:27:15.884067436Z caller=connectionpool.go:523 module=gocql client=table-manager msg="failed to connect" address=192.168.48.2:9042 error="Keyspace 'tests' does not exist" 17:27:16 table-manager: level=error ts=2020-10-28T17:27:16.011569008Z caller=connectionpool.go:523 module=gocql client=table-manager msg="failed to connect" address=192.168.48.2:9042 error="Keyspace 'tests' does not exist" 17:27:16 cassandra: INFO [Native-Transport-Requests-3] 2020-10-28 17:27:16,118 MigrationManager.java:310 - Create new Keyspace: KeyspaceMetadata{name=tests, params=KeyspaceParams{durable_writes=true, replication=ReplicationParams{class=org.apache.cassandra.locator.SimpleStrategy, replication_factor=1}}, tables=[], views=[], functions=[], types=[]} 17:27:16 cassandra: INFO [Native-Transport-Requests-2] 2020-10-28 17:27:16,361 MigrationManager.java:355 - Create new table: org.apache.cassandra.config.CFMetaData@24d9f512[cfId=d3024790-1942-11eb-a22c-7f6011a90b8b,ksName=tests,cfName=cortex_2650,flags=[COMPOUND],params=TableParams{comment=, read_repair_chance=0.0, dclocal_read_repair_chance=0.1, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=864000, default_time_to_live=0, memtable_flush_period_in_ms=0, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, 
compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@9a6cfe3, extensions={}, cdc=false},comparator=comparator(org.apache.cassandra.db.marshal.BytesType),partitionColumns=[[] | [value]],partitionKeyColumns=[hash],clusteringColumns=[range],keyValidator=org.apache.cassandra.db.marshal.UTF8Type,columnMetadata=[hash, range, value],droppedColumns={},triggers=[],indexes=[]] 17:27:16 cassandra: INFO [MigrationStage:1] 2020-10-28 17:27:16,512 ColumnFamilyStore.java:406 - Initializing tests.cortex_2650 17:27:16 cassandra: INFO [Native-Transport-Requests-2] 2020-10-28 17:27:16,560 MigrationManager.java:355 - Create new table: org.apache.cassandra.config.CFMetaData@533764ba[cfId=d320a500-1942-11eb-a22c-7f6011a90b8b,ksName=tests,cfName=cortex_2651,flags=[COMPOUND],params=TableParams{comment=, read_repair_chance=0.0, dclocal_read_repair_chance=0.1, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=864000, default_time_to_live=0, memtable_flush_period_in_ms=0, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@9a6cfe3, extensions={}, cdc=false},comparator=comparator(org.apache.cassandra.db.marshal.BytesType),partitionColumns=[[] | [value]],partitionKeyColumns=[hash],clusteringColumns=[range],keyValidator=org.apache.cassandra.db.marshal.UTF8Type,columnMetadata=[hash, range, value],droppedColumns={},triggers=[],indexes=[]] 17:27:16 cassandra: INFO [MigrationStage:1] 2020-10-28 17:27:16,698 ColumnFamilyStore.java:406 - Initializing tests.cortex_2651 17:27:16 cassandra: INFO [Native-Transport-Requests-3] 2020-10-28 17:27:16,725 MigrationManager.java:355 - Create new table: org.apache.cassandra.config.CFMetaData@2884a214[cfId=d339ab40-1942-11eb-a22c-7f6011a90b8b,ksName=tests,cfName=cortex_chunks_2650,flags=[COMPOUND],params=TableParams{comment=, read_repair_chance=0.0, dclocal_read_repair_chance=0.1, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=864000, default_time_to_live=0, memtable_flush_period_in_ms=0, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@9a6cfe3, extensions={}, cdc=false},comparator=comparator(org.apache.cassandra.db.marshal.BytesType),partitionColumns=[[] | [value]],partitionKeyColumns=[hash],clusteringColumns=[range],keyValidator=org.apache.cassandra.db.marshal.UTF8Type,columnMetadata=[hash, range, value],droppedColumns={},triggers=[],indexes=[]] 17:27:16 cassandra: INFO [MigrationStage:1] 2020-10-28 17:27:16,862 ColumnFamilyStore.java:406 - Initializing tests.cortex_chunks_2650 17:27:16 cassandra: INFO [Native-Transport-Requests-2] 2020-10-28 17:27:16,907 MigrationManager.java:355 - Create new table: org.apache.cassandra.config.CFMetaData@71bd6bba[cfId=d35597b0-1942-11eb-a22c-7f6011a90b8b,ksName=tests,cfName=cortex_chunks_2651,flags=[COMPOUND],params=TableParams{comment=, read_repair_chance=0.0, dclocal_read_repair_chance=0.1, 
bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=864000, default_time_to_live=0, memtable_flush_period_in_ms=0, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@9a6cfe3, extensions={}, cdc=false},comparator=comparator(org.apache.cassandra.db.marshal.BytesType),partitionColumns=[[] | [value]],partitionKeyColumns=[hash],clusteringColumns=[range],keyValidator=org.apache.cassandra.db.marshal.UTF8Type,columnMetadata=[hash, range, value],droppedColumns={},triggers=[],indexes=[]] 17:27:17 cassandra: INFO [MigrationStage:1] 2020-10-28 17:27:17,078 ColumnFamilyStore.java:406 - Initializing tests.cortex_chunks_2651 17:27:17 Stopping table-manager 17:27:17 Starting distributor 17:27:18 Ports for container: e2e-cortex-test-distributor Mapping: map[80:32947 9095:32946] 17:27:18 Starting ingester 17:27:19 Ports for container: e2e-cortex-test-ingester Mapping: map[80:32949 9095:32948] 17:27:19 Starting querier 17:27:21 Ports for container: e2e-cortex-test-querier Mapping: map[80:32951 9095:32950] 17:27:31 Starting purger 17:27:32 Ports for container: e2e-cortex-test-purger Mapping: map[80:32953 9095:32952] 17:27:32 purger: level=warn ts=2020-10-28T17:27:32.152361349Z caller=experimental.go:19 msg="experimental feature in use" feature="Delete series API" 17:27:32 Stopping purger 17:27:33 Starting purger 17:27:34 Ports for container: e2e-cortex-test-purger Mapping: map[80:32955 9095:32954] 17:27:34 purger: level=warn ts=2020-10-28T17:27:34.232641854Z caller=experimental.go:19 msg="experimental feature in use" feature="Delete series API" 17:27:34 purger: level=error ts=2020-10-28T17:27:34.82559081Z caller=purger.go:240 user_id=user-1 request_id=70043089 msg="error removing delete plan" plan_no=1 err="open /shared/user-1:70043089: no such file or directory" chunks_delete_series_test.go:182: Error Trace: chunks_delete_series_test.go:182 Error: Received unexpected error: unable to find metrics [cortex_purger_delete_requests_processed_total] with expected values. Last error: <nil>. 
Last values: [2] Test: TestDeleteSeriesAllIndexBackends 17:27:57 Killing purger 17:27:57 Killing querier 17:27:57 querier: level=error ts=2020-10-28T17:27:57.74970605Z caller=client.go:229 msg="error getting path" key=collectors/ring err="Get \"http://e2e-cortex-test-consul:8500/v1/kv/collectors/ring?index=23&stale=&wait=10000ms\": context canceled" 17:27:58 Killing ingester 17:27:58 ingester: level=warn ts=2020-10-28T17:27:58.162027377Z caller=transfer.go:294 msg="transfer attempt failed" err="cannot find ingester to transfer chunks to: no pending ingesters" attempt=1 max_retries=10 17:27:58 Killing distributor 17:27:58 distributor: level=error ts=2020-10-28T17:27:58.598556915Z caller=client.go:229 msg="error getting path" key=collectors/ring err="Get \"http://e2e-cortex-test-consul:8500/v1/kv/collectors/ring?index=25&stale=&wait=10000ms\": context canceled" 17:27:58 Killing consul 17:27:58 consul: 2020-10-28T17:27:58.983Z [ERROR] agent.server: error performing anti-entropy sync of federation state: error="context canceled" 17:27:59 Killing bigtable 17:27:59 bigtable: done 17:27:59 Killing dynamodb 17:28:00 Killing cassandra 17:28:00 cassandra: INFO [StorageServiceShutdownHook] 2020-10-28 17:28:00,231 HintsService.java:220 - Paused hints dispatch 17:28:00 cassandra: INFO [StorageServiceShutdownHook] 2020-10-28 17:28:00,240 Server.java:176 - Stop listening for CQL clients 17:28:00 cassandra: INFO [StorageServiceShutdownHook] 2020-10-28 17:28:00,245 Gossiper.java:1530 - Announcing shutdown 17:28:00 cassandra: INFO [StorageServiceShutdownHook] 2020-10-28 17:28:00,251 StorageService.java:2255 - Node /192.168.48.2 state jump to shutdown --- FAIL: TestDeleteSeriesAllIndexBackends (69.63s) ``` **To Reproduce** It failed in this [CI execution](https://app.circleci.com/pipelines/github/cortexproject/cortex/9160/workflows/e9bd5e4d-28fe-4a5c-8ef3-759ac45c22ff/jobs/44679).
non_main
flaky testdeleteseriesallindexbackends describe the bug the integration test testdeleteseriesallindexbackends looks flaky run testdeleteseriesallindexbackends starting cassandra ports for container cortex test cassandra mapping map starting dynamodb cassandra compileroracle dontinline org apache cassandra db columns serializer deserializelargesubset lorg apache cassandra io util datainputplus lorg apache cassandra db columns i lorg apache cassandra db columns cassandra compileroracle dontinline org apache cassandra db columns serializer serializelargesubset ljava util collection ilorg apache cassandra db columns ilorg apache cassandra io util dataoutputplus v cassandra compileroracle dontinline org apache cassandra db columns serializer serializelargesubsetsize ljava util collection ilorg apache cassandra db columns i i cassandra compileroracle dontinline org apache cassandra db commitlog abstractcommitlogsegmentmanager advanceallocatingfrom lorg apache cassandra db commitlog commitlogsegment v cassandra compileroracle dontinline org apache cassandra db transform baseiterator trygetmorecontents z cassandra compileroracle dontinline org apache cassandra db transform stoppingtransformation stop v cassandra compileroracle dontinline org apache cassandra db transform stoppingtransformation stopinpartition v cassandra compileroracle dontinline org apache cassandra io util buffereddataoutputstreamplus doflush i v cassandra compileroracle dontinline org apache cassandra io util buffereddataoutputstreamplus writeexcessslow v cassandra compileroracle dontinline org apache cassandra io util buffereddataoutputstreamplus writeslow ji v cassandra compileroracle dontinline org apache cassandra io util rebufferinginputstream readprimitiveslowly i j cassandra compileroracle inline org apache cassandra db rows unfilteredserializer serializerowbody lorg apache cassandra db rows row ilorg apache cassandra db serializationheader lorg apache cassandra io util dataoutputplus v cassandra compileroracle inline org apache cassandra io util memory checkbounds jj v cassandra compileroracle inline org apache cassandra io util safememory checkbounds jj v cassandra compileroracle inline org apache cassandra utils asymmetricordering selectboundary lorg apache cassandra utils asymmetricordering op ii i cassandra compileroracle inline org apache cassandra utils asymmetricordering strictnessoflessthan lorg apache cassandra utils asymmetricordering op i cassandra compileroracle inline org apache cassandra utils bloomfilter indexes lorg apache cassandra utils ifilter filterkey j cassandra compileroracle inline org apache cassandra utils bloomfilter setindexes jjij j v cassandra compileroracle inline org apache cassandra utils bytebufferutil compare ljava nio bytebuffer b i cassandra compileroracle inline org apache cassandra utils bytebufferutil compare bljava nio bytebuffer i cassandra compileroracle inline org apache cassandra utils bytebufferutil compareunsigned ljava nio bytebuffer ljava nio bytebuffer i cassandra compileroracle inline org apache cassandra utils fastbyteoperations unsafeoperations compareto ljava lang object jiljava lang object ji i cassandra compileroracle inline org apache cassandra utils fastbyteoperations unsafeoperations compareto ljava lang object jiljava nio bytebuffer i cassandra compileroracle inline org apache cassandra utils fastbyteoperations unsafeoperations compareto ljava nio bytebuffer ljava nio bytebuffer i cassandra compileroracle inline org apache cassandra utils vint vintcoding 
encodevint ji b ports for container cortex test dynamodb mapping map starting bigtable dynamodb initializing dynamodb local with the following configuration dynamodb port dynamodb inmemory true dynamodb dbpath null dynamodb shareddb true dynamodb shoulddelaytransientstatuses false dynamodb corsparams cassandra info yamlconfigurationloader java configuration location file etc cassandra cassandra yaml bigtable bigtable emulator running on ports for container cortex test bigtable mapping map starting consul cassandra info config java node configuration hinted handoff enabled true hinted handoff throttle in kb hints compression null hints directory null hints flush period in ms incremental backups false index interval null index summary capacity in mb null index summary resize interval in minutes initial token inter dc stream throughput outbound megabits per sec inter dc tcp nodelay false internode authenticator null internode compression dc internode recv buff size in bytes internode send buff size in bytes key cache keys to save key cache save period key cache size in mb null listen address listen interface null listen interface prefer false listen on broadcast address false max hint window in ms max hints delivery threads max hints file size in mb max mutation size in kb null max streaming retries max value size in mb memtable allocation type heap buffers memtable cleanup threshold null memtable flush writers memtable heap space in mb null memtable offheap space in mb null min free space per drive in mb native transport max concurrent connections native transport max concurrent connections per ip native transport max frame size in mb native transport max threads native transport port native transport port ssl null num tokens otc backlog expiration interval ms otc coalescing enough coalesced messages otc coalescing strategy disabled otc coalescing window us partitioner org apache cassandra dht permissions cache max entries permissions update interval in ms permissions validity in ms phi convict threshold prepared statements cache size mb null range request timeout in ms read request timeout in ms request scheduler org apache cassandra scheduler noscheduler request scheduler id null request scheduler options null request timeout in ms role manager cassandrarolemanager roles cache max entries roles update interval in ms roles validity in ms row cache class name org apache cassandra cache ohcprovider row cache keys to save row cache save period row cache size in mb rpc address rpc interface null rpc interface prefer false rpc keepalive true rpc listen backlog rpc max threads rpc min threads rpc port rpc recv buff size in bytes null rpc send buff size in bytes null rpc server type sync saved caches directory var lib cassandra saved caches seed provider org apache cassandra locator simpleseedprovider seeds server encryption options slow query log timeout in ms snapshot before compaction false ssl storage port sstable preemptive open interval in mb start native transport true start rpc false storage port stream throughput outbound megabits per sec streaming keep alive period in secs streaming socket timeout in ms thrift framed transport size in mb thrift max message length in mb thrift prepared statements cache size mb null tombstone failure threshold tombstone warn threshold tracetype query ttl tracetype repair ttl transparent data encryption options org apache cassandra config transparentdataencryptionoptions trickle fsync false trickle fsync interval in kb truncate request timeout in ms 
unlogged batch across partitions warn threshold user defined function fail timeout user defined function warn timeout user function timeout policy die windows timer interval write request timeout in ms cassandra info databasedescriptor java diskaccessmode auto determined to be mmap indexaccessmode is mmap cassandra info databasedescriptor java global memtable on heap threshold is enabled at cassandra info databasedescriptor java global memtable off heap threshold is enabled at cassandra info ratebasedbackpressure java initialized back pressure with high ratio factor flow fast window size cassandra info databasedescriptor java back pressure is disabled with strategy org apache cassandra net ratebasedbackpressure high ratio factor flow fast cassandra info jmxserverutils java configured jmx server at service jmx rmi jndi rmi jmxrmi cassandra info cassandradaemon java hostname cassandra cassandra info cassandradaemon java jvm vendor version openjdk bit server vm cassandra info cassandradaemon java heap size cassandra info cassandradaemon java code cache non heap memory init used committed max cassandra info cassandradaemon java metaspace non heap memory init used committed max cassandra info cassandradaemon java compressed class space non heap memory init used committed max cassandra info cassandradaemon java par eden space heap memory init used committed max cassandra info cassandradaemon java par survivor space heap memory init used committed max cassandra info cassandradaemon java cms old gen heap memory init used committed max cassandra info cassandradaemon java classpath etc cassandra usr share cassandra lib hdrhistogram jar usr share cassandra lib jar usr share cassandra lib airline jar usr share cassandra lib antlr runtime jar usr share cassandra lib asm jar usr share cassandra lib caffeine jar usr share cassandra lib cassandra driver core shaded jar usr share cassandra lib commons cli jar usr share cassandra lib commons codec jar usr share cassandra lib commons jar usr share cassandra lib commons jar usr share cassandra lib compress lzf jar usr share cassandra lib concurrent trees jar usr share cassandra lib concurrentlinkedhashmap lru jar usr share cassandra lib disruptor jar usr share cassandra lib ecj jar usr share cassandra lib guava jar usr share cassandra lib high scale lib jar usr share cassandra lib hppc jar usr share cassandra lib jackson core asl jar usr share cassandra lib jackson mapper asl jar usr share cassandra lib jamm jar usr share cassandra lib javax inject jar usr share cassandra lib jbcrypt jar usr share cassandra lib jcl over jar usr share cassandra lib jctools core jar usr share cassandra lib jflex jar usr share cassandra lib jna jar usr share cassandra lib joda time jar usr share cassandra lib json simple jar usr share cassandra lib jstackjunit jar usr share cassandra lib libthrift jar usr share cassandra lib over jar usr share cassandra lib logback classic jar usr share cassandra lib logback core jar usr share cassandra lib jar usr share cassandra lib metrics core jar usr share cassandra lib metrics jvm jar usr share cassandra lib metrics logback jar usr share cassandra lib netty all final jar usr share cassandra lib ohc core jar usr share cassandra lib ohc core jar usr share cassandra lib reporter config base jar usr share cassandra lib reporter jar usr share cassandra lib sigar jar usr share cassandra lib api jar usr share cassandra lib snakeyaml jar usr share cassandra lib snappy java jar usr share cassandra lib snowball stemmer jar usr share cassandra lib 
stream jar usr share cassandra lib thrift server jar usr share cassandra apache cassandra jar usr share cassandra apache cassandra thrift jar usr share cassandra apache cassandra jar usr share cassandra stress jar usr share cassandra lib jamm jar cassandra info cassandradaemon java jvm arguments cassandra warn nativelibrary java unable to lock jvm memory enomem this can result in part of the jvm being swapped out especially with mmapped i o enabled increase rlimit memlock or run cassandra as root cassandra info startupchecks java jemalloc seems to be preloaded from usr lib linux gnu libjemalloc so cassandra warn startupchecks java jmx is not enabled to receive remote connections please see cassandra env sh for more info cassandra warn startupchecks java openjdk is not recommended please upgrade to the newest oracle java release cassandra info sigarlibrary java initializing sigar library cassandra info sigarlibrary java checked os settings and found them configured for optimal performance cassandra warn startupchecks java maximum number of memory map areas per process vm max map count is too low recommended value you can change it with sysctl cassandra warn startupchecks java directory var lib cassandra data doesn t exist cassandra warn startupchecks java directory var lib cassandra commitlog doesn t exist cassandra warn startupchecks java directory var lib cassandra saved caches doesn t exist cassandra warn startupchecks java directory var lib cassandra hints doesn t exist ports for container cortex test consul mapping map consul starting consul agent consul version consul node id consul node name consul consul datacenter segment consul server true bootstrap false consul client addr http https grpc dns consul cluster addr lan wan consul encrypt gossip false tls outgoing false tls incoming false auto encrypt tls false consul log data will now stream in as it occurs consul consul agent running cassandra info queryprocessor java initialized prepared statement caches with mb native and mb thrift cassandra info columnfamilystore java initializing system indexinfo cassandra info columnfamilystore java initializing system batches cassandra info columnfamilystore java initializing system paxos cassandra info columnfamilystore java initializing system local cassandra info columnfamilystore java initializing system peers cassandra info columnfamilystore java initializing system peer events cassandra info columnfamilystore java initializing system range xfers cassandra info columnfamilystore java initializing system compaction history cassandra info columnfamilystore java initializing system sstable activity cassandra info columnfamilystore java initializing system size estimates cassandra info columnfamilystore java initializing system available ranges cassandra info columnfamilystore java initializing system transferred ranges cassandra info columnfamilystore java initializing system views builds in progress cassandra info columnfamilystore java initializing system built views cassandra info columnfamilystore java initializing system hints cassandra info columnfamilystore java initializing system batchlog cassandra info columnfamilystore java initializing system prepared statements cassandra info columnfamilystore java initializing system schema keyspaces cassandra info columnfamilystore java initializing system schema columnfamilies cassandra info columnfamilystore java initializing system schema columns cassandra info columnfamilystore java initializing system schema triggers cassandra info 
columnfamilystore java initializing system schema usertypes cassandra info columnfamilystore java initializing system schema functions cassandra info columnfamilystore java initializing system schema aggregates cassandra info viewmanager java not submitting build tasks for views in keyspace system as storage service is not initialized cassandra info approximatetime java scheduling approximate time check task with a precision of milliseconds cassandra info cacheservice java initializing key cache with capacity of mbs cassandra info cacheservice java initializing row cache with capacity of mbs cassandra info cacheservice java initializing counter cache with capacity of mbs cassandra info cacheservice java scheduling counter cache save to every seconds going to save all keys cassandra info storageservice java populating token metadata from system tables cassandra info bufferpool java global buffer pool is enabled when pool is exhausted max is it will allocate on heap cassandra info storageservice java token metadata cassandra info columnfamilystore java initializing system schema keyspaces cassandra info columnfamilystore java initializing system schema tables cassandra info columnfamilystore java initializing system schema columns cassandra info columnfamilystore java initializing system schema triggers cassandra info columnfamilystore java initializing system schema dropped columns cassandra info columnfamilystore java initializing system schema views cassandra info columnfamilystore java initializing system schema types cassandra info columnfamilystore java initializing system schema functions cassandra info columnfamilystore java initializing system schema aggregates cassandra info columnfamilystore java initializing system schema indexes cassandra info viewmanager java not submitting build tasks for views in keyspace system schema as storage service is not initialized cassandra info autosavingcache java completed loading ms keys keycache cache cassandra info commitlog java no commitlog files found skipping replay cassandra info storageservice java populating token metadata from system tables cassandra info storageservice java token metadata cassandra info queryprocessor java preloaded prepared statements cassandra info storageservice java cassandra version cassandra info storageservice java thrift api version cassandra info storageservice java cql supported versions default cassandra info storageservice java native protocol supported versions beta default cassandra info indexsummarymanager java initializing index summary manager with a memory pool size of mb and a resize interval of minutes cassandra info messagingservice java starting messaging service on cassandra warn systemkeyspace java no host id found created note this should happen exactly once per node cassandra info storageservice java loading persisted ring state cassandra info storageservice java starting up server gossip cassandra info storageservice java this node will not auto bootstrap because it is configured to be a seed node cassandra info storageservice java saved tokens not found using configuration value cassandra info migrationmanager java create new keyspace keyspacemetadata name system traces params keyspaceparams durable writes true replication replicationparams class org apache cassandra locator simplestrategy replication factor tables params tableparams comment tracing sessions read repair chance dclocal read repair chance bloom filter fp chance crc check chance gc grace seconds default time to live memtable 
flush period in ms min index interval max index interval speculative retry caching keys all rows per partition none compaction compactionparams class org apache cassandra db compaction sizetieredcompactionstrategy options min threshold max threshold compression org apache cassandra schema compressionparams extensions cdc false comparator comparator partitioncolumns partitionkeycolumns clusteringcolumns keyvalidator org apache cassandra db marshal uuidtype columnmetadata droppedcolumns triggers indexes org apache cassandra config cfmetadata params tableparams comment tracing events read repair chance dclocal read repair chance bloom filter fp chance crc check chance gc grace seconds default time to live memtable flush period in ms min index interval max index interval speculative retry caching keys all rows per partition none compaction compactionparams class org apache cassandra db compaction sizetieredcompactionstrategy options min threshold max threshold compression org apache cassandra schema compressionparams extensions cdc false comparator comparator org apache cassandra db marshal timeuuidtype partitioncolumns partitionkeycolumns clusteringcolumns keyvalidator org apache cassandra db marshal uuidtype columnmetadata droppedcolumns triggers indexes views functions types cassandra info viewmanager java not submitting build tasks for views in keyspace system traces as storage service is not initialized cassandra info columnfamilystore java initializing system traces events cassandra info columnfamilystore java initializing system traces sessions cassandra info migrationmanager java create new keyspace keyspacemetadata name system distributed params keyspaceparams durable writes true replication replicationparams class org apache cassandra locator simplestrategy replication factor tables params tableparams comment repair history read repair chance dclocal read repair chance bloom filter fp chance crc check chance gc grace seconds default time to live memtable flush period in ms min index interval max index interval speculative retry caching keys all rows per partition none compaction compactionparams class org apache cassandra db compaction sizetieredcompactionstrategy options min threshold max threshold compression org apache cassandra schema compressionparams extensions cdc false comparator comparator org apache cassandra db marshal timeuuidtype partitioncolumns partitionkeycolumns clusteringcolumns keyvalidator org apache cassandra db marshal compositetype org apache cassandra db marshal org apache cassandra db marshal columnmetadata droppedcolumns triggers indexes org apache cassandra config cfmetadata params tableparams comment repair history read repair chance dclocal read repair chance bloom filter fp chance crc check chance gc grace seconds default time to live memtable flush period in ms min index interval max index interval speculative retry caching keys all rows per partition none compaction compactionparams class org apache cassandra db compaction sizetieredcompactionstrategy options min threshold max threshold compression org apache cassandra schema compressionparams extensions cdc false comparator comparator partitioncolumns partitionkeycolumns clusteringcolumns keyvalidator org apache cassandra db marshal timeuuidtype columnmetadata droppedcolumns triggers indexes org apache cassandra config cfmetadata params tableparams comment materialized view build status read repair chance dclocal read repair chance bloom filter fp chance crc check chance gc grace seconds default time 
to live memtable flush period in ms min index interval max index interval speculative retry caching keys all rows per partition none compaction compactionparams class org apache cassandra db compaction sizetieredcompactionstrategy options min threshold max threshold compression org apache cassandra schema compressionparams extensions cdc false comparator comparator org apache cassandra db marshal uuidtype partitioncolumns partitionkeycolumns clusteringcolumns keyvalidator org apache cassandra db marshal compositetype org apache cassandra db marshal org apache cassandra db marshal columnmetadata droppedcolumns triggers indexes views functions types cassandra info viewmanager java not submitting build tasks for views in keyspace system distributed as storage service is not initialized cassandra info columnfamilystore java initializing system distributed parent repair history cassandra info columnfamilystore java initializing system distributed repair history cassandra info columnfamilystore java initializing system distributed view build status cassandra info storageservice java joining finish joining ring cassandra info migrationmanager java create new keyspace keyspacemetadata name system auth params keyspaceparams durable writes true replication replicationparams class org apache cassandra locator simplestrategy replication factor tables params tableparams comment role definitions read repair chance dclocal read repair chance bloom filter fp chance crc check chance gc grace seconds default time to live memtable flush period in ms min index interval max index interval speculative retry caching keys all rows per partition none compaction compactionparams class org apache cassandra db compaction sizetieredcompactionstrategy options min threshold max threshold compression org apache cassandra schema compressionparams extensions cdc false comparator comparator partitioncolumns partitionkeycolumns clusteringcolumns keyvalidator org apache cassandra db marshal columnmetadata droppedcolumns triggers indexes org apache cassandra config cfmetadata params tableparams comment role memberships lookup table read repair chance dclocal read repair chance bloom filter fp chance crc check chance gc grace seconds default time to live memtable flush period in ms min index interval max index interval speculative retry caching keys all rows per partition none compaction compactionparams class org apache cassandra db compaction sizetieredcompactionstrategy options min threshold max threshold compression org apache cassandra schema compressionparams extensions cdc false comparator comparator org apache cassandra db marshal partitioncolumns partitionkeycolumns clusteringcolumns keyvalidator org apache cassandra db marshal columnmetadata droppedcolumns triggers indexes org apache cassandra config cfmetadata params tableparams comment permissions granted to db roles read repair chance dclocal read repair chance bloom filter fp chance crc check chance gc grace seconds default time to live memtable flush period in ms min index interval max index interval speculative retry caching keys all rows per partition none compaction compactionparams class org apache cassandra db compaction sizetieredcompactionstrategy options min threshold max threshold compression org apache cassandra schema compressionparams extensions cdc false comparator comparator org apache cassandra db marshal partitioncolumns partitionkeycolumns clusteringcolumns keyvalidator org apache cassandra db marshal columnmetadata droppedcolumns triggers 
indexes org apache cassandra config cfmetadata params tableparams comment index of db roles with permissions granted on a resource read repair chance dclocal read repair chance bloom filter fp chance crc check chance gc grace seconds default time to live memtable flush period in ms min index interval max index interval speculative retry caching keys all rows per partition none compaction compactionparams class org apache cassandra db compaction sizetieredcompactionstrategy options min threshold max threshold compression org apache cassandra schema compressionparams extensions cdc false comparator comparator org apache cassandra db marshal partitioncolumns partitionkeycolumns clusteringcolumns keyvalidator org apache cassandra db marshal columnmetadata droppedcolumns triggers indexes views functions types cassandra info viewmanager java not submitting build tasks for views in keyspace system auth as storage service is not initialized cassandra info columnfamilystore java initializing system auth resource role permissons index cassandra info columnfamilystore java initializing system auth role members cassandra info columnfamilystore java initializing system auth role permissions cassandra info columnfamilystore java initializing system auth roles cassandra info nativetransportservice java netty using native epoll event loop cassandra info server java using netty version cassandra info server java starting listening for cql clients on unencrypted cassandra info cassandradaemon java not starting rpc server as requested use jmx storageservice startrpcserver or nodetool enablethrift to start it starting table manager ports for container cortex test table manager mapping map stopping table manager starting table manager ports for container cortex test table manager mapping map stopping table manager starting table manager cassandra info cassandrarolemanager java created default superuser role cassandra ports for container cortex test table manager mapping map table manager level error ts caller connectionpool go module gocql client table manager msg failed to connect address error keyspace tests does not exist table manager level error ts caller connectionpool go module gocql client table manager msg failed to connect address error keyspace tests does not exist cassandra info migrationmanager java create new keyspace keyspacemetadata name tests params keyspaceparams durable writes true replication replicationparams class org apache cassandra locator simplestrategy replication factor tables views functions types cassandra info migrationmanager java create new table org apache cassandra config cfmetadata params tableparams comment read repair chance dclocal read repair chance bloom filter fp chance crc check chance gc grace seconds default time to live memtable flush period in ms min index interval max index interval speculative retry caching keys all rows per partition none compaction compactionparams class org apache cassandra db compaction sizetieredcompactionstrategy options min threshold max threshold compression org apache cassandra schema compressionparams extensions cdc false comparator comparator org apache cassandra db marshal bytestype partitioncolumns partitionkeycolumns clusteringcolumns keyvalidator org apache cassandra db marshal columnmetadata droppedcolumns triggers indexes cassandra info columnfamilystore java initializing tests cortex cassandra info migrationmanager java create new table org apache cassandra config cfmetadata params tableparams comment read repair chance dclocal 
read repair chance bloom filter fp chance crc check chance gc grace seconds default time to live memtable flush period in ms min index interval max index interval speculative retry caching keys all rows per partition none compaction compactionparams class org apache cassandra db compaction sizetieredcompactionstrategy options min threshold max threshold compression org apache cassandra schema compressionparams extensions cdc false comparator comparator org apache cassandra db marshal bytestype partitioncolumns partitionkeycolumns clusteringcolumns keyvalidator org apache cassandra db marshal columnmetadata droppedcolumns triggers indexes cassandra info columnfamilystore java initializing tests cortex cassandra info migrationmanager java create new table org apache cassandra config cfmetadata params tableparams comment read repair chance dclocal read repair chance bloom filter fp chance crc check chance gc grace seconds default time to live memtable flush period in ms min index interval max index interval speculative retry caching keys all rows per partition none compaction compactionparams class org apache cassandra db compaction sizetieredcompactionstrategy options min threshold max threshold compression org apache cassandra schema compressionparams extensions cdc false comparator comparator org apache cassandra db marshal bytestype partitioncolumns partitionkeycolumns clusteringcolumns keyvalidator org apache cassandra db marshal columnmetadata droppedcolumns triggers indexes cassandra info columnfamilystore java initializing tests cortex chunks cassandra info migrationmanager java create new table org apache cassandra config cfmetadata params tableparams comment read repair chance dclocal read repair chance bloom filter fp chance crc check chance gc grace seconds default time to live memtable flush period in ms min index interval max index interval speculative retry caching keys all rows per partition none compaction compactionparams class org apache cassandra db compaction sizetieredcompactionstrategy options min threshold max threshold compression org apache cassandra schema compressionparams extensions cdc false comparator comparator org apache cassandra db marshal bytestype partitioncolumns partitionkeycolumns clusteringcolumns keyvalidator org apache cassandra db marshal columnmetadata droppedcolumns triggers indexes cassandra info columnfamilystore java initializing tests cortex chunks stopping table manager starting distributor ports for container cortex test distributor mapping map starting ingester ports for container cortex test ingester mapping map starting querier ports for container cortex test querier mapping map starting purger ports for container cortex test purger mapping map purger level warn ts caller experimental go msg experimental feature in use feature delete series api stopping purger starting purger ports for container cortex test purger mapping map purger level warn ts caller experimental go msg experimental feature in use feature delete series api purger level error ts caller purger go user id user request id msg error removing delete plan plan no err open shared user no such file or directory chunks delete series test go error trace chunks delete series test go error received unexpected error unable to find metrics with expected values last error last values test testdeleteseriesallindexbackends killing purger killing querier querier level error ts caller client go msg error getting path key collectors ring err get context canceled killing ingester ingester 
level warn ts caller transfer go msg transfer attempt failed err cannot find ingester to transfer chunks to no pending ingesters attempt max retries killing distributor distributor level error ts caller client go msg error getting path key collectors ring err get context canceled killing consul consul agent server error performing anti entropy sync of federation state error context canceled killing bigtable bigtable done killing dynamodb killing cassandra cassandra info hintsservice java paused hints dispatch cassandra info server java stop listening for cql clients cassandra info gossiper java announcing shutdown cassandra info storageservice java node state jump to shutdown fail testdeleteseriesallindexbackends to reproduce it failed in this
0
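The failing assertion in the record above reduces to scraping a service's Prometheus /metrics page and checking that a counter (cortex_purger_delete_requests_processed_total) reaches an expected value within some deadline. As a rough illustration only, the sketch below polls such an endpoint with retries; it is not the Cortex e2e framework's actual helper code, and the URL, port, metric handling, and timings are made-up assumptions.

```python
# Illustrative only: poll a Prometheus text-format /metrics page until a counter
# such as cortex_purger_delete_requests_processed_total reaches the expected value,
# or give up after a timeout. Simplified parsing; not the real Cortex Go helpers.
import time
import urllib.request


def scrape_counter(url: str, metric: str) -> float:
    """Sum every sample of `metric` found on a Prometheus text-format page."""
    total = 0.0
    with urllib.request.urlopen(url, timeout=5) as resp:
        for raw in resp.read().decode("utf-8").splitlines():
            line = raw.strip()
            if line.startswith("#") or not line.startswith(metric):
                continue
            parts = line.split()
            if len(parts) >= 2:
                try:
                    total += float(parts[-1])
                except ValueError:
                    pass  # ignore samples we cannot parse in this simplified sketch
    return total


def wait_for_counter(url: str, metric: str, want: float, timeout_s: float = 30.0) -> bool:
    """Poll until the counter equals `want`; return False if the deadline passes first."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            if scrape_counter(url, metric) == want:
                return True
        except OSError:
            pass  # service may not be reachable yet; retry until the deadline
        time.sleep(0.5)
    return False


if __name__ == "__main__":
    # Hypothetical mapped host port for the purger container (the harness maps port 80 to a random port).
    ok = wait_for_counter("http://localhost:32952/metrics",
                          "cortex_purger_delete_requests_processed_total", 1)
    print("metric reached expected value" if ok else "metric never reached expected value")
```

Polling with a deadline rather than asserting once is the usual way to make this kind of end-to-end metric check less flaky, which is exactly the failure mode the record describes.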
4,204
20,604,556,812
IssuesEvent
2022-03-06 19:37:20
truecharts/apps
https://api.github.com/repos/truecharts/apps
closed
Add Locast2Plex
New App Request No-Maintainer
A Python script and Docker image to connect Locast to Plex's live TV/DVR feature. It seems to work for Emby, Jellyfin, and Kodi too! Uses ffmpeg, Python, and a few awesome Python modules to do most of the heavy lifting. A lot of the code was inspired by telly as well. https://github.com/tgorgdotcom/locast2plex
True
Add Locast2Plex - A Python script and Docker image to connect Locast to Plex's live TV/DVR feature. It seems to work for Emby, Jellyfin, and Kodi too! Uses ffmpeg, Python, and a few awesome Python modules to do most of the heavy lifting. A lot of the code was inspired by telly as well. https://github.com/tgorgdotcom/locast2plex
main
add a python script and docker image to connect locast to plex s live tv dvr feature seems to work for emby jellyfin and kodi too uses ffmpeg python and a few awesome python modules to do most of the heavy lifting a lot of code was inspired from telly as well
1
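The record above only describes what the tool does, not how it plugs into Plex. For illustration, bridges of this kind typically surface channels to Plex's live TV/DVR setup by emulating an HDHomeRun tuner over HTTP. The minimal server below is a hedged sketch of that idea, not locast2plex's actual code: the device fields, channel entries, and port are placeholders, and the real project layers Locast authentication and ffmpeg transcoding on top.

```python
# Sketch only: answer the two discovery endpoints an HDHomeRun-style "tuner" exposes.
# All values here are placeholder assumptions, not locast2plex's real configuration.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

DISCOVER = {
    "FriendlyName": "locast2plex-sketch",
    "ModelNumber": "HDTC-2US",
    "FirmwareName": "hdhomeruntc_atsc",
    "TunerCount": 3,
    "BaseURL": "http://127.0.0.1:6077",
    "LineupURL": "http://127.0.0.1:6077/lineup.json",
}

LINEUP = [
    # Each entry points the media server at a stream URL the bridge would serve via ffmpeg.
    {"GuideNumber": "2.1", "GuideName": "Example Channel", "URL": "http://127.0.0.1:6077/watch/2.1"},
]


class TunerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/discover.json":
            body = json.dumps(DISCOVER).encode()
        elif self.path == "/lineup.json":
            body = json.dumps(LINEUP).encode()
        else:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 6077), TunerHandler).serve_forever()
```

In this family of tools, the media server probes /discover.json to find the emulated tuner and /lineup.json for the channel list; each lineup URL would then point back at a stream the bridge proxies and transcodes.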