Column       | Type          | Range / values
Unnamed: 0   | int64         | 1 – 832k
id           | float64       | 2.49B – 32.1B
type         | stringclasses | 1 value
created_at   | stringlengths | 19 – 19
repo         | stringlengths | 7 – 112
repo_url     | stringlengths | 36 – 141
action       | stringclasses | 3 values
title        | stringlengths | 3 – 438
labels       | stringlengths | 4 – 308
body         | stringlengths | 7 – 254k
index        | stringclasses | 7 values
text_combine | stringlengths | 96 – 254k
label        | stringclasses | 2 values
text         | stringlengths | 96 – 246k
binary_label | int64         | 0 – 1
Unnamed: 0: 494,564
id: 14,260,506,351
type: IssuesEvent
created_at: 2020-11-20 09:55:51
repo: openmsupply/mobile
repo_url: https://api.github.com/repos/openmsupply/mobile
action: closed
title: AutoCompleteSelector regex error
labels: Bug: production Effort: small Priority: normal bugsnag production
# Describe the bug From bugsnag. It seems some characters being typed into the autocomplete selector are throwing errors. Specifically `\`, but some others should be checked. ## Error in mSupply Mobile **SyntaxError** in **MainActivity** SyntaxError: Invalid regular expression: \ at end of pattern This error is located at: in c in RCTView in RCTView in RCTView in f in RCTView in RCTView in RCTView in RCTModalHostView in n in ModalBox in y in l in RCTView in RCTView in RCTView in f in RCTView in RCTView in ModalBox in y in l in o in RCTView in RCTView in p in f in h in s in Connect(s) in Unknown in v in RCTView in f in RCTView in f in C in n in P in RCTView in n in RCTView in f in b in y in L in RCTView in h in C in k in v in P in Unknown in Connect(Component) in RCTView in n in Connect(n) in s in c in v in RCTView in RCTView in c [View on Bugsnag](https://app.bugsnag.com/sustainable-solutions-nz-ltd/msupply-mobile/errors/5cf930a22b0061001a37cce6?event_id=5cf930a20041fddad3890000&i=gh&m=ci) ## Stacktrace src/widgets/AutocompleteSelector.js:65 - filterArrayData src/widgets/AutocompleteSelector.js:82 - getData src/widgets/AutocompleteSelector.js:101 - value [View full stacktrace](https://app.bugsnag.com/sustainable-solutions-nz-ltd/msupply-mobile/errors/5cf930a22b0061001a37cce6?event_id=5cf930a20041fddad3890000&i=gh&m=ci) *Created automatically via Bugsnag*
label: 1.0
AutoCompleteSelector regex error - # Describe the bug From bugsnag. It seems some characters being type in the autocomplete selector are throwing errors. Specifically `\`, but some others should be checked. ## Error in mSupply Mobile **SyntaxError** in **MainActivity** SyntaxError: Invalid regular expression: \ at end of pattern This error is located at: in c in RCTView in RCTView in RCTView in f in RCTView in RCTView in RCTView in RCTModalHostView in n in ModalBox in y in l in RCTView in RCTView in RCTView in f in RCTView in RCTView in ModalBox in y in l in o in RCTView in RCTView in p in f in h in s in Connect(s) in Unknown in v in RCTView in f in RCTView in f in C in n in P in RCTView in n in RCTView in f in b in y in L in RCTView in h in C in k in v in P in Unknown in Connect(Component) in RCTView in n in Connect(n) in s in c in v in RCTView in RCTView in c [View on Bugsnag](https://app.bugsnag.com/sustainable-solutions-nz-ltd/msupply-mobile/errors/5cf930a22b0061001a37cce6?event_id=5cf930a20041fddad3890000&i=gh&m=ci) ## Stacktrace src/widgets/AutocompleteSelector.js:65 - filterArrayData src/widgets/AutocompleteSelector.js:82 - getData src/widgets/AutocompleteSelector.js:101 - value [View full stacktrace](https://app.bugsnag.com/sustainable-solutions-nz-ltd/msupply-mobile/errors/5cf930a22b0061001a37cce6?event_id=5cf930a20041fddad3890000&i=gh&m=ci) *Created automatically via Bugsnag*
index: non_main
autocompleteselector regex error describe the bug from bugsnag it seems some characters being type in the autocomplete selector are throwing errors specifically but some others should be checked error in msupply mobile syntaxerror in mainactivity syntaxerror invalid regular expression at end of pattern this error is located at in c in rctview in rctview in rctview in f in rctview in rctview in rctview in rctmodalhostview in n in modalbox in y in l in rctview in rctview in rctview in f in rctview in rctview in modalbox in y in l in o in rctview in rctview in p in f in h in s in connect s in unknown in v in rctview in f in rctview in f in c in n in p in rctview in n in rctview in f in b in y in l in rctview in h in c in k in v in p in unknown in connect component in rctview in n in connect n in s in c in v in rctview in rctview in c stacktrace src widgets autocompleteselector js filterarraydata src widgets autocompleteselector js getdata src widgets autocompleteselector js value created automatically via bugsnag
binary_label: 0
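The root cause in the record above is a search string passed straight into a RegExp constructor, so a trailing `\` produces an invalid pattern. A minimal sketch of the usual fix, shown here in Python rather than the project's JavaScript (the name `filter_array_data` mirrors the stack trace but is illustrative):

```python
import re

def filter_array_data(data, search_term):
    # Compiling raw user input crashes on a trailing backslash
    # ("Invalid regular expression: \ at end of pattern").
    # re.escape() neutralises every regex metacharacter first.
    pattern = re.compile(re.escape(search_term), re.IGNORECASE)
    return [item for item in data if pattern.search(item)]

items = ["Amoxicillin 250mg", "Paracetamol 500mg"]
print(filter_array_data(items, "\\"))    # no crash, simply no matches
print(filter_array_data(items, "amox"))  # ['Amoxicillin 250mg']
```

The same approach applies in JavaScript: escape the user's input before handing it to `new RegExp(...)`.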
Unnamed: 0: 4,022
id: 4,835,697,968
type: IssuesEvent
created_at: 2016-11-08 17:28:10
repo: vmware/vic
repo_url: https://api.github.com/repos/vmware/vic
action: closed
title: Username/Password authentication for vic-admin
labels: area/security
This issue is the second half of #2342 representing a sign-in page for vic admin whereby a user can sign in with their vSphere credentials. These credentials should be used for actual authentication against vSphere such that commands issued via vicadmin are limited by the permissions applied to the user in question.
label: True
Username/Password authentication for vic-admin - This issue is the second half of #2342 representing a sign-in page for vic admin whereby a user can sign in with their vSphere credentials. These credentials should be used for actual authentication against vSphere such that commands issued via vicadmin are limited by the permissions applied to the user in question.
index: non_main
username password authentication for vic admin this issue is the second half of representing a sign in page for vic admin whereby a user can sign in with their vsphere credentials these credentials should be used for actual authentication against vsphere such that commands issued via vicadmin are limited by the permissions applied to the user in question
binary_label: 0
Unnamed: 0: 450,053
id: 31,881,743,780
type: IssuesEvent
created_at: 2023-09-16 13:04:49
repo: junit-team/junit5
repo_url: https://api.github.com/repos/junit-team/junit5
action: closed
title: Fix implementation of `RandomNumberExtension` in the User Guide
labels: theme: documentation component: Jupiter
## Steps to reproduce Copying the relevant parts of the `RandomNumberExtension` example in the documentation [here](https://junit.org/junit5/docs/snapshot/user-guide/index.html#extensions-RandomNumberExtension): ```java class RandomNumberDemo { // Use static randomNumber0 field anywhere in the test class, // including @BeforeAll or @AfterEach lifecycle methods. @Random private static Integer randomNumber0; // Use randomNumber1 field in test methods and @BeforeEach // or @AfterEach lifecycle methods. @Random private int randomNumber1; ... } class RandomNumberExtension implements BeforeAllCallback, TestInstancePostProcessor, ParameterResolver { ... private void injectFields(Class<?> testClass, Object testInstance, Predicate<Field> predicate) { predicate = predicate.and(field -> isInteger(field.getType())); findAnnotatedFields(testClass, Random.class, predicate) .forEach(field -> { try { field.setAccessible(true); field.set(testInstance, this.random.nextInt()); } catch (Exception ex) { throw new RuntimeException(ex); } }); } ... } ``` There are a couple of issues with this code: 1. `randomNumber0` is an Integer, so it's not injected. 2. `findAnnotatedFields` is missing the fourth parameter, so it doesn't compile. 3. If `randomNumber0` is removed completely from `RandomNumberDemo`, then `randomNumber1` is not injected. I believe the root cause is that `TestMethodTestDescriptor` does not invoke `TestInstancePostProcessor` (side note: even though it does invoke `TestInstancePreDestroyCallback`). Meanwhile, `ClassBasedTestDescriptor` does, which is why `randomNumber1` is populated when `randomNumber0` is present. I can't tell which part is wrong here: the documentation or the code. Note: The code in https://github.com/junit-team/junit5/issues/3004#issuecomment-1215045050 fixes all these issues. The last issue is resolved by using `BeforeEachCallback` instead of `TestInstancePostProcessor`. 
## Context - Used versions (Jupiter/Vintage/Platform): 5.9.3 - Build Tool/IDE: Eclipse 2023-03 (4.27.0) ## Deliverables - <strike>Remove bugs from documentation</strike> - <strike>Possibly change `TestMethodTestDescriptor` to start invoking `TestInstancePostProcessor` (if that's desirable)... or switch the example to use `BeforeEachCallback` (and mention the limitations with `TestMethodTestDescriptor` and `TestInstancePostProcessor`)</strike> - [x] Fix implementation of `isInteger()` in `RandomNumberExtension` in the User Guide.
label: 1.0
Fix implementation of `RandomNumberExtension` in the User Guide - ## Steps to reproduce Copying the relevant parts of the `RandomNumberExtension` example in the documentation [here](https://junit.org/junit5/docs/snapshot/user-guide/index.html#extensions-RandomNumberExtension): ```java class RandomNumberDemo { // Use static randomNumber0 field anywhere in the test class, // including @BeforeAll or @AfterEach lifecycle methods. @Random private static Integer randomNumber0; // Use randomNumber1 field in test methods and @BeforeEach // or @AfterEach lifecycle methods. @Random private int randomNumber1; ... } class RandomNumberExtension implements BeforeAllCallback, TestInstancePostProcessor, ParameterResolver { ... private void injectFields(Class<?> testClass, Object testInstance, Predicate<Field> predicate) { predicate = predicate.and(field -> isInteger(field.getType())); findAnnotatedFields(testClass, Random.class, predicate) .forEach(field -> { try { field.setAccessible(true); field.set(testInstance, this.random.nextInt()); } catch (Exception ex) { throw new RuntimeException(ex); } }); } ... } ``` There are a couple of issues with this code: 1. `randomNumber0` is an Integer, so it's not injected. 2. `findAnnotatedFields` is missing the fourth parameter, so it doesn't compile. 3. If `randomNumber0` is removed completely from `RandomNumberDemo`, then `randomNumber1` is not injected. I believe the root cause is that `TestMethodTestDescriptor` does not invoke `TestInstancePostProcessor` (side note: even though it does invoke `TestInstancePreDestroyCallback`). Meanwhile, `ClassBasedTestDescriptor` does, which is why `randomNumber1` is populated when `randomNumber0` is present. I can't tell which part is wrong here: the documentation or the code. Note: The code in https://github.com/junit-team/junit5/issues/3004#issuecomment-1215045050 fixes all these issues. The last issue is resolved by using `BeforeEachCallback` instead of `TestInstancePostProcessor`. 
## Context - Used versions (Jupiter/Vintage/Platform): 5.9.3 - Build Tool/IDE: Eclipse 2023-03 (4.27.0) ## Deliverables - <strike>Remove bugs from documentation</strike> - <strike>Possibly change `TestMethodTestDescriptor` to start invoking `TestInstancePostProcessor` (if that's desirable)... or switch the example to use `BeforeEachCallback` (and mention the limitations with `TestMethodTestDescriptor` and `TestInstancePostProcessor`)</strike> - [x] Fix implementation of `isInteger()` in `RandomNumberExtension` in the User Guide.
index: non_main
fix implementation of randomnumberextension in the user guide steps to reproduce copying the relevant parts of the randomnumberextension example in the documentation java class randomnumberdemo use static field anywhere in the test class including beforeall or aftereach lifecycle methods random private static integer use field in test methods and beforeeach or aftereach lifecycle methods random private int class randomnumberextension implements beforeallcallback testinstancepostprocessor parameterresolver private void injectfields class testclass object testinstance predicate predicate predicate predicate and field isinteger field gettype findannotatedfields testclass random class predicate foreach field try field setaccessible true field set testinstance this random nextint catch exception ex throw new runtimeexception ex there are a couple of issues with this code is an integer so it s not injected findannotatedfields is missing the fourth parameter so it doesn t compile if is removed completely from randomnumberdemo then is not injected i believe the root cause is that testmethodtestdescriptor does not invoke testinstancepostprocessor side note even though it does invoke testinstancepredestroycallback meanwhile classbasedtestdescriptor does which is why is populated when is present i can t tell which part is wrong here the documentation or the code note the code in fixes all these issues the last issue is resolved by using beforeeachcallback instead of testinstancepostprocessor context used versions jupiter vintage platform build tool ide eclipse deliverables remove bugs from documentation possibly change testmethodtestdescriptor to start invoking testinstancepostprocessor if that s desirable or switch the example to use beforeeachcallback and mention the limitations with testmethodtestdescriptor and testinstancepostprocessor fix implementation of isinteger in randomnumberextension in the user guide
binary_label: 0
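The predicate-filtered field injection the example extension performs can be sketched outside Java. This Python analogue (all names hypothetical, not JUnit API) shows the mechanism of finding declared fields of a matching type and injecting a random value into the test instance:

```python
import random

def inject_fields(instance, predicate):
    # Mirror of the extension's injectFields: walk the declared fields,
    # keep those whose type satisfies the predicate, inject a random int.
    for name, declared_type in type(instance).__annotations__.items():
        if predicate(declared_type):
            setattr(instance, name, random.randint(0, 99))

class RandomNumberDemo:
    random_number: int  # analogous to the @Random-annotated field

demo = RandomNumberDemo()
inject_fields(demo, predicate=lambda t: t is int)
print(demo.random_number)
```

In JUnit Jupiter the equivalent timing matters: running the injection from `BeforeEachCallback` (as the linked comment does) guarantees it happens for every test method, which is the fix the issue settled on.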
Unnamed: 0: 667
id: 4,195,335,465
type: IssuesEvent
created_at: 2016-06-25 17:22:09
repo: duckduckgo/zeroclickinfo-goodies
repo_url: https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies
action: closed
title: Swift Cheat Sheet: Shows deprecated functionality
labels: CheatSheet Maintainer Approved
Some functions shown are deprecated in the latest release of Xcode, and will be removed completely in Swift 3. For example ++ and -- should be replaced with += and -=. ------ IA Page: http://duck.co/ia/view/swift_cheat_sheet [Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @gautamkrishnar
label: True
Swift Cheat Sheet: Shows deprecated functionality - Some functions shown are deprecated in the latest release of Xcode, and will be removed completely in Swift 3. For example ++ and -- should be replaced with += and -=. ------ IA Page: http://duck.co/ia/view/swift_cheat_sheet [Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @gautamkrishnar
index: main
swift cheat sheet shows deprecated functionality some functions shown are deprecated in the latest release of xcode and will be removed completely in swift for example and should be replaced with and ia page gautamkrishnar
binary_label: 1
Unnamed: 0: 318,340
id: 23,715,684,634
type: IssuesEvent
created_at: 2022-08-30 11:35:02
repo: veraison/docs
repo_url: https://api.github.com/repos/veraison/docs
action: closed
title: Introduce a repository introduction document for Project Veraison
labels: documentation
Introduce a repository introduction document for Project Veraison: Veraison Org has a myriad of repositories. We need a high-level document/README.md that explains which repositories exist in Project Veraison and how they should be logically navigated to set the correct context!
label: 1.0
Introduce a repository introduction document for Project Veraison - Introduce a repository introduction document for Project Veraison: Veraison Org has myriad of repositories. We need a high level document/README.md that explains what all repositories exists in Project Veraison and how they should be logically navigated to set the correct context!
index: non_main
introduce a repository introduction document for project veraison introduce a repository introduction document for project veraison veraison org has myriad of repositories we need a high level document readme md that explains what all repositories exists in project veraison and how they should be logically navigated to set the correct context
binary_label: 0
Unnamed: 0: 168,180
id: 26,611,883,259
type: IssuesEvent
created_at: 2023-01-24 01:18:00
repo: MetaMask/metamask-extension
repo_url: https://api.github.com/repos/MetaMask/metamask-extension
action: closed
title: Update story links in storybook documentation
labels: design-system
### Description When https://github.com/MetaMask/metamask-extension/pull/17092 is merged it will change all storybook URLs and will essentially break any link references to those stories. We will need to update internal and external links. This ticket is to update all internal links inside of the repo. A good start would be to search for `](/docs` in `.mdx` files that point to component doc pages ![Screenshot 2023-01-09 at 12 19 04 PM](https://user-images.githubusercontent.com/8112138/211400452-01473586-a506-451c-84c4-f80d39cfe74d.png) Then searching for `](/story` in `.mdx` files that point to component story pages ![Screenshot 2023-01-09 at 12 20 32 PM](https://user-images.githubusercontent.com/8112138/211400610-847e7492-4380-4593-97a6-ab37771e52d0.png) I would also do a very broad search `](/` and scan for any links that point to storybook pages ### Technical Details - Update all URLs to stories in `.mdx` files to use the new URLs ### Acceptance Criteria - All links to stories in MDX docs work
label: 1.0
Update story links in storybook documentation - ### Description When https://github.com/MetaMask/metamask-extension/pull/17092 is merged it will change all storybook URLs and will essentially break any link references to those stories. We will need to update internal and external links. This ticket is to update all internal links inside of the repo. A good start would be to search for `[(/docs` in `.mdx` files that point to component doc pages ![Screenshot 2023-01-09 at 12 19 04 PM](https://user-images.githubusercontent.com/8112138/211400452-01473586-a506-451c-84c4-f80d39cfe74d.png) Then searching for `](/story` in `.mdx` files that point to component stort pages ![Screenshot 2023-01-09 at 12 20 32 PM](https://user-images.githubusercontent.com/8112138/211400610-847e7492-4380-4593-97a6-ab37771e52d0.png) I would also do a vert broad search `](/` and scan for any links that point to storybook pages ### Technical Details - Update all URLs to stories in `.mdx` files to use the new URLS ### Acceptance Criteria - All links to stories in MDX docs work
index: non_main
update story links in storybook documentation description when is merged it will change all storybook urls and will essentially break any link references to those stories we will need to update internal and external links this ticket is to update all internal links inside of the repo a good start would be to search for docs in mdx files that point to component doc pages then searching for story in mdx files that point to component stort pages i would also do a vert broad search and scan for any links that point to storybook pages technical details update all urls to stories in mdx files to use the new urls acceptance criteria all links to stories in mdx docs work
binary_label: 0
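The search the ticket describes, collecting `](/docs` and `](/story` link targets in `.mdx` files, can be sketched with a small regex scan (the sample links below are hypothetical):

```python
import re

OLD_LINK = re.compile(r"\]\((/(?:docs|story)/[^)]*)\)")

def find_story_links(mdx_text):
    # Collect every /docs or /story target in markdown links; these are
    # exactly the references that break when the storybook URLs change.
    return OLD_LINK.findall(mdx_text)

sample = "See [ButtonBase](/docs/components-buttonbase--docs) and [default](/story/button--default)."
print(find_story_links(sample))  # ['/docs/components-buttonbase--docs', '/story/button--default']
```

Running such a scan across the repo's `.mdx` files would produce the full list of links needing an update.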
Unnamed: 0: 4,868
id: 25,020,291,287
type: IssuesEvent
created_at: 2022-11-03 23:26:37
repo: aws/serverless-application-model
repo_url: https://api.github.com/repos/aws/serverless-application-model
action: closed
title: Api Path Can't Be Set from Reference
labels: area/resource/api type/feature contributors/good-first-issue area/intrinsics maintainer/need-response
I want to do a cloudformation deploy with an events section that looks like this and the parameter "ThePath" is set to "/good": ``` Events: Api2: Type: Api Properties: Method: ANY Path: !Ref ThePath ``` This results in: ``` Failed to create the changeset: Waiter ChangeSetCreateComplete failed: Waiter encountered a terminal failure state Status: FAILED. Reason: Transform AWS::Serverless-2016-10-31 failed with: Internal transform failure. ``` When I modify the exact same file to use an actual path instead of a parameter it works fine: ``` Events: Api2: Type: Api Properties: Method: ANY Path: /good ``` Perhaps the part of the code that is doing the validation won't let this happen because it can't verify that the parameter begins with a "/"?
label: True
Api Path Can't Bet Set from Reference - I want to do a cloudformation deploy with an events section that looks like this and the parameter "ThePath is set to "/good": ``` Events: Api2: Type: Api Properties: Method: ANY Path: !Ref ThePath ``` This results in: ``` Failed to create the changeset: Waiter ChangeSetCreateComplete failed: Waiter encountered a terminal failure state Status: FAILED. Reason: Transform AWS::Serverless-2016-10-31 failed with: Internal transform failure. ``` When I modify the exact same file to use an actual path instead of a parameter it works fine: ``` Events: Api2: Type: Api Properties: Method: ANY Path: /good ``` Perhaps the part of the code that is doing the validation won't let this happen because it can't verify that the parameter begins with a "/"?
index: main
api path can t bet set from reference i want to do a cloudformation deploy with an events section that looks like this and the parameter thepath is set to good events type api properties method any path ref thepath this results in failed to create the changeset waiter changesetcreatecomplete failed waiter encountered a terminal failure state status failed reason transform aws serverless failed with internal transform failure when i modify the exact same file to use an actual path instead of a parameter it works fine events type api properties method any path good perhaps the part of the code that is doing the validation won t let this happen because it can t verify that the parameter begins with a
binary_label: 1
Unnamed: 0: 4,898
id: 25,155,359,957
type: IssuesEvent
created_at: 2022-11-10 13:10:40
repo: grafana/k6-docs
repo_url: https://api.github.com/repos/grafana/k6-docs
action: opened
title: Add better information about time series and how to avoid generating too many
labels: Area: OSS Content Type: needsMaintainerHelp
With k6 v0.41.0, some scripts that previously worked fine with millions of different time series will no longer be OK (https://github.com/grafana/k6/issues/2765). We should add more warnings about that in the [Metrics](https://k6.io/docs/using-k6/metrics/) and [Tags and Groups](https://k6.io/docs/using-k6/tags-and-groups/) sections. Maybe polish a little bit the [URL grouping section](https://k6.io/docs/using-k6/http-requests/#url-grouping) and link it from more places, or move it to its own page? :thinking:
label: True
Add better information about time series and how to avoid generating too many - With k6 v0.41.0, some scripts that previously worked fine with millions of different time series will no longer be OK (https://github.com/grafana/k6/issues/2765). We should add more warnings about that in the [Metrics](https://k6.io/docs/using-k6/metrics/) and [Tags and Groups](https://k6.io/docs/using-k6/tags-and-groups/) sections. Maybe polish a little bit the [URL grouping section](https://k6.io/docs/using-k6/http-requests/#url-grouping) and link it from more places, or move it to its own page? :thinking:
index: main
add better information about time series and how to avoid generating too many with some scripts that previously worked fine with millions of different time series will no longer be ok we should add more warnings about that in the and sections maybe polish a little bit the and link it from more places or move it to its own page thinking
binary_label: 1
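The cardinality problem behind this docs request is mechanical: every distinct combination of metric name and tag values is its own time series, so an un-grouped URL tag multiplies series by the number of unique URLs. A small Python sketch (example URLs are hypothetical):

```python
from itertools import product

def series_count(metric_names, tag_sets):
    # Every unique (metric, tag-value) combination is a separate series.
    return len(set(product(metric_names, tag_sets)))

# One tagged URL per user id: the url tag alone yields 10,000 series per metric.
per_user_urls = tuple(f"https://example.com/users/{i}" for i in range(10_000))
# URL grouping collapses them into a single logical name.
grouped = ("https://example.com/users/${id}",)

print(series_count(("http_req_duration",), per_user_urls))  # 10000
print(series_count(("http_req_duration",), grouped))        # 1
```

This is why the issue suggests linking the URL-grouping section more prominently: grouping is the standard way to keep series counts bounded.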
Unnamed: 0: 147,201
id: 19,501,457,509
type: IssuesEvent
created_at: 2021-12-28 04:29:49
repo: loftwah/casualcoder.io
repo_url: https://api.github.com/repos/loftwah/casualcoder.io
action: closed
title: CVE-2021-37701 (High) detected in tar-6.1.0.tgz - autoclosed
labels: security vulnerability
## CVE-2021-37701 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-6.1.0.tgz</b></p></summary> <p>tar for node</p> <p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-6.1.0.tgz">https://registry.npmjs.org/tar/-/tar-6.1.0.tgz</a></p> <p>Path to dependency file: /wp-content/themes/twentytwenty/package.json</p> <p>Path to vulnerable library: /wp-content/themes/twentytwenty/node_modules/tar/package.json,/wp-content/themes/twentynineteen/node_modules/tar/package.json</p> <p> Dependency Hierarchy: - node-sass-6.0.0.tgz (Root Library) - node-gyp-7.1.2.tgz - :x: **tar-6.1.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/loftwah/casualcoder.io/commit/14ae92bd92b84f61737b3ee22fa2cc32e9fbdf03">14ae92bd92b84f61737b3ee22fa2cc32e9fbdf03</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The npm package "tar" (aka node-tar) before versions 4.4.16, 5.0.8, and 6.1.7 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary stat calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with the same name as the directory, where the symlink and directory names in the archive entry used backslashes as a path separator on posix systems. 
The cache checking logic used both `\` and `/` characters as path separators, however `\` is a valid filename character on posix systems. By first creating a directory, and then replacing that directory with a symlink, it was thus possible to bypass node-tar symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. Additionally, a similar confusion could arise on case-insensitive filesystems. If a tar archive contained a directory at `FOO`, followed by a symbolic link named `foo`, then on case-insensitive file systems, the creation of the symbolic link would remove the directory from the filesystem, but _not_ from the internal directory cache, as it would not be treated as a cache hit. A subsequent file entry within the `FOO` directory would then be placed in the target of the symbolic link, thinking that the directory had already been created. These issues were addressed in releases 4.4.16, 5.0.8 and 6.1.7. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. If you are still using a v3 release we recommend you update to a more recent version of node-tar. If this is not possible, a workaround is available in the referenced GHSA-9r2w-394v-53qc. 
<p>Publish Date: 2021-08-31 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37701>CVE-2021-37701</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-9r2w-394v-53qc">https://github.com/npm/node-tar/security/advisories/GHSA-9r2w-394v-53qc</a></p> <p>Release Date: 2021-08-31</p> <p>Fix Resolution (tar): 6.1.7</p> <p>Direct dependency fix Resolution (node-sass): 6.0.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
label: True
CVE-2021-37701 (High) detected in tar-6.1.0.tgz - autoclosed - ## CVE-2021-37701 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-6.1.0.tgz</b></p></summary> <p>tar for node</p> <p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-6.1.0.tgz">https://registry.npmjs.org/tar/-/tar-6.1.0.tgz</a></p> <p>Path to dependency file: /wp-content/themes/twentytwenty/package.json</p> <p>Path to vulnerable library: /wp-content/themes/twentytwenty/node_modules/tar/package.json,/wp-content/themes/twentynineteen/node_modules/tar/package.json</p> <p> Dependency Hierarchy: - node-sass-6.0.0.tgz (Root Library) - node-gyp-7.1.2.tgz - :x: **tar-6.1.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/loftwah/casualcoder.io/commit/14ae92bd92b84f61737b3ee22fa2cc32e9fbdf03">14ae92bd92b84f61737b3ee22fa2cc32e9fbdf03</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The npm package "tar" (aka node-tar) before versions 4.4.16, 5.0.8, and 6.1.7 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary stat calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with the same name as the directory, where the symlink and directory names in the archive entry used backslashes as a path separator on posix systems. 
The cache checking logic used both `\` and `/` characters as path separators, however `\` is a valid filename character on posix systems. By first creating a directory, and then replacing that directory with a symlink, it was thus possible to bypass node-tar symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. Additionally, a similar confusion could arise on case-insensitive filesystems. If a tar archive contained a directory at `FOO`, followed by a symbolic link named `foo`, then on case-insensitive file systems, the creation of the symbolic link would remove the directory from the filesystem, but _not_ from the internal directory cache, as it would not be treated as a cache hit. A subsequent file entry within the `FOO` directory would then be placed in the target of the symbolic link, thinking that the directory had already been created. These issues were addressed in releases 4.4.16, 5.0.8 and 6.1.7. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. If you are still using a v3 release we recommend you update to a more recent version of node-tar. If this is not possible, a workaround is available in the referenced GHSA-9r2w-394v-53qc. 
<p>Publish Date: 2021-08-31 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37701>CVE-2021-37701</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-9r2w-394v-53qc">https://github.com/npm/node-tar/security/advisories/GHSA-9r2w-394v-53qc</a></p> <p>Release Date: 2021-08-31</p> <p>Fix Resolution (tar): 6.1.7</p> <p>Direct dependency fix Resolution (node-sass): 6.0.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
index: non_main
cve high detected in tar tgz autoclosed cve high severity vulnerability vulnerable library tar tgz tar for node library home page a href path to dependency file wp content themes twentytwenty package json path to vulnerable library wp content themes twentytwenty node modules tar package json wp content themes twentynineteen node modules tar package json dependency hierarchy node sass tgz root library node gyp tgz x tar tgz vulnerable library found in head commit a href found in base branch main vulnerability details the npm package tar aka node tar before versions and has an arbitrary file creation overwrite and arbitrary code execution vulnerability node tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted this is in part achieved by ensuring that extracted directories are not symlinks additionally in order to prevent unnecessary stat calls to determine whether a given path is a directory paths are cached when directories are created this logic was insufficient when extracting tar files that contained both a directory and a symlink with the same name as the directory where the symlink and directory names in the archive entry used backslashes as a path separator on posix systems the cache checking logic used both and characters as path separators however is a valid filename character on posix systems by first creating a directory and then replacing that directory with a symlink it was thus possible to bypass node tar symlink checks on directories essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location thus allowing arbitrary file creation and overwrite additionally a similar confusion could arise on case insensitive filesystems if a tar archive contained a directory at foo followed by a symbolic link named foo then on case insensitive file systems the creation of the symbolic link would remove the directory from the filesystem but not from the internal directory cache as it would not be treated as a cache hit a subsequent file entry within the foo directory would then be placed in the target of the symbolic link thinking that the directory had already been created these issues were addressed in releases and the branch of node tar has been deprecated and did not receive patches for these issues if you are still using a release we recommend you update to a more recent version of node tar if this is not possible a workaround is available in the referenced ghsa publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tar direct dependency fix resolution node sass step up your open source security game with whitesource
0
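The path-separator confusion described in the tar record above can be sketched as follows. This is illustrative JavaScript only, not node-tar's actual code; `cacheKey` and `dirCache` are invented names. The point is that splitting on both `/` and `\` conflates distinct POSIX paths, because on POSIX `\` is an ordinary filename character:

```javascript
// Illustrative sketch of the flawed cache-key logic described in the
// advisory -- NOT node-tar's real implementation.
const SEPARATORS = /[/\\]/; // treats both '/' and '\' as separators

function cacheKey(entryPath) {
  // Normalizes an archive entry path into a cache key.
  return entryPath.split(SEPARATORS).join("/");
}

const dirCache = new Set();
dirCache.add(cacheKey("a/b")); // a real directory a/b was created

// On POSIX, "a\\b" is ONE file literally named `a\b`, yet the flawed
// key makes the cache believe directory a/b already exists, so the
// symlink/directory checks keyed on the cache can be bypassed:
const collides = dirCache.has(cacheKey("a\\b")); // true
```

The fix direction described in the advisory is to treat only the platform's real separator as a separator when building such keys.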
4,680
24,184,086,168
IssuesEvent
2022-09-23 11:44:05
beyarkay/eskom-calendar
https://api.github.com/repos/beyarkay/eskom-calendar
opened
Schedule missing for Nelson Mandela Bay
bug waiting-on-maintainer missing-area-schedule
Schedules available here: https://www.nelsonmandelabay.gov.za/page/loadshedding Schedule found to be missing by https://github.com/beyarkay/eskom-calendar/issues/78#issuecomment-1256098341 #### Technical details Nelson mandela bay looks to be less well formatted than other areas, so adding schedules might take a while to get accurate.
True
Schedule missing for Nelson Mandela Bay - Schedules available here: https://www.nelsonmandelabay.gov.za/page/loadshedding Schedule found to be missing by https://github.com/beyarkay/eskom-calendar/issues/78#issuecomment-1256098341 #### Technical details Nelson mandela bay looks to be less well formatted than other areas, so adding schedules might take a while to get accurate.
main
schedule missing for nelson mandela bay schedules available here schedule found to be missing by technical details nelson mandela bay looks to be less well formatted than other areas so adding schedules might take a while to get accurate
1
119,077
12,014,037,169
IssuesEvent
2020-04-10 10:20:05
GTFB/Altrp
https://api.github.com/repos/GTFB/Altrp
closed
Development of the editor window
documentation
- [x] Creating a new page for previewing a template to be embedded in an iframe - [x] composing template data when dragging a widget
1.0
Development of the editor window - - [x] Creating a new page for previewing a template to be embedded in an iframe - [x] composing template data when dragging a widget
non_main
development of the editor window creating a new page for previewing a template to be embedded in an iframe composing template data when dragging a widget
0
518
3,911,472,251
IssuesEvent
2016-04-20 06:05:21
zendframework/zend-ldap
https://api.github.com/repos/zendframework/zend-ldap
closed
incorrect default value for 'port' option
awaiting maintainer response bug
Hi, according to the documentation the [Server Options](http://framework.zend.com/manual/current/en/modules/zend.authentication.adapter.ldap.html#server-options) 'port' parameter say: >The port on which the LDAP server is listening. If useSsl is TRUE, the default port value is 636. If useSsl is FALSE, the default port value is 389. with the following code: ``` $options = [ 'host' => 's0.foo.net', // 'port' => '389', 'useStartTls' => 'false', 'accountDomainName' => 'foo.net', 'accountDomainNameShort' => 'FOO', 'accountCanonicalForm' => '4', 'baseDn' => 'CN=user1,DC=foo,DC=net', 'allowEmptyPassword' => false ] $ldap = new Ldap($options); $ldap->bind('myuser','mypwd') ``` i get the exception: `Failed to connect to LDAP server: s0.foo.net:0` ``` exception 'Zend\Ldap\Exception\LdapException' with message 'Failed to connect to LDAP server: s0.foo.net:0' in /home/dockerdev/app/vendor/zendframework/zend-ldap/src/Ldap.php:748 Stack trace: #0 /home/dockerdev/app/vendor/zendframework/zend-ldap/src/Ldap.php(812): Zend\Ldap\Ldap->connect() #1 /home/dockerdev/app/module/DipvvfModule/src/DipvvfModule/Check/LdapServiceCheck.php(57): Zend\Ldap\Ldap->bind('intranet@no.dip...', 'Intr4n3t101177!') #2 /home/dockerdev/app/vendor/zendframework/zenddiagnostics/src/ZendDiagnostics/Runner/Runner.php(123): DipvvfModule\Check\LdapServiceCheck->check() #3 /home/dockerdev/app/vendor/zendframework/zftool/src/ZFTool/Diagnostics/Runner.php(43): ZendDiagnostics\Runner\Runner->run(NULL) #4 /home/dockerdev/app/vendor/zendframework/zftool/src/ZFTool/Controller/DiagnosticsController.php(234): ZFTool\Diagnostics\Runner->run() #5 /home/dockerdev/app/vendor/zendframework/zend-mvc/src/Controller/AbstractActionController.php(82): ZFTool\Controller\DiagnosticsController->runAction() #6 [internal function]: Zend\Mvc\Controller\AbstractActionController->onDispatch(Object(Zend\Mvc\MvcEvent)) #7 /home/dockerdev/app/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func(Array, 
Object(Zend\Mvc\MvcEvent)) #8 /home/dockerdev/app/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\EventManager\EventManager->triggerListeners('dispatch', Object(Zend\Mvc\MvcEvent), Object(Closure)) #9 /home/dockerdev/app/vendor/zendframework/zend-mvc/src/Controller/AbstractController.php(118): Zend\EventManager\EventManager->trigger('dispatch', Object(Zend\Mvc\MvcEvent), Object(Closure)) #10 /home/dockerdev/app/vendor/zendframework/zend-mvc/src/DispatchListener.php(93): Zend\Mvc\Controller\AbstractController->dispatch(Object(Zend\Console\Request), Object(Zend\Console\Response)) #11 [internal function]: Zend\Mvc\DispatchListener->onDispatch(Object(Zend\Mvc\MvcEvent)) #12 /home/dockerdev/app/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func(Array, Object(Zend\Mvc\MvcEvent)) #13 /home/dockerdev/app/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\EventManager\EventManager->triggerListeners('dispatch', Object(Zend\Mvc\MvcEvent), Object(Closure)) #14 /home/dockerdev/app/vendor/zendframework/zend-mvc/src/Application.php(314): Zend\EventManager\EventManager->trigger('dispatch', Object(Zend\Mvc\MvcEvent), Object(Closure)) #15 /home/dockerdev/app/tools/zf.php(53): Zend\Mvc\Application->run() #16 {main} ``` if we uncomment the 'port' option everything work fine.
True
incorrect default value for 'port' option - Hi, according to the documentation the [Server Options](http://framework.zend.com/manual/current/en/modules/zend.authentication.adapter.ldap.html#server-options) 'port' parameter say: >The port on which the LDAP server is listening. If useSsl is TRUE, the default port value is 636. If useSsl is FALSE, the default port value is 389. with the following code: ``` $options = [ 'host' => 's0.foo.net', // 'port' => '389', 'useStartTls' => 'false', 'accountDomainName' => 'foo.net', 'accountDomainNameShort' => 'FOO', 'accountCanonicalForm' => '4', 'baseDn' => 'CN=user1,DC=foo,DC=net', 'allowEmptyPassword' => false ] $ldap = new Ldap($options); $ldap->bind('myuser','mypwd') ``` i get the exception: `Failed to connect to LDAP server: s0.foo.net:0` ``` exception 'Zend\Ldap\Exception\LdapException' with message 'Failed to connect to LDAP server: s0.foo.net:0' in /home/dockerdev/app/vendor/zendframework/zend-ldap/src/Ldap.php:748 Stack trace: #0 /home/dockerdev/app/vendor/zendframework/zend-ldap/src/Ldap.php(812): Zend\Ldap\Ldap->connect() #1 /home/dockerdev/app/module/DipvvfModule/src/DipvvfModule/Check/LdapServiceCheck.php(57): Zend\Ldap\Ldap->bind('intranet@no.dip...', 'Intr4n3t101177!') #2 /home/dockerdev/app/vendor/zendframework/zenddiagnostics/src/ZendDiagnostics/Runner/Runner.php(123): DipvvfModule\Check\LdapServiceCheck->check() #3 /home/dockerdev/app/vendor/zendframework/zftool/src/ZFTool/Diagnostics/Runner.php(43): ZendDiagnostics\Runner\Runner->run(NULL) #4 /home/dockerdev/app/vendor/zendframework/zftool/src/ZFTool/Controller/DiagnosticsController.php(234): ZFTool\Diagnostics\Runner->run() #5 /home/dockerdev/app/vendor/zendframework/zend-mvc/src/Controller/AbstractActionController.php(82): ZFTool\Controller\DiagnosticsController->runAction() #6 [internal function]: Zend\Mvc\Controller\AbstractActionController->onDispatch(Object(Zend\Mvc\MvcEvent)) #7 
/home/dockerdev/app/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func(Array, Object(Zend\Mvc\MvcEvent)) #8 /home/dockerdev/app/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\EventManager\EventManager->triggerListeners('dispatch', Object(Zend\Mvc\MvcEvent), Object(Closure)) #9 /home/dockerdev/app/vendor/zendframework/zend-mvc/src/Controller/AbstractController.php(118): Zend\EventManager\EventManager->trigger('dispatch', Object(Zend\Mvc\MvcEvent), Object(Closure)) #10 /home/dockerdev/app/vendor/zendframework/zend-mvc/src/DispatchListener.php(93): Zend\Mvc\Controller\AbstractController->dispatch(Object(Zend\Console\Request), Object(Zend\Console\Response)) #11 [internal function]: Zend\Mvc\DispatchListener->onDispatch(Object(Zend\Mvc\MvcEvent)) #12 /home/dockerdev/app/vendor/zendframework/zend-eventmanager/src/EventManager.php(444): call_user_func(Array, Object(Zend\Mvc\MvcEvent)) #13 /home/dockerdev/app/vendor/zendframework/zend-eventmanager/src/EventManager.php(205): Zend\EventManager\EventManager->triggerListeners('dispatch', Object(Zend\Mvc\MvcEvent), Object(Closure)) #14 /home/dockerdev/app/vendor/zendframework/zend-mvc/src/Application.php(314): Zend\EventManager\EventManager->trigger('dispatch', Object(Zend\Mvc\MvcEvent), Object(Closure)) #15 /home/dockerdev/app/tools/zf.php(53): Zend\Mvc\Application->run() #16 {main} ``` if we uncomment the 'port' option everything work fine.
main
incorrect default value for port option hi according to the documentation the port parameter say the port on which the ldap server is listening if usessl is true the default port value is if usessl is false the default port value is with the following code options host foo net port usestarttls false accountdomainname foo net accountdomainnameshort foo accountcanonicalform basedn cn dc foo dc net allowemptypassword false ldap new ldap options ldap bind myuser mypwd i get the exception failed to connect to ldap server foo net exception zend ldap exception ldapexception with message failed to connect to ldap server foo net in home dockerdev app vendor zendframework zend ldap src ldap php stack trace home dockerdev app vendor zendframework zend ldap src ldap php zend ldap ldap connect home dockerdev app module dipvvfmodule src dipvvfmodule check ldapservicecheck php zend ldap ldap bind intranet no dip home dockerdev app vendor zendframework zenddiagnostics src zenddiagnostics runner runner php dipvvfmodule check ldapservicecheck check home dockerdev app vendor zendframework zftool src zftool diagnostics runner php zenddiagnostics runner runner run null home dockerdev app vendor zendframework zftool src zftool controller diagnosticscontroller php zftool diagnostics runner run home dockerdev app vendor zendframework zend mvc src controller abstractactioncontroller php zftool controller diagnosticscontroller runaction zend mvc controller abstractactioncontroller ondispatch object zend mvc mvcevent home dockerdev app vendor zendframework zend eventmanager src eventmanager php call user func array object zend mvc mvcevent home dockerdev app vendor zendframework zend eventmanager src eventmanager php zend eventmanager eventmanager triggerlisteners dispatch object zend mvc mvcevent object closure home dockerdev app vendor zendframework zend mvc src controller abstractcontroller php zend eventmanager eventmanager trigger dispatch object zend mvc mvcevent object closure home 
dockerdev app vendor zendframework zend mvc src dispatchlistener php zend mvc controller abstractcontroller dispatch object zend console request object zend console response zend mvc dispatchlistener ondispatch object zend mvc mvcevent home dockerdev app vendor zendframework zend eventmanager src eventmanager php call user func array object zend mvc mvcevent home dockerdev app vendor zendframework zend eventmanager src eventmanager php zend eventmanager eventmanager triggerlisteners dispatch object zend mvc mvcevent object closure home dockerdev app vendor zendframework zend mvc src application php zend eventmanager eventmanager trigger dispatch object zend mvc mvcevent object closure home dockerdev app tools zf php zend mvc application run main if we uncomment the port option everything work fine
1
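The defaulting behavior the zend-ldap documentation describes in the record above (636 when `useSsl` is true, 389 otherwise) can be sketched like this. Hypothetical JavaScript for illustration; `resolvePort` is an invented name and this is not the library's actual code:

```javascript
// Illustrative sketch of the documented port defaulting, not
// zend-ldap's real implementation.
function resolvePort(options) {
  // An explicitly configured port always wins.
  if (options.port !== undefined && options.port !== null) {
    return Number(options.port);
  }
  // Documented defaults: 636 for LDAPS, 389 for plain LDAP.
  return options.useSsl ? 636 : 389;
}
```

With this defaulting, `resolvePort({ host: "s0.foo.net" })` yields 389 rather than the 0 seen in the reported `s0.foo.net:0` connection error.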
1,602
6,572,385,913
IssuesEvent
2017-09-11 01:54:57
ansible/ansible-modules-extras
https://api.github.com/repos/ansible/ansible-modules-extras
closed
There are now three ovirt modules. Is there a path forward?
affects_2.3 bug_report cloud waiting_on_maintainer
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME cloud/misc/ovirt.py cloud/misc/rhevm.py (new in 2.2) cloud/ovirt/ovirt_vms.py (new in 2.2) ##### SUMMARY There are now three modules for the same thing in extras. Ideally, there's some sort of path forward to a single module. CC @TimothyVandenbrande @machacekondra @vincentvdk
True
There are now three ovirt modules. Is there a path forward? - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME cloud/misc/ovirt.py cloud/misc/rhevm.py (new in 2.2) cloud/ovirt/ovirt_vms.py (new in 2.2) ##### SUMMARY There are now three modules for the same thing in extras. Ideally, there's some sort of path forward to a single module. CC @TimothyVandenbrande @machacekondra @vincentvdk
main
there are now three ovirt modules is there a path forward issue type bug report component name cloud misc ovirt py cloud misc rhevm py new in cloud ovirt ovirt vms py new in summary there are now three modules for the same thing in extras ideally there s some sort of path forward to a single module cc timothyvandenbrande machacekondra vincentvdk
1
3,710
15,210,475,553
IssuesEvent
2021-02-17 07:31:45
skybasedb/skybase
https://api.github.com/repos/skybasedb/skybase
closed
Security: Merge v0.5-hotfix.1 into next
A-independent C-security D-server P-high S-waiting-on-maintainers
**Introduction** As we notified via [this tweet](https://twitter.com/onskybase/status/1361193440558596097), a security vulnerability has been identified in v0.5.0 and a corresponding patch has been released. **Patch link**: This patch can be downloaded by users for their platform from this link: https://dl.skybasedb.com/v0.5-hotfix.1/. **Affected versions** This vulnerability only affects v0.5.0 of the database server and all such deployments are immediately requested to deploy this hotfix. **Pending actions** We will be releasing a full length security and versioning policy soon. At the same time, we'll be releasing the security advisory and disclose the bug on the embargo date of 17 Feb 2020, 0630 UTC. This issue exists to track the status of the successful merging of this hotfix into the primary branch.
True
Security: Merge v0.5-hotfix.1 into next - **Introduction** As we notified via [this tweet](https://twitter.com/onskybase/status/1361193440558596097), a security vulnerability has been identified in v0.5.0 and a corresponding patch has been released. **Patch link**: This patch can be downloaded by users for their platform from this link: https://dl.skybasedb.com/v0.5-hotfix.1/. **Affected versions** This vulnerability only affects v0.5.0 of the database server and all such deployments are immediately requested to deploy this hotfix. **Pending actions** We will be releasing a full length security and versioning policy soon. At the same time, we'll be releasing the security advisory and disclose the bug on the embargo date of 17 Feb 2020, 0630 UTC. This issue exists to track the status of the successful merging of this hotfix into the primary branch.
main
security merge hotfix into next introduction as we notified via a security vulnerability has been identified in and a corresponding patch has been released patch link this patch can be downloaded by users for their platform from this link affected versions this vulnerability only affects of the database server and all such deployments are immediately requested to deploy this hotfix pending actions we will be releasing a full length security and versioning policy soon at the same time we ll be releasing the security advisory and disclose the bug on the embargo date of feb utc this issue exists to track the status of the successful merging of this hotfix into the primary branch
1
4,883
25,046,635,289
IssuesEvent
2022-11-05 10:35:08
tgstation/tgstation
https://api.github.com/repos/tgstation/tgstation
closed
gas_transfer_coefficient does nothing
Maintainability/Hinders improvements Oversight
[Round ID]: # (If you discovered this issue from playing tgstation hosted servers:) [Round ID]: # (**INCLUDE THE ROUND ID**) [Round ID]: # (It can be found in the Status panel or retrieved from https://atlantaned.space/statbus/round.php ! The round id let's us look up valuable information and logs for the round the bug happened.) [Testmerges]: # (If you believe the issue to be caused by a test merge [OOC tab -> Show Server Revision], report it in the pull request's comment section instead.) [Reproduction]: # (Explain your issue in detail, including the steps to reproduce it. Issues without proper reproduction steps or explanation are open to being ignored/closed by maintainers.) [For Admins]: # (Oddities induced by var-edits and other admin tools are not necessarily bugs. Verify that your issues occur under regular circumstances before reporting them.) This variable does absolutely nothing, either we remove it or give a function, I'm more interested in the latter.
True
gas_transfer_coefficient does nothing - [Round ID]: # (If you discovered this issue from playing tgstation hosted servers:) [Round ID]: # (**INCLUDE THE ROUND ID**) [Round ID]: # (It can be found in the Status panel or retrieved from https://atlantaned.space/statbus/round.php ! The round id let's us look up valuable information and logs for the round the bug happened.) [Testmerges]: # (If you believe the issue to be caused by a test merge [OOC tab -> Show Server Revision], report it in the pull request's comment section instead.) [Reproduction]: # (Explain your issue in detail, including the steps to reproduce it. Issues without proper reproduction steps or explanation are open to being ignored/closed by maintainers.) [For Admins]: # (Oddities induced by var-edits and other admin tools are not necessarily bugs. Verify that your issues occur under regular circumstances before reporting them.) This variable does absolutely nothing, either we remove it or give a function, I'm more interested in the latter.
main
gas transfer coefficient does nothing if you discovered this issue from playing tgstation hosted servers include the round id it can be found in the status panel or retrieved from the round id let s us look up valuable information and logs for the round the bug happened if you believe the issue to be caused by a test merge report it in the pull request s comment section instead explain your issue in detail including the steps to reproduce it issues without proper reproduction steps or explanation are open to being ignored closed by maintainers oddities induced by var edits and other admin tools are not necessarily bugs verify that your issues occur under regular circumstances before reporting them this variable does absolutely nothing either we remove it or give a function i m more interested in the latter
1
661
4,179,152,437
IssuesEvent
2016-06-22 09:42:28
Particular/NServiceBus.SqlServer
https://api.github.com/repos/Particular/NServiceBus.SqlServer
opened
Create a AWS lab environment for SQL Server testing
State: In Progress - Maintainer Prio
* 3 VMs, each with SQL Server instance * DTC configured between all 3 VMS
True
Create a AWS lab environment for SQL Server testing - * 3 VMs, each with SQL Server instance * DTC configured between all 3 VMS
main
create a aws lab environment for sql server testing vms each with sql server instance dtc configured between all vms
1
2,468
8,639,903,358
IssuesEvent
2018-11-23 22:33:07
F5OEO/rpitx
https://api.github.com/repos/F5OEO/rpitx
closed
cover dcf77
V1 related (not maintained)
Hi, it is possible extend frequency to 77 Khz ? If yes, I like to try something like https://github.com/CodingGhost/DCF77-Transmitter/blob/master/DCF77_Protocoll_.ino With some hint I can do the job.
True
cover dcf77 - Hi, it is possible extend frequency to 77 Khz ? If yes, I like to try something like https://github.com/CodingGhost/DCF77-Transmitter/blob/master/DCF77_Protocoll_.ino With some hint I can do the job.
main
cover hi it is possible extend frequency to khz if yes i like to try something like with some hint i can do the job
1
92,681
3,872,900,225
IssuesEvent
2016-04-11 15:15:54
jcgregorio/httplib2
https://api.github.com/repos/jcgregorio/httplib2
closed
HEAD requests with redirects become GETs with cache
bug imported Priority-Medium
_From [JNR...@gmail.com](https://code.google.com/u/114612663764561724112/) on August 14, 2011 09:34:20_ What steps will reproduce the problem? 1. http = httplib2.Http(cache='SOME_LOCATION') 2. h, r = http.request(' http://bit.ly/qpNbiv' , method='HEAD') 3. len(r) 186754 What is the expected output? What do you see instead? I expect a zero-length result What version of the product are you using? On what operating system? 0.7.1 Please provide any additional information below. This looks to me like it may be bug #123 resurfacing. I actually struggled for a few minutes trying to decide whether to comment there, or open a new bug. I've attached an interpreter session log with debugging enabled. Thanks, James **Attachment:** [test.txt](http://code.google.com/p/httplib2/issues/detail?id=163) _Original issue: http://code.google.com/p/httplib2/issues/detail?id=163_
1.0
HEAD requests with redirects become GETs with cache - _From [JNR...@gmail.com](https://code.google.com/u/114612663764561724112/) on August 14, 2011 09:34:20_ What steps will reproduce the problem? 1. http = httplib2.Http(cache='SOME_LOCATION') 2. h, r = http.request(' http://bit.ly/qpNbiv' , method='HEAD') 3. len(r) 186754 What is the expected output? What do you see instead? I expect a zero-length result What version of the product are you using? On what operating system? 0.7.1 Please provide any additional information below. This looks to me like it may be bug #123 resurfacing. I actually struggled for a few minutes trying to decide whether to comment there, or open a new bug. I've attached an interpreter session log with debugging enabled. Thanks, James **Attachment:** [test.txt](http://code.google.com/p/httplib2/issues/detail?id=163) _Original issue: http://code.google.com/p/httplib2/issues/detail?id=163_
non_main
head requests with redirects become gets with cache from on august what steps will reproduce the problem http http cache some location h r http request method head len r what is the expected output what do you see instead i expect a zero length result what version of the product are you using on what operating system please provide any additional information below this looks to me like it may be bug resurfacing i actually struggled for a few minutes trying to decide whether to comment there or open a new bug i ve attached an interpreter session log with debugging enabled thanks james attachment original issue
0
2,677
9,215,216,810
IssuesEvent
2019-03-11 01:57:56
coq-community/manifesto
https://api.github.com/repos/coq-community/manifesto
opened
Move CertiCrypt to Coq-community
maintainer-wanted move-project
## Move a project to coq-community ## **Project name:** CertiCrypt **Initial author(s):** Gilles Barthe, Benjamin Grégoire, Federico Olmedo, Santiago Zanella-Béguelin, Daniel Hedin, and Sylvain Heraud. **Current URL:** https://github.com/EasyCrypt/certicrypt **Kind:** pure Coq library **License:** CeCILL-B **Description:** A framework that enables construction and verification of code-based proofs about cryptographic systems. **Status:** unmaintained **New maintainer:** looking for a volunteer More about the project: http://certicrypt.gforge.inria.fr
True
Move CertiCrypt to Coq-community - ## Move a project to coq-community ## **Project name:** CertiCrypt **Initial author(s):** Gilles Barthe, Benjamin Grégoire, Federico Olmedo, Santiago Zanella-Béguelin, Daniel Hedin, and Sylvain Heraud. **Current URL:** https://github.com/EasyCrypt/certicrypt **Kind:** pure Coq library **License:** CeCILL-B **Description:** A framework that enables construction and verification of code-based proofs about cryptographic systems. **Status:** unmaintained **New maintainer:** looking for a volunteer More about the project: http://certicrypt.gforge.inria.fr
main
move certicrypt to coq community move a project to coq community project name certicrypt initial author s gilles barthe benjamin grégoire federico olmedo santiago zanella béguelin daniel hedin and sylvain heraud current url kind pure coq library license cecill b description a framework that enables construction and verification of code based proofs about cryptographic systems status unmaintained new maintainer looking for a volunteer more about the project
1
4,423
22,783,138,119
IssuesEvent
2022-07-08 23:03:04
Clever-ISA/Clever-ISA
https://api.github.com/repos/Clever-ISA/Clever-ISA
closed
Include Vector Shuffle instruction
X-vector S-blocked-on-maintainer I-enhancement V-1.0
X-vector should include a shuffle instruction to allow both static and dynamic reordering of vector elements.
True
Include Vector Shuffle instruction - X-vector should include a shuffle instruction to allow both static and dynamic reordering of vector elements.
main
include vector shuffle instruction x vector should include a shuffle instruction to allow both static and dynamic reordering of vector elements
1
3,771
15,835,980,448
IssuesEvent
2021-04-06 18:42:04
MDAnalysis/mdanalysis
https://api.github.com/repos/MDAnalysis/mdanalysis
closed
discontinue Python 2 support
maintainability upstream
Python 2 reaches end of life on **1 January, 2020**, according to [PEP 373](https://www.python.org/dev/peps/pep-0373/) and https://github.com/python/devguide/pull/344 based on https://mail.python.org/pipermail/python-dev/2018-March/152348.html. Many of our dependencies (notably numpy, see [Plan for dropping Python 2.7 support](https://docs.scipy.org/doc/numpy-1.14.0/neps/dropping-python2.7-proposal.html)) have ceased Python 2.7 support in new releases or will also drop Python 2.7 in 2020. I know that science is rolling slowly and surely some scientific projects will continue with Python 2.7 beyond 2020. MDAnalysis has been supporting Python 2 and Python 3 now for a while. However, given how precious developer time is, I think **we also need to decide that we will stop caring for 2.7 after the official Python 2.7 drop date.** We need to decide how to do this. I am opening this issue with the intent that it gets edited into an actionable list of items.
True
discontinue Python 2 support - Python 2 reaches end of life on **1 January, 2020**, according to [PEP 373](https://www.python.org/dev/peps/pep-0373/) and https://github.com/python/devguide/pull/344 based on https://mail.python.org/pipermail/python-dev/2018-March/152348.html. Many of our dependencies (notably numpy, see [Plan for dropping Python 2.7 support](https://docs.scipy.org/doc/numpy-1.14.0/neps/dropping-python2.7-proposal.html)) have ceased Python 2.7 support in new releases or will also drop Python 2.7 in 2020. I know that science is rolling slowly and surely some scientific projects will continue with Python 2.7 beyond 2020. MDAnalysis has been supporting Python 2 and Python 3 now for a while. However, given how precious developer time is, I think **we also need to decide that we will stop caring for 2.7 after the official Python 2.7 drop date.** We need to decide how to do this. I am opening this issue with the intent that it gets edited into an actionable list of items.
main
discontinue python support python reaches end of life on january according to and based on many of our dependencies notably numpy see have ceased python support in new releases or will also drop python in i know that science is rolling slowly and surely some scientific projects will continue with python beyond mdanalysis has been supporting python and python now for a while however given how precious developer time is i think we also need to decide that we will stop caring for after the official python drop date we need to decide how to do this i am opening this issue with the intent that it gets edited into an actionable list of items
1
28,623
4,424,492,026
IssuesEvent
2016-08-16 12:45:54
centreon/centreon
https://api.github.com/repos/centreon/centreon
closed
[Config Broker] Command file path is not display
BetaTest Kind/Bug Status/Implemented
--------------------------------------------------- BUG REPORT INFORMATION --------------------------------------------------- * Centreon web 2.7.x/2.8.x **Steps to reproduce the issue:** 1. Define command file in "Administration > Parameters > Monitoring" and set value for "Centreon Broker socket path" 2. Edit a Centreon Broker configuration, in "General" tab, command_file field is empty.
1.0
[Config Broker] Command file path is not display - --------------------------------------------------- BUG REPORT INFORMATION --------------------------------------------------- * Centreon web 2.7.x/2.8.x **Steps to reproduce the issue:** 1. Define command file in "Administration > Parameters > Monitoring" and set value for "Centreon Broker socket path" 2. Edit a Centreon Broker configuration, in "General" tab, command_file field is empty.
non_main
command file path is not display bug report information centreon web x x steps to reproduce the issue define command file in administration parameters monitoring and set value for centreon broker socket path edit a centreon broker configuration in general tab command file field is empty
0
30,358
7,191,215,661
IssuesEvent
2018-02-02 20:07:51
Serrin/Celestra
https://api.github.com/repos/Serrin/Celestra
closed
Add function isElement in v1.18.0
closed - done or fixed code documentation type - enhancement
``` function isElement (v) { return typeof v === "object" && v.nodeType === 1; } ```
1.0
Add function isElement in v1.18.0 - ``` function isElement (v) { return typeof v === "object" && v.nodeType === 1; } ```
non_main
add function iselement in function iselement v return typeof v object v nodetype
0
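The `isElement` snippet in the record above has a subtle gap: `typeof null === "object"` in JavaScript, so passing `null` reaches `v.nodeType` and throws a TypeError. A hardened variant (illustrative only, not the library's actual code) guards against that first:

```javascript
// Hardened variant of the isElement snippet from the issue above.
// typeof null === "object", so the original would throw on null;
// the null check must short-circuit before touching v.nodeType.
function isElement(v) {
  return v !== null && typeof v === "object" && v.nodeType === 1;
}
```

In browser code, `v instanceof Element` is a common alternative, though the duck-typed `nodeType === 1` check also works across iframes, where `instanceof` can fail.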
4,244
21,040,677,852
IssuesEvent
2022-03-31 12:03:12
aws/serverless-application-model
https://api.github.com/repos/aws/serverless-application-model
closed
Custom Event Bus with Schedule Event Type
type/feature stage/pm-review maintainer/need-followup
<!-- Before reporting a new issue, make sure we don't have any duplicates already open or closed by searching the issues list. If there is a duplicate, re-open or add a comment to the existing issue instead of creating a new one. If you are reporting a bug, make sure to include relevant information asked below to help with debugging. ## GENERAL HELP QUESTIONS ## Github Issues is for bug reports and feature requests. If you have general support questions, the following locations are a good place: - Post a question in StackOverflow with "aws-sam" tag --> **Description:** Schedule event type does not support custom event bus https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-property-function-schedule.html **Steps to reproduce the issue:** 1. Tried to set EventBusName property with Schedule event and the transform failed to deploy **Observed result:** Deploy fails with 'property EventBusName not defined for resource of type Schedule' **Expected result:** Expected to be able to specify custom bus name with Schedule event (similar to the EventBridge rule type)
True
Custom Event Bus with Schedule Event Type - <!-- Before reporting a new issue, make sure we don't have any duplicates already open or closed by searching the issues list. If there is a duplicate, re-open or add a comment to the existing issue instead of creating a new one. If you are reporting a bug, make sure to include relevant information asked below to help with debugging. ## GENERAL HELP QUESTIONS ## Github Issues is for bug reports and feature requests. If you have general support questions, the following locations are a good place: - Post a question in StackOverflow with "aws-sam" tag --> **Description:** Schedule event type does not support custom event bus https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-property-function-schedule.html **Steps to reproduce the issue:** 1. Tried to set EventBusName property with Schedule event and the transform failed to deploy **Observed result:** Deploy fails with 'property EventBusName not defined for resource of type Schedule' **Expected result:** Expected to be able to specify custom bus name with Schedule event (similar to the EventBridge rule type)
main
custom event bus with schedule event type before reporting a new issue make sure we don t have any duplicates already open or closed by searching the issues list if there is a duplicate re open or add a comment to the existing issue instead of creating a new one if you are reporting a bug make sure to include relevant information asked below to help with debugging general help questions github issues is for bug reports and feature requests if you have general support questions the following locations are a good place post a question in stackoverflow with aws sam tag description schedule event type does not support custom event bus steps to reproduce the issue tried to set eventbusname property with schedule event and the transform failed to deploy observed result deploy fails with property eventbusname not defined for resource of type schedule expected result expected to be able to specify custom bus name with schedule event similar to the eventbridge rule type
1
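The incompatibility reported above (Schedule events rejecting `EventBusName`) can be sketched as a small property check. The property sets below are an illustrative subset, not SAM's actual schema:

```python
# Hypothetical validator mirroring the report above: SAM's Schedule event
# type does not accept EventBusName, while EventBridgeRule does.
# These property sets are illustrative, not the full SAM specification.
SUPPORTED_PROPERTIES = {
    "Schedule": {"Schedule", "Input", "Enabled", "Name", "Description"},
    "EventBridgeRule": {"Pattern", "Input", "EventBusName", "DeadLetterConfig"},
}

def validate_event(event_type, properties):
    """Return the properties not defined for the given event type."""
    allowed = SUPPORTED_PROPERTIES.get(event_type, set())
    return sorted(p for p in properties if p not in allowed)
```

With a check like this, `validate_event("Schedule", {"EventBusName": "custom"})` flags the property before deployment, which is exactly the failure the transform reports at deploy time.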
5,429
27,239,589,497
IssuesEvent
2023-02-21 19:09:29
mozilla/foundation.mozilla.org
https://api.github.com/repos/mozilla/foundation.mozilla.org
closed
Upgrade Wagtail to 3.0
engineering maintain
## Description Goal of this ticket is to upgrade Wagtail to version 3.0. This ticket is a follow up to the Spike #8534. Before we can upgrade Wagtail, we need to address the following tickets: - [x] Upgrade `wagtail-localize-git` to 0.13 (#9675) - [x] Upgrade `wagtail-inventory` to 1.6 (#9676) - [x] Remove Airtable integration (#9147) During upgrade we need to address the following: - Changes to module paths. - Basically all imports to core wagtail modules need to be updated. - There is a management command provided to update all import paths. - Removal of special-purpose field panel types - All special field panels (like `RichTextFieldPanel` , `StreamFieldPanel`, etc.) can be replaced with the basic `FieldPanel`. The detection now happens automatically. - Use `use_json_field` argument for `StreamField` - All uses of `StreamField`should be updated to include the argument `use_json_field=True`. - Running the migrations might take a while, because of some restructuring internally. We will see this on staging, since staging has quite similar data as production. See also: https://docs.wagtail.org/en/stable/releases/3.0.html ## Dev notes - [x] Create upgrade branch from main - [x] Check your project’s console output for any deprecation warnings, and fix them where necessary `python -Wa manage.py check` - [x] Check the new version’s release notes - [x] Check the compatible Django / Python versions [table](https://docs.wagtail.io/en/stable/releases/upgrading.html#compatible-django-python-versions), for any dependencies that need upgrading first; - [x] Upgrade supporting requirements (Python, Django) if necessary - [x] Upgrade Wagtail - [x] Make new migration (might result in none). 
- [x] Migrate database changes (locally) - [x] Implement needed changes from upgrade considerations (see above) - [x] Perform testing - [x] Run test suites - [ ] Smoke test site / testing journeys (manually on the site) - [ ] Smoke test admin (Click around in the admin to see if anything is broken) - [ ] Check for new deprecations `python -Wa manage.py check` and fix if necessary ## Acceptance criteria - [ ] Wagtail is upgraded to version 3.0 - [ ] ticket for Wagtail 4.1 upgrade is created
True
Upgrade Wagtail to 3.0 - ## Description Goal of this ticket is to upgrade Wagtail to version 3.0. This ticket is a follow up to the Spike #8534. Before we can upgrade Wagtail, we need to address the following tickets: - [x] Upgrade `wagtail-localize-git` to 0.13 (#9675) - [x] Upgrade `wagtail-inventory` to 1.6 (#9676) - [x] Remove Airtable integration (#9147) During upgrade we need to address the following: - Changes to module paths. - Basically all imports to core wagtail modules need to be updated. - There is a management command provided to update all import paths. - Removal of special-purpose field panel types - All special field panels (like `RichTextFieldPanel` , `StreamFieldPanel`, etc.) can be replaced with the basic `FieldPanel`. The detection now happens automatically. - Use `use_json_field` argument for `StreamField` - All uses of `StreamField`should be updated to include the argument `use_json_field=True`. - Running the migrations might take a while, because of some restructuring internally. We will see this on staging, since staging has quite similar data as production. See also: https://docs.wagtail.org/en/stable/releases/3.0.html ## Dev notes - [x] Create upgrade branch from main - [x] Check your project’s console output for any deprecation warnings, and fix them where necessary `python -Wa manage.py check` - [x] Check the new version’s release notes - [x] Check the compatible Django / Python versions [table](https://docs.wagtail.io/en/stable/releases/upgrading.html#compatible-django-python-versions), for any dependencies that need upgrading first; - [x] Upgrade supporting requirements (Python, Django) if necessary - [x] Upgrade Wagtail - [x] Make new migration (might result in none). 
- [x] Migrate database changes (locally) - [x] Implement needed changes from upgrade considerations (see above) - [x] Perform testing - [x] Run test suites - [ ] Smoke test site / testing journeys (manually on the site) - [ ] Smoke test admin (Click around in the admin to see if anything is broken) - [ ] Check for new deprecations `python -Wa manage.py check` and fix if necessary ## Acceptance criteria - [ ] Wagtail is upgraded to version 3.0 - [ ] ticket for Wagtail 4.1 upgrade is created
main
upgrade wagtail to description goal of this ticket is to upgrade wagtail to version this ticket is a follow up to the spike before we can upgrade wagtail we need to address the following tickets upgrade wagtail localize git to upgrade wagtail inventory to remove airtable integration during upgrade we need to address the following changes to module paths basically all imports to core wagtail modules need to be updated there is a management command provided to update all import paths removal of special purpose field panel types all special field panels like richtextfieldpanel streamfieldpanel etc can be replaced with the basic fieldpanel the detection now happens automatically use use json field argument for streamfield all uses of  streamfield should be updated to include the argument  use json field true running the migrations might take a while because of some restructuring internally we will see this on staging since staging has quite similar data as production see also dev notes create upgrade branch from main check your project’s console output for any deprecation warnings and fix them where necessary python wa manage py check check the new version’s release notes check the compatible django python versions for any dependencies that need upgrading first upgrade supporting requirements python django if necessary upgrade wagtail make new migration might result in none migrate database changes locally implement needed changes from upgrade considerations see above perform testing run test suites smoke test site testing journeys manually on the site smoke test admin click around in the admin to see if anything is broken check for new deprecations python wa manage py check and fix if necessary acceptance criteria wagtail is upgraded to version ticket for wagtail upgrade is created
1
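The module-path change listed in the record above is mechanical; Wagtail ships a management command for it, and the same idea can be sketched as a plain regex rewrite (illustrative only — the real command covers more paths than `wagtail.core`):

```python
# Illustrative codemod for the Wagtail 3.0 upgrade note above:
# modules under `wagtail.core` moved to the top-level `wagtail` package.
import re

def update_wagtail_imports(source):
    """Rewrite `wagtail.core` import paths to the Wagtail 3.0 layout."""
    return re.sub(r"\bwagtail\.core\b", "wagtail", source)
```

For example, `from wagtail.core.fields import StreamField` becomes `from wagtail.fields import StreamField`, after which each `StreamField` definition also needs `use_json_field=True` added.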
2,676
9,215,137,036
IssuesEvent
2019-03-11 01:28:17
coq-community/manifesto
https://api.github.com/repos/coq-community/manifesto
opened
Move hybrid to Coq-community
maintainer-wanted move-project
## Move a project to coq-community ## **Project name:** hybrid **Initial author(s):** Herman Geuvers, Dan Synek, Adam Koprowski, and Eelis van der Weegen **Current URL:** https://github.com/Eelis/hybrid **Kind:** Coq library and extractable program **License:** unknown **Description:** A prover for hybrid systems, formalized using CoRN, MathClasses, and CoLoR. **Status:** unmaintained since 2012 **New maintainer:** looking for a volunteer
True
Move hybrid to Coq-community - ## Move a project to coq-community ## **Project name:** hybrid **Initial author(s):** Herman Geuvers, Dan Synek, Adam Koprowski, and Eelis van der Weegen **Current URL:** https://github.com/Eelis/hybrid **Kind:** Coq library and extractable program **License:** unknown **Description:** A prover for hybrid systems, formalized using CoRN, MathClasses, and CoLoR. **Status:** unmaintained since 2012 **New maintainer:** looking for a volunteer
main
move hybrid to coq community move a project to coq community project name hybrid initial author s herman geuvers dan synek adam koprowski and eelis van der weegen current url kind coq library and extractable program license unknown description a prover for hybrid systems formalized using corn mathclasses and color status unmaintained since new maintainer looking for a volunteer
1
98,882
30,211,229,700
IssuesEvent
2023-07-05 12:56:07
Crocoblock/suggestions
https://api.github.com/repos/Crocoblock/suggestions
closed
Exclude by IDs for Categories Grid widget / jet woobuilder.
JetWooBuilder
Hi The Categories Grid widget seems to be missing an important option. In the Products Grid widget, when we set the Query by to **All**, there is an **Exclude by IDs** option to exclude the desired products: https://prnt.sc/6E3Bs3_U05Br But in the Categories Grid widget, when we set Query by to **All**, no option for Exclude by IDs is displayed: https://prnt.sc/Lx-9AT_k2KcN Please add this option; it's very important.
1.0
Exclude by IDs for Categories Grid widget / jet woobuilder. - Hi The Categories Grid widget seems to be missing an important option. In the Products Grid widget, when we set the Query by to **All**, there is an **Exclude by IDs** option to exclude the desired products: https://prnt.sc/6E3Bs3_U05Br But in the Categories Grid widget, when we set Query by to **All**, no option for Exclude by IDs is displayed: https://prnt.sc/Lx-9AT_k2KcN Please add this option; it's very important.
non_main
exclude by ids for categories grid widget jet woobuilder hi the categories grid widget seems to be missing an important option in the products grid widget when we set the query by to all there is an exclude by ids option to exclude the desired products but in the categories grid widget when we set query by to all no option for exclude by ids is displayed please add this option it s too important
0
751
4,351,333,863
IssuesEvent
2016-07-31 20:02:23
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
Docker hostname doesn't work with net: host
bug_report cloud docker waiting_on_maintainer
## Issue Type Bug Report ## Component Name _docker module ## Ansible Version ``` ansible --version ansible 2.0.0.2 config file = configured module search path = Default w/o overrides ``` ## Ansible Configuration No configuration changes ## Environment I'm running ansible inside this docker container: https://hub.docker.com/r/williamyeh/ansible/ My `Dockerfile`: ``` FROM williamyeh/ansible:debian8 RUN apt-get update && \ apt-get install -y ssh \ rsync \ python-httplib2 # Install Docker and Docker Compose Galaxy modules # TODO: Download roles with specific version # Github Issue: https://github.com/ansible/ansible/issues/13886 ENV ansible_docker_version=1.6.0 RUN ansible-galaxy install franklinkim.docker ENV ansible_docker_compose_version=1.2.1 RUN ansible-galaxy install franklinkim.docker-compose ENV ansible_node_version=2.0.2 RUN ansible-galaxy install geerlingguy.nodejs ENV ansible_ansistrano_deploy_version=1.3.0 ansible_ansistrano_rollback_version=1.2.0 RUN ansible-galaxy install carlosbuenosvinos.ansistrano-deploy carlosbuenosvinos.ansistrano-rollback ``` ## Summary When I try this: ``` - name: Start proxy container docker: name: proxy hostname: proxy image: user/proxy-nginx state: started restart_policy: always net: host volumes: - "/home/user/nginx/config:/etc/nginx:ro" ``` I get the following error: ``` FAILED! => {"changed": false, "failed": true, "msg": "Docker API Error: Conflicting options: -h and the network mode (--net)"} ``` When I remove the `hostname` property the issue is gone. ## Steps To Reproduce Build container: ``` docker build --no-cache=true --tag='you/ansible-provisioning:0.0.3' .
``` Start container (link key and playbook directory): ``` docker run --rm \ -it --name=ansible \ -v $project_dirirectory:/ansible:ro \ -v $ssh_key_path:/root/.ssh/id_rsa:ro \ -v $ssh_key_path.pub:/root/.ssh/id_rsa.pub:ro \ --workdir=/ansible you/ansible-provisioning:0.0.3 bash ``` Inside the container run: ``` ansible-playbook server-setup.yml -i hosts/hosts ``` ## Expected Results Run docker container with host name and configured to use the host network. ## Actual Results Error message: ``` FAILED! => {"changed": false, "failed": true, "msg": "Docker API Error: Conflicting options: -h and the network mode (--net)"} ``` Container is not started.
True
Docker hostname doesn't work with net: host - ## Issue Type Bug Report ## Component Name _docker module ## Ansible Version ``` ansible --version ansible 2.0.0.2 config file = configured module search path = Default w/o overrides ``` ## Ansible Configuration No configuration changes ## Environment I'm running ansible inside this docker container: https://hub.docker.com/r/williamyeh/ansible/ My `Dockerfile`: ``` FROM williamyeh/ansible:debian8 RUN apt-get update && \ apt-get install -y ssh \ rsync \ python-httplib2 # Install Docker and Docker Compose Galaxy modules # TODO: Download roles with specific version # Github Issue: https://github.com/ansible/ansible/issues/13886 ENV ansible_docker_version=1.6.0 RUN ansible-galaxy install franklinkim.docker ENV ansible_docker_compose_version=1.2.1 RUN ansible-galaxy install franklinkim.docker-compose ENV ansible_node_version=2.0.2 RUN ansible-galaxy install geerlingguy.nodejs ENV ansible_ansistrano_deploy_version=1.3.0 ansible_ansistrano_rollback_version=1.2.0 RUN ansible-galaxy install carlosbuenosvinos.ansistrano-deploy carlosbuenosvinos.ansistrano-rollback ``` ## Summary When I try this: ``` - name: Start proxy container docker: name: proxy hostname: proxy image: user/proxy-nginx state: started restart_policy: always net: host volumes: - "/home/user/nginx/config:/etc/nginx:ro" ``` I get the following error: ``` FAILED! => {"changed": false, "failed": true, "msg": "Docker API Error: Conflicting options: -h and the network mode (--net)"} ``` When I remove the `hostname` property the issue is gone. ## Steps To Reproduce Build container: ``` docker build --no-cache=true --tag='you/ansible-provisioning:0.0.3' .
``` Start container (link key and playbook directory): ``` docker run --rm \ -it --name=ansible \ -v $project_dirirectory:/ansible:ro \ -v $ssh_key_path:/root/.ssh/id_rsa:ro \ -v $ssh_key_path.pub:/root/.ssh/id_rsa.pub:ro \ --workdir=/ansible you/ansible-provisioning:0.0.3 bash ``` Inside the container run: ``` ansible-playbook server-setup.yml -i hosts/hosts ``` ## Expected Results Run docker container with host name and configured to use the host network. ## Actual Results Error message: ``` FAILED! => {"changed": false, "failed": true, "msg": "Docker API Error: Conflicting options: -h and the network mode (--net)"} ``` Container is not started.
main
docker hostname doesn t work with net host issue type bug report component name docker module ansible version ansible version ansible config file configured module search path default w o overrides ansible configuration no configuration changes environment i m running ansible inside this docker container my dockerfile from williamyeh ansible run apt get update apt get install y ssh rsync python install docker adn docker compose galaxy modules todo download roles with specific version github issue env ansible docker version run ansible galaxy install franklinkim docker env ansible docker compose version run ansible galaxy install franklinkim docker compose env ansible node version run ansible galaxy install geerlingguy nodejs env ansible ansistrano deploy version ansible ansistrano rollback version run ansible galaxy install carlosbuenosvinos ansistrano deploy carlosbuenosvinos ansistrano rollback summary when i try this name start proxy container docker name proxy hostname proxy image user proxy nginx state started restart policy always net host volumes home user nginx config etc nginx ro i get the following error failed changed false failed true msg docker api error conflicting options h and the network mode net when i remove the hostname property the issue is gone steps to reproduce build container docker build no cache true tag you ansible provisioning start container link key and playbook directory docker run rm it name ansible v project dirirectory ansible ro v ssh key path root ssh id rsa ro v ssh key path pub root ssh id rsa pub ro workdir ansible you ansible provisioning bash inside the container run ansible playbook server setup yml i hosts hosts expected results run docker container with host name and configured to use the host network actual results error message failed changed false failed true msg docker api error conflicting options h and the network mode net container is not started
1
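The error in this record comes from Docker rejecting a custom hostname together with host networking, since a container using the host network shares the host's hostname. A simplified sketch of that validation (not Docker's actual source):

```python
# Sketch of the conflict check behind the error above: a custom hostname
# cannot be combined with `net: host`, because the container inherits the
# host's hostname in that mode.
def check_container_options(options):
    """Raise ValueError when hostname is combined with net: host."""
    if options.get("net") == "host" and options.get("hostname"):
        raise ValueError(
            "Conflicting options: -h and the network mode (--net)"
        )
    return options
```

In the reporter's playbook, dropping the `hostname` key (as they found) satisfies this check, which is why the container then starts.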
5,759
30,529,316,185
IssuesEvent
2023-07-19 13:22:36
camunda/zeebe
https://api.github.com/repos/camunda/zeebe
closed
Switch custom AwsSignHttpRequestInterceptor in Opensearch exporter to library
kind/toil good first issue area/maintainability component/exporter
<!-- In case you have questions about our software we encourage everyone to participate in our community via the - Camunda Platform community forum https://forum.camunda.io/ or - Slack https://camunda-cloud.slack.com/ (For invite: https://camunda-slack-invite.herokuapp.com/) There you can exchange ideas with other Zeebe and Camunda Platform 8 users, as well as the product developers, and use the search to find answers to similar questions. This issue template is used by the Zeebe engineers to create general tasks. --> **Description** I recently came across this library: https://github.com/acm19/aws-request-signing-apache-interceptor This provides a request interceptor to take care of AWS request signing. Currently we are maintaining our own interceptor. It would be great to switch to this maintained library so we can stop maintaining our own.
True
Switch custom AwsSignHttpRequestInterceptor in Opensearch exporter to library - <!-- In case you have questions about our software we encourage everyone to participate in our community via the - Camunda Platform community forum https://forum.camunda.io/ or - Slack https://camunda-cloud.slack.com/ (For invite: https://camunda-slack-invite.herokuapp.com/) There you can exchange ideas with other Zeebe and Camunda Platform 8 users, as well as the product developers, and use the search to find answers to similar questions. This issue template is used by the Zeebe engineers to create general tasks. --> **Description** I recently came across this library: https://github.com/acm19/aws-request-signing-apache-interceptor This provides a request interceptor to take care of AWS request signing. Currently we are maintaining our own interceptor. It would be great to switch to this maintained library so we can stop maintaining our own.
main
switch custom awssignhttprequestinterceptor in opensearch exporter to library in case you have questions about our software we encourage everyone to participate in our community via the camunda platform community forum or slack for invite there you can exchange ideas with other zeebe and camunda platform users as well as the product developers and use the search to find answer to similar questions this issue template is used by the zeebe engineers to create general tasks description i recently came across this library this provides a request interceptor to take care aws request signing currently we are maintaining our own interceptor it would be great to switch it for this maintained library so we can forget about it ourselves
1
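For context on what such an interceptor does: it computes a signature over the outgoing request and attaches it as a header before the client sends it. A deliberately simplified HMAC sketch — not AWS SigV4, which additionally canonicalizes the request and derives scoped signing keys:

```python
# Simplified illustration of a request-signing interceptor (NOT AWS SigV4):
# sign the request body with a shared secret and attach the result as a
# header, leaving the original headers dict untouched.
import hashlib
import hmac

def sign_request(headers, body, secret):
    """Return a copy of headers with an HMAC-SHA256 signature attached."""
    digest = hmac.new(secret, body, hashlib.sha256).hexdigest()
    signed = dict(headers)
    signed["Authorization"] = "HMAC-SHA256 " + digest
    return signed
```

The appeal of the linked library is that this logic — plus the full SigV4 canonicalization — is maintained upstream instead of in the exporter itself.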
4,606
23,853,333,489
IssuesEvent
2022-09-06 20:11:51
mozilla/foundation.mozilla.org
https://api.github.com/repos/mozilla/foundation.mozilla.org
closed
Transition Sentry from hosted to cloud - foundation.mozilla.org
engineering Maintain
# Description We want to move the error tracking for the Foundation website from the [self-hosted Sentry](https://sentry.prod.mozaws.net) to the [Sentry cloud service](https://sentry.prod.mozaws.net). # Acceptance criteria - [ ] Backend errors are reported to the Sentry cloud version - [ ] Frontend errors are reported to the Sentry cloud version - [ ] Errors from production environment are reported to Sentry and labeled with the correct environment - [ ] Errors from staging environment are reported to Sentry and labeled with the correct environment - [ ] Releases are tagged in Sentry - [ ] Errors are reported using the latest version of the Sentry SDK for backend errors - [ ] Errors are reported using the latest version of the Sentry SDK for frontend errors - [ ] All devs on the project can view errors on Sentry - [ ] Devs are notified about new and frequent errors # Dev tasks - [ ] Setup a project on the Sentry cloud service - @jzinner will complete this. - [ ] Update the Sentry backend SDK - [ ] Update the Sentry frontend SDK - [ ] Configure the staging environment to report errors to the cloud hosted Sentry project (this needs to happen after deployment to staging) - [ ] Test that errors on staging are correctly reported - [ ] Configure the production environment to report errors to the cloud hosted Sentry project (this needs to happen after deployment to staging) - [ ] Test that errors on production are correctly reported # SaaS Sentry upgrade mana page [SaaS Sentry team put together this mana page](https://mana.mozilla.org/wiki/pages/viewpage.action?spaceKey=SRE&title=SaaS+Sentry) to guide teams on how to upgrade their Sentry
True
Transition Sentry from hosted to cloud - foundation.mozilla.org - # Description We want to move the error tracking for the Foundation website from the [self-hosted Sentry](https://sentry.prod.mozaws.net) to the [Sentry cloud service](https://sentry.prod.mozaws.net). # Acceptance criteria - [ ] Backend errors are reported to the Sentry cloud version - [ ] Frontend errors are reported to the Sentry cloud version - [ ] Errors from production environment are reported to Sentry and labeled with the correct environment - [ ] Errors from staging environment are reported to Sentry and labeled with the correct environment - [ ] Releases are tagged in Sentry - [ ] Errors are reported using the latest version of the Sentry SDK for backend errors - [ ] Errors are reported using the latest version of the Sentry SDK for frontend errors - [ ] All devs on the project can view errors on Sentry - [ ] Devs are notified about new and frequent errors # Dev tasks - [ ] Setup a project on the Sentry cloud service - @jzinner will complete this. - [ ] Update the Sentry backend SDK - [ ] Update the Sentry frontend SDK - [ ] Configure the staging environment to report errors to the cloud hosted Sentry project (this needs to happen after deployment to staging) - [ ] Test that errors on staging are correctly reported - [ ] Configure the production environment to report errors to the cloud hosted Sentry project (this needs to happen after deployment to staging) - [ ] Test that errors on production are correctly reported # SaaS Sentry upgrade mana page [SaaS Sentry team put together this mana page](https://mana.mozilla.org/wiki/pages/viewpage.action?spaceKey=SRE&title=SaaS+Sentry) to guide teams on how to upgrade their Sentry
main
transition sentry from hosted to cloud foundation mozilla org description we want to move the error tracking for the foundation webstie from the to the acceptance criteria backend errors are reported to the sentry cloud version frontend errors are reported to the sentry cloud version errors from production environment are reported to sentry and labeled with the correct environment errors from staging environment are reported to sentry and labeled with the correct environment releases are tagged in sentry error are reported using the latest version of the sentry sdk for backend errors error are reported using the latest version of the sentry sdk for frontend errors all devs on the project can view errors on sentry devs are notified about new and frequent errors dev tasks setup a project on the sentry cloud service jzinner will complete this update the sentry backend sdk update the sentry frontend sdk configure the staging environment to report errors to the cloud hosted sentry project this needs to happen after deployment to staging test that errors on staging are correctly reported configure the production environment to report errors to the cloud hosted sentry project this needs to happen after deployment to staging test that errors on production are correctly reported saas sentry upgrade mana page to guide teams on how to upgrade their sentry
1
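On the backend side, a migration like the one above largely amounts to pointing the SDK at the new project's DSN and labeling each environment and release. A hedged sketch — the helper name and DSN are placeholders, though `dsn`, `environment`, and `release` are real `sentry_sdk.init()` keyword arguments:

```python
# Hypothetical helper (names are illustrative, not the project's actual
# settings) showing the shape of the change: each environment reports to
# the cloud Sentry project with its own environment label and release tag.
def sentry_init_kwargs(dsn, environment, release):
    """Build keyword arguments for sentry_sdk.init()."""
    return {
        "dsn": dsn,                   # cloud project's DSN, per environment
        "environment": environment,   # e.g. "staging" or "production"
        "release": release,           # tagged so errors map to deploys
        "traces_sample_rate": 0.0,    # error tracking only, no tracing
    }
```

Setting `environment` and `release` explicitly is what satisfies the "labeled with the correct environment" and "releases are tagged" acceptance criteria.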
263,234
28,029,757,545
IssuesEvent
2023-03-28 11:35:49
RG4421/ampere-centos-kernel
https://api.github.com/repos/RG4421/ampere-centos-kernel
reopened
CVE-2021-0512 (High) detected in linuxv5.2
Mend: dependency security vulnerability
## CVE-2021-0512 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv5.2</b></p></summary> <p> <p>Linux kernel source tree</p> <p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p> <p>Found in base branch: <b>amp-centos-8.0-kernel</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In __hidinput_change_resolution_multipliers of hid-input.c, there is a possible out of bounds write due to a heap buffer overflow. This could lead to local escalation of privilege with no additional execution privileges needed. User interaction is not needed for exploitation.Product: AndroidVersions: Android kernelAndroid ID: A-173843328References: Upstream kernel <p>Publish Date: 2021-06-21 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-0512>CVE-2021-0512</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://source.android.com/security/bulletin/2021-06-01">https://source.android.com/security/bulletin/2021-06-01</a></p> <p>Release Date: 2021-06-21</p> <p>Fix Resolution: ASB-2021-02-05_mainline</p> </p> </details> <p></p>
True
CVE-2021-0512 (High) detected in linuxv5.2 - ## CVE-2021-0512 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv5.2</b></p></summary> <p> <p>Linux kernel source tree</p> <p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p> <p>Found in base branch: <b>amp-centos-8.0-kernel</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In __hidinput_change_resolution_multipliers of hid-input.c, there is a possible out of bounds write due to a heap buffer overflow. This could lead to local escalation of privilege with no additional execution privileges needed. User interaction is not needed for exploitation.Product: AndroidVersions: Android kernelAndroid ID: A-173843328References: Upstream kernel <p>Publish Date: 2021-06-21 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-0512>CVE-2021-0512</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://source.android.com/security/bulletin/2021-06-01">https://source.android.com/security/bulletin/2021-06-01</a></p> <p>Release Date: 2021-06-21</p> <p>Fix Resolution: ASB-2021-02-05_mainline</p> </p> </details> <p></p>
non_main
cve high detected in cve high severity vulnerability vulnerable library linux kernel source tree library home page a href found in base branch amp centos kernel vulnerable source files vulnerability details in hidinput change resolution multipliers of hid input c there is a possible out of bounds write due to a heap buffer overflow this could lead to local escalation of privilege with no additional execution privileges needed user interaction is not needed for exploitation product androidversions android kernelandroid id a upstream kernel publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution asb mainline
0
3,683
15,037,385,165
IssuesEvent
2021-02-02 16:16:56
IPVS-AS/MBP
https://api.github.com/repos/IPVS-AS/MBP
opened
Investigate strange logs on startup
maintainance
2021-02-02 16:00:38.979 WARN --- [ main] .m.c.i.MongoPersistentEntityIndexCreator : Automatic index creation will be disabled by default as of Spring Data MongoDB 3.x. Please use 'MongoMappingContext#setAutoIndexCreation(boolean)' or override 'MongoConfigurationSupport#autoIndexCreation()' to be explicit. However, we recommend setting up indices manually in an application ready block. You may use index derivation there as well. > ----------------------------------------------------------------------------------------- > @EventListener(ApplicationReadyEvent.class) > public void initIndicesAfterStartup() { > > IndexOperations indexOps = mongoTemplate.indexOps(DomainType.class); > > IndexResolver resolver = new MongoPersistentEntityIndexResolver(mongoMappingContext); > resolver.resolveIndexFor(DomainType.class).forEach(indexOps::ensureIndex); > } > -----------------------------------------------------------------------------------------
True
Investigate strange logs on startup - 2021-02-02 16:00:38.979 WARN --- [ main] .m.c.i.MongoPersistentEntityIndexCreator : Automatic index creation will be disabled by default as of Spring Data MongoDB 3.x. Please use 'MongoMappingContext#setAutoIndexCreation(boolean)' or override 'MongoConfigurationSupport#autoIndexCreation()' to be explicit. However, we recommend setting up indices manually in an application ready block. You may use index derivation there as well. > ----------------------------------------------------------------------------------------- > @EventListener(ApplicationReadyEvent.class) > public void initIndicesAfterStartup() { > > IndexOperations indexOps = mongoTemplate.indexOps(DomainType.class); > > IndexResolver resolver = new MongoPersistentEntityIndexResolver(mongoMappingContext); > resolver.resolveIndexFor(DomainType.class).forEach(indexOps::ensureIndex); > } > -----------------------------------------------------------------------------------------
main
investigate strange logs on startup warn m c i mongopersistententityindexcreator automatic index creation will be disabled by default as of spring data mongodb x please use mongomappingcontext setautoindexcreation boolean or override mongoconfigurationsupport autoindexcreation to be explicit however we recommend setting up indices manually in an application ready block you may use index derivation there as well eventlistener applicationreadyevent class public void initindicesafterstartup indexoperations indexops mongotemplate indexops domaintype class indexresolver resolver new mongopersistententityindexresolver mongomappingcontext resolver resolveindexfor domaintype class foreach indexops ensureindex
1
81,525
10,146,986,122
IssuesEvent
2019-08-05 09:29:05
n-air-app/n-air-app
https://api.github.com/repos/n-air-app/n-air-app
opened
Improvement request: the source show/hide UI is hard to understand
design work planner request
# Overview The only indication is a slash through the eye icon, which makes it hard to tell which sources are currently visible. # Screenshot ![image](https://user-images.githubusercontent.com/39181481/62454140-bb88ad80-b7ae-11e9-99f6-ddd49054e647.png) When a source is inactive it is even harder to recognize.
1.0
Improvement request: the source show/hide UI is hard to understand - # Overview The only indication is a slash through the eye icon, which makes it hard to tell which sources are currently visible. # Screenshot ![image](https://user-images.githubusercontent.com/39181481/62454140-bb88ad80-b7ae-11e9-99f6-ddd49054e647.png) When a source is inactive it is even harder to recognize.
non_main
improvement request the source show hide ui is hard to understand overview the only indication is a slash through the eye icon which makes it hard to tell which sources are currently visible screenshot when a source is inactive it is even harder to recognize
0
441,012
30,765,481,322
IssuesEvent
2023-07-30 08:38:30
chesslablab/chess-api
https://api.github.com/repos/chesslablab/chess-api
closed
Write documentation
documentation good first issue
The GET /api/annotations/games endpoint has recently been implemented and now is to be documented at [Chess API](https://chess-api.readthedocs.io/en/latest/). See: - https://github.com/chesslablab/chess-api/issues/51 Thus, a new entry in the [mkdocs.yml](https://github.com/chesslablab/chess-api/blob/main/mkdocs.yml) file needs to be added as well as its corresponding file in the [docs](https://github.com/chesslablab/chess-api/tree/main/docs) folder. Keep it up and happy learning!
1.0
Write documentation - The GET /api/annotations/games endpoint has recently been implemented and now is to be documented at [Chess API](https://chess-api.readthedocs.io/en/latest/). See: - https://github.com/chesslablab/chess-api/issues/51 Thus, a new entry in the [mkdocs.yml](https://github.com/chesslablab/chess-api/blob/main/mkdocs.yml) file needs to be added as well as its corresponding file in the [docs](https://github.com/chesslablab/chess-api/tree/main/docs) folder. Keep it up and happy learning!
non_main
write documentation the get api annotations games endpoint has recently been implemented and now is to be documented at see thus a new entry in the file needs to be added as well as its corresponding file in the folder keep it up and happy learning
0
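The record above describes the needed change concretely: a nav entry in mkdocs.yml plus a matching Markdown file under docs/. A hedged sketch of what the entry might look like; the section name and file name are assumptions, and the repository's actual mkdocs.yml layout may differ:

```yaml
# Hypothetical addition to mkdocs.yml; actual nav structure and names may differ.
nav:
  - Annotations:
      - Games: annotations-games.md   # documents GET /api/annotations/games
```

The referenced file (docs/annotations-games.md here) must exist, or `mkdocs build --strict` will fail.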
620,985
19,575,120,983
IssuesEvent
2022-01-04 14:38:56
CDCgov/prime-reportstream
https://api.github.com/repos/CDCgov/prime-reportstream
reopened
IL - OBX-23.1 Org Name truncate to 50 chars
onboarding-ops blocked receiver High Priority support
Illinois is having a problem with the length of the organization name in OBX 23.1. The maximum character sizing limit for this subfield is 50, and quite often more than 50 characters are being sent. ![image.png](https://images.zenhubusercontent.com/60d1eba7caf0775fd79203f8/19c24e5d-a744-461b-917a-7df14402fc25)
1.0
IL - OBX-23.1 Org Name truncate to 50 chars - Illinois is having a problem with the length of the organization name in OBX 23.1. The maximum character sizing limit for this subfield is 50, and quite often more than 50 characters are being sent. ![image.png](https://images.zenhubusercontent.com/60d1eba7caf0775fd79203f8/19c24e5d-a744-461b-917a-7df14402fc25)
non_main
il obx org name truncate to chars illinois is having a problem with the length of the organization name in obx the maximum character sizing limit for this subfield is and quite often more than characters are being sent
0
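The fix the record above asks for is mechanical: cap the OBX-23.1 organization name at the 50-character subfield limit. A minimal sketch, with a function name and constant of my own choosing rather than ReportStream's:

```python
OBX_23_1_MAX = 50  # maximum character limit for the OBX-23.1 subfield, per the record above

def truncate_org_name(name: str, limit: int = OBX_23_1_MAX) -> str:
    """Return the organization name capped at the subfield limit."""
    return name[:limit]
```

For example, a 120-character organization name comes back truncated to exactly 50 characters, while names already within the limit pass through unchanged.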
251,380
21,476,451,258
IssuesEvent
2022-04-26 14:02:26
ukaea/Indica
https://api.github.com/repos/ukaea/Indica
opened
Tests exceeding deadlines locally
bug testing
On my local machine certain tests seem to exceed the deadline and have variable run times. We should either increase the deadlines or work out the cause of the variability where appropriate. Examples and run-times include: `test_convert_coords_cache` 300ms `test_push_pop_agents` 831ms `test_convert_to_Rz` 830ms `test_assign_multiple_provenance` 952ms `test_get_sxr` 995ms `test_cache_read_write` 920ms
1.0
Tests exceeding deadlines locally - On my local machine certain tests seem to exceed the deadline and have variable run times. We should either increase the deadlines or work out the cause of the variability where appropriate. Examples and run-times include: `test_convert_coords_cache` 300ms `test_push_pop_agents` 831ms `test_convert_to_Rz` 830ms `test_assign_multiple_provenance` 952ms `test_get_sxr` 995ms `test_cache_read_write` 920ms
non_main
tests exceeding deadlines locally on my local machine certain tests seem to exceed the deadline and have variable run times we should either increase the deadlines or work out the cause of the variability where appropriate examples and run times include test convert coords cache test push pop agents test convert to rz test assign multiple provenance test get sxr test cache read write
0
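Before raising the deadlines wholesale, it helps to measure which of the tests listed above are genuinely variable. A small timing helper along these lines could do it; the names are hypothetical and this is not the project's actual test harness:

```python
import time

def exceeds_deadline(fn, deadline_ms: float) -> bool:
    """Run fn once and report whether it took longer than deadline_ms."""
    start = time.perf_counter()
    fn()
    elapsed_ms = (time.perf_counter() - start) * 1000
    return elapsed_ms > deadline_ms
```

Running each flaky test body through this repeatedly would show whether the run time clusters near the deadline (raise the deadline) or spikes intermittently (find the source of variability).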
5,149
26,248,994,200
IssuesEvent
2023-01-05 17:30:55
libp2p/js-libp2p-relay-server
https://api.github.com/repos/libp2p/js-libp2p-relay-server
closed
Docker Hub automated builds
kind/bug need/maintainer-input
There is still an issue with the docker hub automated builds. I triggered a new release to see if the build worked after the docker hub integration. However, I got an error: ``` Cloning into '.'... Warning: Permanently added the RSA host key for IP address '140.82.113.3' to the list of known hosts. Permission denied (publickey). fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists. please ensure the correct public key is added to the list of trusted keys for this repository (128) ``` It seems that the docker integration in github needs to be configured, but I cannot access the repo settings. @jacobheun can you have a look?
True
Docker Hub automated builds - There is still an issue with the docker hub automated builds. I triggered a new release to see if the build worked after the docker hub integration. However, I got an error: ``` Cloning into '.'... Warning: Permanently added the RSA host key for IP address '140.82.113.3' to the list of known hosts. Permission denied (publickey). fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists. please ensure the correct public key is added to the list of trusted keys for this repository (128) ``` It seems that the docker integration in github needs to be configured, but I cannot access the repo settings. @jacobheun can you have a look?
main
docker hub automated builds there is still an issue with the docker hub automated builds i triggered a new release to see if the build worked after the docker hub integration however i got an error cloning into warning permanently added the rsa host key for ip address to the list of known hosts permission denied publickey fatal could not read from remote repository please make sure you have the correct access rights and the repository exists please ensure the correct public key is added to the list of trusted keys for this repository it seems that the docker integration in github needs to be configured but i cannot access the repo settings jacobheun can you have a look
1
71,136
23,464,715,400
IssuesEvent
2022-08-16 15:44:53
vector-im/element-android
https://api.github.com/repos/vector-im/element-android
opened
New layout: bottom sheet isn't full
T-Defect Team: Delight
### Steps to reproduce 1. Click the spaces fab bottom right 2. Bottom sheet is sometimes very small, only showing "all chats" and 1 other space sometimes ### Outcome #### What did you expect? Can see a big bottom sheet with lots of spaces visible at the same time #### What happened instead? ![Screenshot_20220816-153342](https://user-images.githubusercontent.com/51663/184922042-0e3170ce-50a7-4dcc-908d-b2d86a5511dc.png) ### Your phone model _No response_ ### Operating system version _No response_ ### Application version and app store https://buildkite.com/matrix-dot-org/element-android/builds/10430 ### Homeserver _No response_ ### Will you send logs? No ### Are you willing to provide a PR? No
1.0
New layout: bottom sheet isn't full - ### Steps to reproduce 1. Click the spaces fab bottom right 2. Bottom sheet is sometimes very small, only showing "all chats" and 1 other space sometimes ### Outcome #### What did you expect? Can see a big bottom sheet with lots of spaces visible at the same time #### What happened instead? ![Screenshot_20220816-153342](https://user-images.githubusercontent.com/51663/184922042-0e3170ce-50a7-4dcc-908d-b2d86a5511dc.png) ### Your phone model _No response_ ### Operating system version _No response_ ### Application version and app store https://buildkite.com/matrix-dot-org/element-android/builds/10430 ### Homeserver _No response_ ### Will you send logs? No ### Are you willing to provide a PR? No
non_main
new layout bottom sheet isn t full steps to reproduce click the spaces fab bottom right bottom sheet is sometimes very small only showing all chats and other space sometimes outcome what did you expect can see a big bottom sheet with lots of spaces visible at the same time what happened instead your phone model no response operating system version no response application version and app store homeserver no response will you send logs no are you willing to provide a pr no
0
5,723
30,258,692,306
IssuesEvent
2023-07-07 06:19:09
bazelbuild/intellij
https://api.github.com/repos/bazelbuild/intellij
opened
IntelliJ Plugin Aspect tests are failing with Bazel@HEAD on CI
type: bug product: IntelliJ topic: bazel awaiting-maintainer
https://buildkite.com/bazel/bazel-at-head-plus-downstream/builds/3149#01892e1b-b74b-41a1-834a-6a032d8314fe Platform : Ubuntu Aspect tests 18.04, 20.04 Logs : ``` //aspect/testing/tests/src/com/google/idea/blaze/aspect/java/javabinary:JavaBinaryTest FAILED //aspect/testing/tests/src/com/google/idea/blaze/aspect/java/javatest:JavaTestTest FAILED ``` Steps : ``` git clone -v https://github.com/bazelbuild/intellij.git git reset 023a786e711d47d1ce873ed0ad53b4ff25089ad5 --hard export USE_BAZEL_VERSION=fcfefc15b17dd70ab249e3d8d09d1ccc5da7d347 bazel test --define=ij_product=intellij-latest --test_output=errors --notrim_test_configuration -- //aspect/testing/... ``` CC Green team @Wyverald
True
IntelliJ Plugin Aspect tests are failing with Bazel@HEAD on CI - https://buildkite.com/bazel/bazel-at-head-plus-downstream/builds/3149#01892e1b-b74b-41a1-834a-6a032d8314fe Platform : Ubuntu Aspect tests 18.04, 20.04 Logs : ``` //aspect/testing/tests/src/com/google/idea/blaze/aspect/java/javabinary:JavaBinaryTest FAILED //aspect/testing/tests/src/com/google/idea/blaze/aspect/java/javatest:JavaTestTest FAILED ``` Steps : ``` git clone -v https://github.com/bazelbuild/intellij.git git reset 023a786e711d47d1ce873ed0ad53b4ff25089ad5 --hard export USE_BAZEL_VERSION=fcfefc15b17dd70ab249e3d8d09d1ccc5da7d347 bazel test --define=ij_product=intellij-latest --test_output=errors --notrim_test_configuration -- //aspect/testing/... ``` CC Green team @Wyverald
main
intellij plugin aspect tests are failing with bazel head on ci platform ubuntu aspect tests logs aspect testing tests src com google idea blaze aspect java javabinary javabinarytest failed aspect testing tests src com google idea blaze aspect java javatest javatesttest failed steps git clone v git reset hard export use bazel version bazel test define ij product intellij latest test output errors notrim test configuration aspect testing cc green team wyverald
1
593
4,087,800,604
IssuesEvent
2016-06-01 11:31:54
Particular/NServiceBus
https://api.github.com/repos/Particular/NServiceBus
closed
SetMessageHeader Obsolete message is not very clear on what needs to be done.
Project: V6 Launch Size: S State: In Progress - Maintainer Prio Tag: Maintainer Prio Type: Refactoring
The current obsolete message is here: https://github.com/Particular/NServiceBus/blob/7df2cc7f289629072b1a13c8adead40efe0eb180/src/NServiceBus.Core/obsoletes.cs#L689 The upgrade guide section documents this behavior: http://docs.particular.net/nservicebus/upgrades/5to6#header-management-setting-headers-on-outgoing-messages Talks about adding header to all outgoing messages, but doesn't highlight how to set a specific header. Relates to: https://github.com/Particular/V6Launch/issues/35 <!-- https://github.com/Particular/PlatformDevelopment/issues/734 -->
True
SetMessageHeader Obsolete message is not very clear on what needs to be done. - The current obsolete message is here: https://github.com/Particular/NServiceBus/blob/7df2cc7f289629072b1a13c8adead40efe0eb180/src/NServiceBus.Core/obsoletes.cs#L689 The upgrade guide section documents this behavior: http://docs.particular.net/nservicebus/upgrades/5to6#header-management-setting-headers-on-outgoing-messages Talks about adding header to all outgoing messages, but doesn't highlight how to set a specific header. Relates to: https://github.com/Particular/V6Launch/issues/35 <!-- https://github.com/Particular/PlatformDevelopment/issues/734 -->
main
setmessageheader obsolete message is not very clear on what needs to be done the current obsolete message is here the upgrade guide section documents this behavior talks about adding header to all outgoing messages but doesn t highlight how to set a specific header relates to
1
662,377
22,136,624,945
IssuesEvent
2022-06-03 00:20:42
mlflow/mlflow
https://api.github.com/repos/mlflow/mlflow
closed
[BUG]: parameter passing integers when expecting floats to mLflow run throws an Exception
bug area/tracking priority/important-longterm
Thank you for submitting an issue. Please refer to our [issue policy](https://www.github.com/mlflow/mlflow/blob/master/ISSUE_POLICY.md) for additional information about bug reports. For help with debugging your code, please refer to [Stack Overflow](https://stackoverflow.com/questions/tagged/mlflow). **Please fill in this bug report template to ensure a timely and thorough response.** ### Willingness to contribute The MLflow Community encourages bug fix contributions. Would you or another member of your organization be willing to contribute a fix for this bug to the MLflow code base? - [ ] Yes. I can contribute a fix for this bug independently. - [X] Yes. I would be willing to contribute a fix for this bug with guidance from the MLflow community. - [ ] No. I cannot contribute a bug fix at this time. ### System information - **Have I written custom code (as opposed to using a stock example script provided in MLflow)**: - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: - **MLflow installed from (source or binary)**: - **MLflow version (run ``mlflow --version``)**: - **Python version**: - **npm version, if running the dev UI**: - **Exact command to reproduce**: ### Describe the problem Describe the problem clearly here. Include descriptions of the expected behavior and the actual behavior. The command: ``` mlflow run https://github.com/mlflow/mlflow-example.git -P alpha=5 ``` Throws an exception ### Code to reproduce issue Provide a reproducible test case that is the bare minimum necessary to generate the problem. ``` mlflow run https://github.com/mlflow/mlflow-example.git -P alpha=5 ``` ### Other info / logs Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. 
``` Traceback (most recent call last): File "train.py", line 60, in <module> mlflow.log_param("alpha", alpha) File "/opt/anaconda3/envs/mlflow-3eee9bd7a0713cf80a17bc0a4d659bc9c549efac/lib/python3.6/site-packages/mlflow/tracking/fluent.py", line 214, in log_param MlflowClient().log_param(run_id, key, value) File "/opt/anaconda3/envs/mlflow-3eee9bd7a0713cf80a17bc0a4d659bc9c549efac/lib/python3.6/site-packages/mlflow/tracking/client.py", line 206, in log_param self._tracking_client.log_param(run_id, key, value) File "/opt/anaconda3/envs/mlflow-3eee9bd7a0713cf80a17bc0a4d659bc9c549efac/lib/python3.6/site-packages/mlflow/tracking/_tracking_service/client.py", line 179, in log_param self.store.log_param(run_id, param) File "/opt/anaconda3/envs/mlflow-3eee9bd7a0713cf80a17bc0a4d659bc9c549efac/lib/python3.6/site-packages/mlflow/store/tracking/file_store.py", line 663, in log_param self._log_run_param(run_info, param) File "/opt/anaconda3/envs/mlflow-3eee9bd7a0713cf80a17bc0a4d659bc9c549efac/lib/python3.6/site-packages/mlflow/store/tracking/file_store.py", line 671, in _log_run_param run_id=run_info.run_id, new_value=writeable_param_value) File "/opt/anaconda3/envs/mlflow-3eee9bd7a0713cf80a17bc0a4d659bc9c549efac/lib/python3.6/site-packages/mlflow/store/tracking/file_store.py", line 690, in _validate_new_param_value databricks_pb2.INVALID_PARAMETER_VALUE) mlflow.exceptions.MlflowException: Changing param values is not allowed. Param with key='alpha' was already logged with value='5' for run ID='f829e9d818904139968a4b39baadab5a'. Attempted logging new value '5.0'. 2020/05/12 20:39:38 ERROR mlflow.cli: === Run (ID 'f829e9d818904139968a4b39baadab5a') failed === /4.1s ``` ### What component(s), interfaces, languages, and integrations does this bug affect? 
Components - [ ] `area/artifacts`: Artifact stores and artifact logging - [ ] `area/build`: Build and test infrastructure for MLflow - [ ] `area/docs`: MLflow documentation pages - [ ] `area/examples`: Example code - [ ] `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry - [ ] `area/models`: MLmodel format, model serialization/deserialization, flavors - [ ] `area/projects`: MLproject format, project running backends - [ ] `area/scoring`: Local serving, model deployment tools, spark UDFs - [X] `area/tracking`: Tracking Service, tracking client APIs, autologging Interface - [ ] `area/uiux`: Front-end, user experience, JavaScript, plotting - [ ] `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models - [ ] `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry - [ ] `area/windows`: Windows support Language - [ ] `language/r`: R APIs and clients - [ ] `language/java`: Java APIs and clients Integrations - [ ] `integrations/azure`: Azure and Azure ML integrations - [ ] `integrations/sagemaker`: SageMaker integrations
1.0
[BUG]: parameter passing integers when expecting floats to mLflow run throws an Exception - Thank you for submitting an issue. Please refer to our [issue policy](https://www.github.com/mlflow/mlflow/blob/master/ISSUE_POLICY.md) for additional information about bug reports. For help with debugging your code, please refer to [Stack Overflow](https://stackoverflow.com/questions/tagged/mlflow). **Please fill in this bug report template to ensure a timely and thorough response.** ### Willingness to contribute The MLflow Community encourages bug fix contributions. Would you or another member of your organization be willing to contribute a fix for this bug to the MLflow code base? - [ ] Yes. I can contribute a fix for this bug independently. - [X] Yes. I would be willing to contribute a fix for this bug with guidance from the MLflow community. - [ ] No. I cannot contribute a bug fix at this time. ### System information - **Have I written custom code (as opposed to using a stock example script provided in MLflow)**: - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: - **MLflow installed from (source or binary)**: - **MLflow version (run ``mlflow --version``)**: - **Python version**: - **npm version, if running the dev UI**: - **Exact command to reproduce**: ### Describe the problem Describe the problem clearly here. Include descriptions of the expected behavior and the actual behavior. The command: ``` mlflow run https://github.com/mlflow/mlflow-example.git -P alpha=5 ``` Throws an exception ### Code to reproduce issue Provide a reproducible test case that is the bare minimum necessary to generate the problem. ``` mlflow run https://github.com/mlflow/mlflow-example.git -P alpha=5 ``` ### Other info / logs Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. 
``` Traceback (most recent call last): File "train.py", line 60, in <module> mlflow.log_param("alpha", alpha) File "/opt/anaconda3/envs/mlflow-3eee9bd7a0713cf80a17bc0a4d659bc9c549efac/lib/python3.6/site-packages/mlflow/tracking/fluent.py", line 214, in log_param MlflowClient().log_param(run_id, key, value) File "/opt/anaconda3/envs/mlflow-3eee9bd7a0713cf80a17bc0a4d659bc9c549efac/lib/python3.6/site-packages/mlflow/tracking/client.py", line 206, in log_param self._tracking_client.log_param(run_id, key, value) File "/opt/anaconda3/envs/mlflow-3eee9bd7a0713cf80a17bc0a4d659bc9c549efac/lib/python3.6/site-packages/mlflow/tracking/_tracking_service/client.py", line 179, in log_param self.store.log_param(run_id, param) File "/opt/anaconda3/envs/mlflow-3eee9bd7a0713cf80a17bc0a4d659bc9c549efac/lib/python3.6/site-packages/mlflow/store/tracking/file_store.py", line 663, in log_param self._log_run_param(run_info, param) File "/opt/anaconda3/envs/mlflow-3eee9bd7a0713cf80a17bc0a4d659bc9c549efac/lib/python3.6/site-packages/mlflow/store/tracking/file_store.py", line 671, in _log_run_param run_id=run_info.run_id, new_value=writeable_param_value) File "/opt/anaconda3/envs/mlflow-3eee9bd7a0713cf80a17bc0a4d659bc9c549efac/lib/python3.6/site-packages/mlflow/store/tracking/file_store.py", line 690, in _validate_new_param_value databricks_pb2.INVALID_PARAMETER_VALUE) mlflow.exceptions.MlflowException: Changing param values is not allowed. Param with key='alpha' was already logged with value='5' for run ID='f829e9d818904139968a4b39baadab5a'. Attempted logging new value '5.0'. 2020/05/12 20:39:38 ERROR mlflow.cli: === Run (ID 'f829e9d818904139968a4b39baadab5a') failed === /4.1s ``` ### What component(s), interfaces, languages, and integrations does this bug affect? 
Components - [ ] `area/artifacts`: Artifact stores and artifact logging - [ ] `area/build`: Build and test infrastructure for MLflow - [ ] `area/docs`: MLflow documentation pages - [ ] `area/examples`: Example code - [ ] `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry - [ ] `area/models`: MLmodel format, model serialization/deserialization, flavors - [ ] `area/projects`: MLproject format, project running backends - [ ] `area/scoring`: Local serving, model deployment tools, spark UDFs - [X] `area/tracking`: Tracking Service, tracking client APIs, autologging Interface - [ ] `area/uiux`: Front-end, user experience, JavaScript, plotting - [ ] `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models - [ ] `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry - [ ] `area/windows`: Windows support Language - [ ] `language/r`: R APIs and clients - [ ] `language/java`: Java APIs and clients Integrations - [ ] `integrations/azure`: Azure and Azure ML integrations - [ ] `integrations/sagemaker`: SageMaker integrations
non_main
parameter passing integers when expecting floats to mlflow run throws an exception thank you for submitting an issue please refer to our for additional information about bug reports for help with debugging your code please refer to please fill in this bug report template to ensure a timely and thorough response willingness to contribute the mlflow community encourages bug fix contributions would you or another member of your organization be willing to contribute a fix for this bug to the mlflow code base yes i can contribute a fix for this bug independently yes i would be willing to contribute a fix for this bug with guidance from the mlflow community no i cannot contribute a bug fix at this time system information have i written custom code as opposed to using a stock example script provided in mlflow os platform and distribution e g linux ubuntu mlflow installed from source or binary mlflow version run mlflow version python version npm version if running the dev ui exact command to reproduce describe the problem describe the problem clearly here include descriptions of the expected behavior and the actual behavior the command mlflow run p alpha throws an exception code to reproduce issue provide a reproducible test case that is the bare minimum necessary to generate the problem mlflow run p alpha other info logs include any logs or source code that would be helpful to diagnose the problem if including tracebacks please include the full traceback large logs and files should be attached traceback most recent call last file train py line in mlflow log param alpha alpha file opt envs mlflow lib site packages mlflow tracking fluent py line in log param mlflowclient log param run id key value file opt envs mlflow lib site packages mlflow tracking client py line in log param self tracking client log param run id key value file opt envs mlflow lib site packages mlflow tracking tracking service client py line in log param self store log param run id param file opt envs 
mlflow lib site packages mlflow store tracking file store py line in log param self log run param run info param file opt envs mlflow lib site packages mlflow store tracking file store py line in log run param run id run info run id new value writeable param value file opt envs mlflow lib site packages mlflow store tracking file store py line in validate new param value databricks invalid parameter value mlflow exceptions mlflowexception changing param values is not allowed param with key alpha was already logged with value for run id attempted logging new value error mlflow cli run id failed what component s interfaces languages and integrations does this bug affect components area artifacts artifact stores and artifact logging area build build and test infrastructure for mlflow area docs mlflow documentation pages area examples example code area model registry model registry service apis and the fluent client calls for model registry area models mlmodel format model serialization deserialization flavors area projects mlproject format project running backends area scoring local serving model deployment tools spark udfs area tracking tracking service tracking client apis autologging interface area uiux front end user experience javascript plotting area docker docker use across mlflow s components such as mlflow projects and mlflow models area sqlalchemy use of sqlalchemy in the tracking service or model registry area windows windows support language language r r apis and clients language java java apis and clients integrations integrations azure azure and azure ml integrations integrations sagemaker sagemaker integrations
0
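The traceback in the record above comes down to a string comparison: `-P alpha=5` logs the param as the string '5', and when the training script later calls `mlflow.log_param("alpha", alpha)` with the value cast to float, it attempts to log '5.0'. A minimal sketch of the store-side check; the class and method names are illustrative, not MLflow's actual implementation:

```python
class ParamStore:
    """Toy version of the 'Changing param values is not allowed' check
    from the traceback in the record above."""

    def __init__(self):
        self._params = {}

    def log_param(self, run_id: str, key: str, value) -> None:
        value = str(value)  # params are persisted as strings
        old = self._params.get((run_id, key))
        if old is not None and old != value:
            raise ValueError(
                f"Changing param values is not allowed: {key}={old!r} "
                f"already logged, attempted {value!r}."
            )
        self._params[(run_id, key)] = value

store = ParamStore()
store.log_param("run1", "alpha", 5)      # the CLI logs the string '5'
# store.log_param("run1", "alpha", 5.0)  # would raise: '5' != '5.0'
```

Re-logging the identical value is allowed; only a differing string triggers the exception, which is why an integer CLI argument colliding with a float-typed script parameter reproduces the bug.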
262,007
27,840,667,496
IssuesEvent
2023-03-20 12:33:13
jaisree-subramanian/sca-dotnetcore-microservices
https://api.github.com/repos/jaisree-subramanian/sca-dotnetcore-microservices
opened
vue-resource-1.5.1.tgz: 3 vulnerabilities (highest severity is: 7.5)
Mend: dependency security vulnerability mend vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>vue-resource-1.5.1.tgz</b></p></summary> <p></p> <p>Path to dependency file: /Web/package.json</p> <p>Path to vulnerable library: /Web/node_modules/got/package.json</p> <p> <p>Found in HEAD commit: <a href="https://github.com/jaisree-subramanian/sca-dotnetcore-microservices/commit/f4bc90d7e1bcb188b059df3aafe4cd17f09a86d8">f4bc90d7e1bcb188b059df3aafe4cd17f09a86d8</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (vue-resource version) | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | ------------- | --- | | [CVE-2022-38900](https://www.mend.io/vulnerability-database/CVE-2022-38900) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | decode-uri-component-0.2.0.tgz | Transitive | 1.5.2 | &#9989; | | [CVE-2022-25881](https://www.mend.io/vulnerability-database/CVE-2022-25881) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | http-cache-semantics-3.8.1.tgz | Transitive | N/A* | &#10060; | | [CVE-2022-33987](https://www.mend.io/vulnerability-database/CVE-2022-33987) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.3 | got-8.3.2.tgz | Transitive | N/A* | &#10060; | <p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. 
Check the "Details" section below to see if there is a version of transitive dependency where vulnerability is fixed.</p> ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-38900</summary> ### Vulnerable Library - <b>decode-uri-component-0.2.0.tgz</b></p> <p>A better decodeURIComponent</p> <p>Library home page: <a href="https://registry.npmjs.org/decode-uri-component/-/decode-uri-component-0.2.0.tgz">https://registry.npmjs.org/decode-uri-component/-/decode-uri-component-0.2.0.tgz</a></p> <p>Path to dependency file: /Web/package.json</p> <p>Path to vulnerable library: /Web/node_modules/decode-uri-component/package.json</p> <p> Dependency Hierarchy: - vue-resource-1.5.1.tgz (Root Library) - got-8.3.2.tgz - cacheable-request-2.1.4.tgz - normalize-url-2.0.1.tgz - query-string-5.1.1.tgz - :x: **decode-uri-component-0.2.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/jaisree-subramanian/sca-dotnetcore-microservices/commit/f4bc90d7e1bcb188b059df3aafe4cd17f09a86d8">f4bc90d7e1bcb188b059df3aafe4cd17f09a86d8</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> decode-uri-component 0.2.0 is vulnerable to Improper Input Validation resulting in DoS. <p>Publish Date: 2022-11-28 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-38900>CVE-2022-38900</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-w573-4hg7-7wgq">https://github.com/advisories/GHSA-w573-4hg7-7wgq</a></p> <p>Release Date: 2022-11-28</p> <p>Fix Resolution (decode-uri-component): 0.2.1</p> <p>Direct dependency fix Resolution (vue-resource): 1.5.2</p> </p> <p></p> :rescue_worker_helmet: Automatic Remediation is available for this issue </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-25881</summary> ### Vulnerable Library - <b>http-cache-semantics-3.8.1.tgz</b></p> <p>Parses Cache-Control and other headers. Helps building correct HTTP caches and proxies</p> <p>Library home page: <a href="https://registry.npmjs.org/http-cache-semantics/-/http-cache-semantics-3.8.1.tgz">https://registry.npmjs.org/http-cache-semantics/-/http-cache-semantics-3.8.1.tgz</a></p> <p>Path to dependency file: /Web/package.json</p> <p>Path to vulnerable library: /Web/node_modules/http-cache-semantics/package.json</p> <p> Dependency Hierarchy: - vue-resource-1.5.1.tgz (Root Library) - got-8.3.2.tgz - cacheable-request-2.1.4.tgz - :x: **http-cache-semantics-3.8.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/jaisree-subramanian/sca-dotnetcore-microservices/commit/f4bc90d7e1bcb188b059df3aafe4cd17f09a86d8">f4bc90d7e1bcb188b059df3aafe4cd17f09a86d8</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> This affects versions of the package http-cache-semantics before 4.1.1. The issue can be exploited via malicious request header values sent to a server, when that server reads the cache policy from the request using this library. 
<p>Publish Date: 2023-01-31 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-25881>CVE-2022-25881</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-25881">https://www.cve.org/CVERecord?id=CVE-2022-25881</a></p> <p>Release Date: 2023-01-31</p> <p>Fix Resolution: http-cache-semantics - 4.1.1</p> </p> <p></p> </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-33987</summary> ### Vulnerable Library - <b>got-8.3.2.tgz</b></p> <p>Simplified HTTP requests</p> <p>Library home page: <a href="https://registry.npmjs.org/got/-/got-8.3.2.tgz">https://registry.npmjs.org/got/-/got-8.3.2.tgz</a></p> <p>Path to dependency file: /Web/package.json</p> <p>Path to vulnerable library: /Web/node_modules/got/package.json</p> <p> Dependency Hierarchy: - vue-resource-1.5.1.tgz (Root Library) - :x: **got-8.3.2.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/jaisree-subramanian/sca-dotnetcore-microservices/commit/f4bc90d7e1bcb188b059df3aafe4cd17f09a86d8">f4bc90d7e1bcb188b059df3aafe4cd17f09a86d8</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> The got package before 12.1.0 (also fixed in 11.8.5) for Node.js allows a redirect to a UNIX socket. 
<p>Publish Date: 2022-06-18 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-33987>CVE-2022-33987</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>5.3</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-33987">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-33987</a></p> <p>Release Date: 2022-06-18</p> <p>Fix Resolution: got - 11.8.5,12.1.0</p> </p> <p></p> </details> *** <p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
True
vue-resource-1.5.1.tgz: 3 vulnerabilities (highest severity is: 7.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>vue-resource-1.5.1.tgz</b></p></summary> <p></p> <p>Path to dependency file: /Web/package.json</p> <p>Path to vulnerable library: /Web/node_modules/got/package.json</p> <p> <p>Found in HEAD commit: <a href="https://github.com/jaisree-subramanian/sca-dotnetcore-microservices/commit/f4bc90d7e1bcb188b059df3aafe4cd17f09a86d8">f4bc90d7e1bcb188b059df3aafe4cd17f09a86d8</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (vue-resource version) | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | ------------- | --- | | [CVE-2022-38900](https://www.mend.io/vulnerability-database/CVE-2022-38900) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | decode-uri-component-0.2.0.tgz | Transitive | 1.5.2 | &#9989; | | [CVE-2022-25881](https://www.mend.io/vulnerability-database/CVE-2022-25881) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | http-cache-semantics-3.8.1.tgz | Transitive | N/A* | &#10060; | | [CVE-2022-33987](https://www.mend.io/vulnerability-database/CVE-2022-33987) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.3 | got-8.3.2.tgz | Transitive | N/A* | &#10060; | <p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. 
Check the "Details" section below to see if there is a version of transitive dependency where vulnerability is fixed.</p> ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-38900</summary> ### Vulnerable Library - <b>decode-uri-component-0.2.0.tgz</b></p> <p>A better decodeURIComponent</p> <p>Library home page: <a href="https://registry.npmjs.org/decode-uri-component/-/decode-uri-component-0.2.0.tgz">https://registry.npmjs.org/decode-uri-component/-/decode-uri-component-0.2.0.tgz</a></p> <p>Path to dependency file: /Web/package.json</p> <p>Path to vulnerable library: /Web/node_modules/decode-uri-component/package.json</p> <p> Dependency Hierarchy: - vue-resource-1.5.1.tgz (Root Library) - got-8.3.2.tgz - cacheable-request-2.1.4.tgz - normalize-url-2.0.1.tgz - query-string-5.1.1.tgz - :x: **decode-uri-component-0.2.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/jaisree-subramanian/sca-dotnetcore-microservices/commit/f4bc90d7e1bcb188b059df3aafe4cd17f09a86d8">f4bc90d7e1bcb188b059df3aafe4cd17f09a86d8</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> decode-uri-component 0.2.0 is vulnerable to Improper Input Validation resulting in DoS. <p>Publish Date: 2022-11-28 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-38900>CVE-2022-38900</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-w573-4hg7-7wgq">https://github.com/advisories/GHSA-w573-4hg7-7wgq</a></p> <p>Release Date: 2022-11-28</p> <p>Fix Resolution (decode-uri-component): 0.2.1</p> <p>Direct dependency fix Resolution (vue-resource): 1.5.2</p> </p> <p></p> :rescue_worker_helmet: Automatic Remediation is available for this issue </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-25881</summary> ### Vulnerable Library - <b>http-cache-semantics-3.8.1.tgz</b></p> <p>Parses Cache-Control and other headers. Helps building correct HTTP caches and proxies</p> <p>Library home page: <a href="https://registry.npmjs.org/http-cache-semantics/-/http-cache-semantics-3.8.1.tgz">https://registry.npmjs.org/http-cache-semantics/-/http-cache-semantics-3.8.1.tgz</a></p> <p>Path to dependency file: /Web/package.json</p> <p>Path to vulnerable library: /Web/node_modules/http-cache-semantics/package.json</p> <p> Dependency Hierarchy: - vue-resource-1.5.1.tgz (Root Library) - got-8.3.2.tgz - cacheable-request-2.1.4.tgz - :x: **http-cache-semantics-3.8.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/jaisree-subramanian/sca-dotnetcore-microservices/commit/f4bc90d7e1bcb188b059df3aafe4cd17f09a86d8">f4bc90d7e1bcb188b059df3aafe4cd17f09a86d8</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> This affects versions of the package http-cache-semantics before 4.1.1. The issue can be exploited via malicious request header values sent to a server, when that server reads the cache policy from the request using this library. 
<p>Publish Date: 2023-01-31 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-25881>CVE-2022-25881</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-25881">https://www.cve.org/CVERecord?id=CVE-2022-25881</a></p> <p>Release Date: 2023-01-31</p> <p>Fix Resolution: http-cache-semantics - 4.1.1</p> </p> <p></p> </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-33987</summary> ### Vulnerable Library - <b>got-8.3.2.tgz</b></p> <p>Simplified HTTP requests</p> <p>Library home page: <a href="https://registry.npmjs.org/got/-/got-8.3.2.tgz">https://registry.npmjs.org/got/-/got-8.3.2.tgz</a></p> <p>Path to dependency file: /Web/package.json</p> <p>Path to vulnerable library: /Web/node_modules/got/package.json</p> <p> Dependency Hierarchy: - vue-resource-1.5.1.tgz (Root Library) - :x: **got-8.3.2.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/jaisree-subramanian/sca-dotnetcore-microservices/commit/f4bc90d7e1bcb188b059df3aafe4cd17f09a86d8">f4bc90d7e1bcb188b059df3aafe4cd17f09a86d8</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> The got package before 12.1.0 (also fixed in 11.8.5) for Node.js allows a redirect to a UNIX socket. 
<p>Publish Date: 2022-06-18 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-33987>CVE-2022-33987</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>5.3</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-33987">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-33987</a></p> <p>Release Date: 2022-06-18</p> <p>Fix Resolution: got - 11.8.5,12.1.0</p> </p> <p></p> </details> *** <p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
non_main
vue resource tgz vulnerabilities highest severity is vulnerable library vue resource tgz path to dependency file web package json path to vulnerable library web node modules got package json found in head commit a href vulnerabilities cve severity cvss dependency type fixed in vue resource version remediation available high decode uri component tgz transitive high http cache semantics tgz transitive n a medium got tgz transitive n a for some transitive vulnerabilities there is no version of direct dependency with a fix check the details section below to see if there is a version of transitive dependency where vulnerability is fixed details cve vulnerable library decode uri component tgz a better decodeuricomponent library home page a href path to dependency file web package json path to vulnerable library web node modules decode uri component package json dependency hierarchy vue resource tgz root library got tgz cacheable request tgz normalize url tgz query string tgz x decode uri component tgz vulnerable library found in head commit a href found in base branch master vulnerability details decode uri component is vulnerable to improper input validation resulting in dos publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution decode uri component direct dependency fix resolution vue resource rescue worker helmet automatic remediation is available for this issue cve vulnerable library http cache semantics tgz parses cache control and other headers helps building correct http caches and proxies library home page a href path to dependency file web package json path to vulnerable library web node modules http cache semantics 
package json dependency hierarchy vue resource tgz root library got tgz cacheable request tgz x http cache semantics tgz vulnerable library found in head commit a href found in base branch master vulnerability details this affects versions of the package http cache semantics before the issue can be exploited via malicious request header values sent to a server when that server reads the cache policy from the request using this library publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution http cache semantics cve vulnerable library got tgz simplified http requests library home page a href path to dependency file web package json path to vulnerable library web node modules got package json dependency hierarchy vue resource tgz root library x got tgz vulnerable library found in head commit a href found in base branch master vulnerability details the got package before also fixed in for node js allows a redirect to a unix socket publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution got rescue worker helmet automatic remediation is available for this issue
0
629
4,146,947,304
IssuesEvent
2016-06-15 03:29:09
Microsoft/DirectXMesh
https://api.github.com/repos/Microsoft/DirectXMesh
opened
Remove VS 2012 adapter code
maintainence
As part of dropping VS 2012 projects, can clean up the following code: •Remove C4005 disable for stdint.h (workaround for bug with VS 2010 + Windows 7 SDK) •Remove C4481 disable for "override is an extension" (workaround for VS 2010 bug) •Remove DIRECTX_STD_CALLCONV std::function workaround for VS 2012 •Remove DIRECTX_CTOR_DEFAULT / DIRECTX_CTOR_DELETE macros and just use =default, =delete directly (VS 2013 or later supports this) •Remove DirectXMath 3.03 adapters for 3.06 constructs (workaround for Windows 8.0 SDK) •Make use of std::make_unique<> (C++14 draft feature supported in VS 2013) •Remove some guarded code patterns for Windows XP (i.e. functions that were added to Windows Vista) •Make consistent use of = {} to initialize memory to zero (C++11 brace init behavior fixed in VS 2013) •Remove legacy WCHAR Win32 type and use wchar_t
True
Remove VS 2012 adapter code - As part of dropping VS 2012 projects, can clean up the following code: •Remove C4005 disable for stdint.h (workaround for bug with VS 2010 + Windows 7 SDK) •Remove C4481 disable for "override is an extension" (workaround for VS 2010 bug) •Remove DIRECTX_STD_CALLCONV std::function workaround for VS 2012 •Remove DIRECTX_CTOR_DEFAULT / DIRECTX_CTOR_DELETE macros and just use =default, =delete directly (VS 2013 or later supports this) •Remove DirectXMath 3.03 adapters for 3.06 constructs (workaround for Windows 8.0 SDK) •Make use of std::make_unique<> (C++14 draft feature supported in VS 2013) •Remove some guarded code patterns for Windows XP (i.e. functions that were added to Windows Vista) •Make consistent use of = {} to initialize memory to zero (C++11 brace init behavior fixed in VS 2013) •Remove legacy WCHAR Win32 type and use wchar_t
main
remove vs adapter code as part of dropping vs projects can clean up the following code •remove disable for stdint h workaround for bug with vs windows sdk •remove disable for override is an extension workaround for vs bug •remove directx std callconv std function workaround for vs •remove directx ctor default directx ctor delete macros and just use default delete directly vs or later supports this •remove directxmath adapters for constructs workaround for windows sdk •make use of std make unique c draft feature supported in vs •remove some guarded code patterns for windows xp i e functions that were added to windows vista •make consistent use of to initialize memory to zero c brace init behavior fixed in vs •remove legacy wchar type and use wchar t
1
4,437
23,053,602,396
IssuesEvent
2022-07-25 00:33:07
danbugs/smithereens
https://api.github.com/repos/danbugs/smithereens
closed
improve README
management core maintainer issue
add clarification on: (1) what the binary can currently do, and (2) the fact that a website will exist in the future. aside from that, add a "getting started" section.
True
improve README - add clarification on: (1) what the binary can currently do, and (2) the fact that a website will exist in the future. aside from that, add a "getting started" section.
main
improve readme add clarification on what the binary can currently do and the fact that a website will exist in the future aside from that add a getting started section
1
3,366
13,039,551,152
IssuesEvent
2020-07-28 16:56:46
sambhav228/Stulysis
https://api.github.com/repos/sambhav228/Stulysis
closed
Finalize the workflow of the project
For Maintainers Urgent Attention
### We have to finalize the workflow/flowchart decided earlier. Report it at the earliest. Only after that we can proceed further. This is currently assigned to the project admin @sambhav228
True
Finalize the workflow of the project - ### We have to finalize the workflow/flowchart decided earlier. Report it at the earliest. Only after that we can proceed further. This is currently assigned to the project admin @sambhav228
main
finalize the workflow of the project we have to finalize the workflow flowchart decided earlier report it at the earliest only after that we can proceed further this is currently assigned to the project admin
1
643
4,158,716,126
IssuesEvent
2016-06-17 04:57:40
coniks-sys/coniks-ref-implementation
https://api.github.com/repos/coniks-sys/coniks-ref-implementation
closed
Change protos to use bytes instead of repeated ints
data format maintainability
Repeated ints in protos are compiled into ArrayLists of ints, so the current server and client demo implementations have utility functions to convert between byte[] and these ArrayLists. Instead ByteStrings should be used where the protos are used in the server and client.
True
Change protos to use bytes instead of repeated ints - Repeated ints in protos are compiled into ArrayLists of ints, so the current server and client demo implementations have utility functions to convert between byte[] and these ArrayLists. Instead ByteStrings should be used where the protos are used in the server and client.
main
change protos to use bytes instead of repeated ints repeated ints in protos are compiled into arraylists of ints so the current server and client demo implementations have utility functions to convert between byte and these arraylists instead bytestrings should be used where the protos are used in the server and client
1
88,208
10,566,599,556
IssuesEvent
2019-10-05 19:58:40
palantir/tslint
https://api.github.com/repos/palantir/tslint
closed
[Question] Import-blacklist by regex does not work as expected
Difficulty: Easy Domain: Documentation Status: Accepting PRs Type: Bug good first issue
### Question - __TSLint version__: 5.15.0 - __TypeScript version__: 3.4.3 - __Running TSLint via__: CLI / VSCode #### TypeScript code being linted ```ts import { SecondaryService } from "./src"; export class BasicService { public State: boolean; constructor(public s: SecondaryService) { } } ``` with `tslint.json` configuration: ``` { "defaultSeverity": "error", "extends": [ "tslint:recommended" ], "jsRules": {}, "rules": { "import-blacklist": [ true, "^.*src$", ] }, "rulesDirectory": [] } ``` #### Actual behavior Tslint does not emit any errors #### Expected behavior string `import { SecondaryService } from "./src";` should mark as wrong by import-blacklist regex rule #### Mini-repo https://github.com/alxpsr/tslint-regex #### Questions - What im doing wrong? - How can i disallow importing from `import-blacklist` regex patterns?
1.0
[Question] Import-blacklist by regex does not work as expected - ### Question - __TSLint version__: 5.15.0 - __TypeScript version__: 3.4.3 - __Running TSLint via__: CLI / VSCode #### TypeScript code being linted ```ts import { SecondaryService } from "./src"; export class BasicService { public State: boolean; constructor(public s: SecondaryService) { } } ``` with `tslint.json` configuration: ``` { "defaultSeverity": "error", "extends": [ "tslint:recommended" ], "jsRules": {}, "rules": { "import-blacklist": [ true, "^.*src$", ] }, "rulesDirectory": [] } ``` #### Actual behavior Tslint does not emit any errors #### Expected behavior string `import { SecondaryService } from "./src";` should mark as wrong by import-blacklist regex rule #### Mini-repo https://github.com/alxpsr/tslint-regex #### Questions - What im doing wrong? - How can i disallow importing from `import-blacklist` regex patterns?
non_main
import blacklist by regex does not work as expected question tslint version typescript version running tslint via cli vscode typescript code being linted ts import secondaryservice from src export class basicservice public state boolean constructor public s secondaryservice with tslint json configuration defaultseverity error extends tslint recommended jsrules rules import blacklist true src rulesdirectory actual behavior tslint does not emit any errors expected behavior string import secondaryservice from src should mark as wrong by import blacklist regex rule mini repo questions what im doing wrong how can i disallow importing from import blacklist regex patterns
0
4,196
20,592,004,241
IssuesEvent
2022-03-05 00:46:03
HPCL/code-analysis
https://api.github.com/repos/HPCL/code-analysis
closed
CWE-478 Missing Default Case in Switch Statement
CLAIMED ISO/IEC 5055:2021 SwitchStatement WEAKNESS CATEGORY: MAINTAINABILITY
**Reference ** [https://cwe.mitre.org/data/definitions/478](https://cwe.mitre.org/data/definitions/478) **Roles** - the *SwitchStatement* **Detection Patterns** - 8.2.135 ASCQM Use Default Case in Switch Statement
True
CWE-478 Missing Default Case in Switch Statement - **Reference ** [https://cwe.mitre.org/data/definitions/478](https://cwe.mitre.org/data/definitions/478) **Roles** - the *SwitchStatement* **Detection Patterns** - 8.2.135 ASCQM Use Default Case in Switch Statement
main
cwe missing default case in switch statement reference roles the switchstatement detection patterns ascqm use default case in switch statement
1
337,489
30,248,545,850
IssuesEvent
2023-07-06 18:29:20
NVIDIA/spark-rapids
https://api.github.com/repos/NVIDIA/spark-rapids
opened
[TEST] Compatibility tests for data formats
test task
Here is a list of tests to confirm data format compatibility with Apache Spark, in the Spark RAPIDS plugin. This list is a work in progress: - [ ] Test SPARK-10177](https://issues.apache.org/jira/browse/SPARK-10177): Timestamps read incorrectly. Refer to the test in `ParquetHiveCompatibility.scala#L97` - [ ] Test Parquet reads with `CONVERT_METASTORE_PARQUET_WITH_SCHEMA_MERGING=true` - [ ] Test Parquet reads for `LIST<STRUCT<int, string>>`: (Refer to https://github.com/rapidsai/cudf/issues/13664) - [ ] Test [SPARK-16344](https://issues.apache.org/jira/browse/SPARK-16344): `ARRAY<STRUCT<array_element: INT>>`. Refer to test in `ParquetHiveCompatibility.scala#139` - [ ] Test CPU fallback for user-defined types in ORC read/write: OrcQuerySuite.scala#L108 - [ ] Test ORC reads at scale with all null values: Like OrcQuerySuite.scala#L173, but with large number of rows. - [ ] Test [SPARK-16610](https://issues.apache.org/jira/browse/SPARK-16610): Honour `orc.compress` on writes, when `compress` is unset: OrcQuerySuite.scala#L189. - [ ] Test that `compress` is honoured when set (ZLIB, Snappy, None). OrcQuerySuite.scala#L224 - [ ] Test [SPARK-5309](https://issues.apache.org/jira/browse/SPARK-5309): ORC STRING column uses dictionary compression. OrcQuerySuite.scala#359 (Note: Not sure how this test is verifying dictionary_v2). - [ ] Test [SPARK-9170](https://issues.apache.org/jira/browse/SPARK-9170): Upper case ORC column names are not implicitly stored in lowercase. (OrcQuerySuite.scala:371)
1.0
[TEST] Compatibility tests for data formats - Here is a list of tests to confirm data format compatibility with Apache Spark, in the Spark RAPIDS plugin. This list is a work in progress: - [ ] Test SPARK-10177](https://issues.apache.org/jira/browse/SPARK-10177): Timestamps read incorrectly. Refer to the test in `ParquetHiveCompatibility.scala#L97` - [ ] Test Parquet reads with `CONVERT_METASTORE_PARQUET_WITH_SCHEMA_MERGING=true` - [ ] Test Parquet reads for `LIST<STRUCT<int, string>>`: (Refer to https://github.com/rapidsai/cudf/issues/13664) - [ ] Test [SPARK-16344](https://issues.apache.org/jira/browse/SPARK-16344): `ARRAY<STRUCT<array_element: INT>>`. Refer to test in `ParquetHiveCompatibility.scala#139` - [ ] Test CPU fallback for user-defined types in ORC read/write: OrcQuerySuite.scala#L108 - [ ] Test ORC reads at scale with all null values: Like OrcQuerySuite.scala#L173, but with large number of rows. - [ ] Test [SPARK-16610](https://issues.apache.org/jira/browse/SPARK-16610): Honour `orc.compress` on writes, when `compress` is unset: OrcQuerySuite.scala#L189. - [ ] Test that `compress` is honoured when set (ZLIB, Snappy, None). OrcQuerySuite.scala#L224 - [ ] Test [SPARK-5309](https://issues.apache.org/jira/browse/SPARK-5309): ORC STRING column uses dictionary compression. OrcQuerySuite.scala#359 (Note: Not sure how this test is verifying dictionary_v2). - [ ] Test [SPARK-9170](https://issues.apache.org/jira/browse/SPARK-9170): Upper case ORC column names are not implicitly stored in lowercase. (OrcQuerySuite.scala:371)
non_main
compatibility tests for data formats here is a list of tests to confirm data format compatibility with apache spark in the spark rapids plugin this list is a work in progress test spark timestamps read incorrectly refer to the test in parquethivecompatibility scala test parquet reads with convert metastore parquet with schema merging true test parquet reads for list refer to test array refer to test in parquethivecompatibility scala test cpu fallback for user defined types in orc read write orcquerysuite scala test orc reads at scale with all null values like orcquerysuite scala but with large number of rows test honour orc compress on writes when compress is unset orcquerysuite scala test that compress is honoured when set zlib snappy none orcquerysuite scala test orc string column uses dictionary compression orcquerysuite scala note not sure how this test is verifying dictionary test upper case orc column names are not implicitly stored in lowercase orcquerysuite scala
0
42,658
22,758,955,863
IssuesEvent
2022-07-07 19:11:59
scylladb/scylla
https://api.github.com/repos/scylladb/scylla
closed
Schema change statements are slow due to memtable flush latency
performance
_Installation details_ Scylla version (or git commit hash): any Executing DDL statements takes significantly more time on Scylla than on Cassandra. For instance, `drop keyspace` takes about a second on an idle S\* server. I traced that down to latency of the flush of schema tables. The `create keyspace` statement is noticeably faster than `drop keyspace` because it flushes much fewer tables. It looks like the latency comes mainly from a large number of `fdatasync` calls which we execute sequentially during schema tables flush (I counted 77 calls). When I disable them, `drop keyspace` time drops down to about 100ms. Maybe some of them could be avoided or parallelized. Here's a detailed trace during `drop keyspace`: ``` TRACE 2016-07-15 17:41:00,019 [shard 0] schema_tables - Taking the merge lock TRACE 2016-07-15 17:41:00,019 [shard 0] schema_tables - Took the merge lock TRACE 2016-07-15 17:41:00,019 [shard 0] schema_tables - Reading old schema TRACE 2016-07-15 17:41:00,019 [shard 0] schema_tables - Applying schema changes TRACE 2016-07-15 17:41:00,019 [shard 0] database - apply {system.schema_keyspaces key {key: pk{00077465737478797a}, token:9106523439940282999} data {mutation_partition: {tombstone: timestamp=1468597260019000, deletion_time=1468597260} () static {row: } clustered }} TRACE 2016-07-15 17:41:00,019 [shard 0] database - apply {system.schema_columnfamilies key {key: pk{00077465737478797a}, token:9106523439940282999} data {mutation_partition: {tombstone: timestamp=1468597260019000, deletion_time=1468597260} () static {row: } clustered }} TRACE 2016-07-15 17:41:00,020 [shard 0] database - apply {system.schema_columns key {key: pk{00077465737478797a}, token:9106523439940282999} data {mutation_partition: {tombstone: timestamp=1468597260019000, deletion_time=1468597260} () static {row: } clustered }} TRACE 2016-07-15 17:41:00,020 [shard 0] database - apply {system.schema_triggers key {key: pk{00077465737478797a}, token:9106523439940282999} data 
{mutation_partition: {tombstone: timestamp=1468597260019000, deletion_time=1468597260} () static {row: } clustered }} TRACE 2016-07-15 17:41:00,020 [shard 0] database - apply {system.schema_usertypes key {key: pk{00077465737478797a}, token:9106523439940282999} data {mutation_partition: {tombstone: timestamp=1468597260019000, deletion_time=1468597260} () static {row: } clustered }} TRACE 2016-07-15 17:41:00,020 [shard 0] database - apply {system.IndexInfo key {key: pk{00077465737478797a}, token:9106523439940282999} data {mutation_partition: {tombstone: timestamp=1468597260019000, deletion_time=1468597260} () static {row: } clustered }} TRACE 2016-07-15 17:41:00,020 [shard 0] schema_tables - Flushing {9f5c6374-d485-3229-9a0a-5094af9ad1e3, b0f22357-4458-3cdb-9631-c43e59ce3676, 0359bc71-7123-3ee1-9a4a-b9dfb11fc125, 296e9c04-9bec-3085-827d-c17d3df2122a, 3aa75225-4f82-350b-8d5c-430fa221fa0a, 45f5b360-24bc-3f83-a363-1034ea4fa697} DEBUG 2016-07-15 17:41:00,020 [shard 0] database - Sealing active memtable of IndexInfo.system, partitions: 1, occupancy: 0.14%, 376 / 262144 [B] DEBUG 2016-07-15 17:41:00,020 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Data.db DEBUG 2016-07-15 17:41:00,020 [shard 0] sstable - Writing TOC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-TOC.txt.tmp DEBUG 2016-07-15 17:41:00,020 [shard 0] database - Sealing active memtable of schema_keyspaces.system, partitions: 1, occupancy: 0.14%, 376 / 262144 [B] DEBUG 2016-07-15 17:41:00,020 [shard 0] database - Sealing active memtable of schema_triggers.system, partitions: 2, occupancy: 0.29%, 752 / 262144 [B] DEBUG 2016-07-15 17:41:00,020 [shard 0] database - Sealing active memtable of schema_columns.system, partitions: 2, occupancy: 31.28%, 81992 / 262144 [B] DEBUG 2016-07-15 17:41:00,020 [shard 0] database - Sealing active memtable of 
schema_usertypes.system, partitions: 2, occupancy: 0.29%, 752 / 262144 [B] DEBUG 2016-07-15 17:41:00,020 [shard 0] database - Sealing active memtable of schema_columnfamilies.system, partitions: 2, occupancy: 14.61%, 38312 / 262144 [B] DEBUG 2016-07-15 17:41:00,020 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Data.db DEBUG 2016-07-15 17:41:00,021 [shard 0] sstable - Writing TOC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-TOC.txt.tmp TRACE 2016-07-15 17:41:00,022 [shard 0] seastar - starting flush, id=149 TRACE 2016-07-15 17:41:00,022 seastar - running fdatasync() from 0 id=149 TRACE 2016-07-15 17:41:00,022 [shard 0] seastar - starting flush, id=150 TRACE 2016-07-15 17:41:00,066 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,066 seastar - running fdatasync() from 0 id=150 TRACE 2016-07-15 17:41:00,066 [shard 0] seastar - flush done, id=149 TRACE 2016-07-15 17:41:00,077 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,077 [shard 0] seastar - flush done, id=150 TRACE 2016-07-15 17:41:00,077 [shard 0] seastar - starting flush, id=151 TRACE 2016-07-15 17:41:00,077 seastar - running fdatasync() from 0 id=151 TRACE 2016-07-15 17:41:00,077 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,077 [shard 0] seastar - flush done, id=151 TRACE 2016-07-15 17:41:00,077 [shard 0] seastar - starting flush, id=152 TRACE 2016-07-15 17:41:00,077 seastar - running fdatasync() from 0 id=152 TRACE 2016-07-15 17:41:00,078 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,078 [shard 0] seastar - flush done, id=152 TRACE 2016-07-15 17:41:00,078 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Data.db: end of stream TRACE 2016-07-15 17:41:00,078 [shard 0] sstable - 
/home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Data.db: end of stream TRACE 2016-07-15 17:41:00,085 [shard 0] seastar - starting flush, id=153 TRACE 2016-07-15 17:41:00,085 seastar - running fdatasync() from 0 id=153 TRACE 2016-07-15 17:41:00,085 [shard 0] seastar - starting flush, id=154 TRACE 2016-07-15 17:41:00,113 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,113 seastar - running fdatasync() from 0 id=154 TRACE 2016-07-15 17:41:00,113 [shard 0] seastar - flush done, id=153 TRACE 2016-07-15 17:41:00,125 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,126 [shard 0] seastar - flush done, id=154 TRACE 2016-07-15 17:41:00,126 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Data.db: after consume_end_of_stream() TRACE 2016-07-15 17:41:00,126 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Data.db: after consume_end_of_stream() TRACE 2016-07-15 17:41:00,130 [shard 0] seastar - starting flush, id=155 TRACE 2016-07-15 17:41:00,130 seastar - running fdatasync() from 0 id=155 TRACE 2016-07-15 17:41:00,130 [shard 0] seastar - starting flush, id=156 TRACE 2016-07-15 17:41:00,142 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,142 seastar - running fdatasync() from 0 id=156 TRACE 2016-07-15 17:41:00,142 [shard 0] seastar - flush done, id=155 TRACE 2016-07-15 17:41:00,156 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,156 [shard 0] seastar - flush done, id=156 DEBUG 2016-07-15 17:41:00,156 [shard 0] sstable - Writing Digest file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Digest.sha1 DEBUG 2016-07-15 17:41:00,156 [shard 0] sstable - Writing Digest file 
/home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Digest.sha1 TRACE 2016-07-15 17:41:00,163 [shard 0] seastar - starting flush, id=157 TRACE 2016-07-15 17:41:00,163 seastar - running fdatasync() from 0 id=157 TRACE 2016-07-15 17:41:00,164 [shard 0] seastar - starting flush, id=158 TRACE 2016-07-15 17:41:00,192 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,192 seastar - running fdatasync() from 0 id=158 TRACE 2016-07-15 17:41:00,192 [shard 0] seastar - flush done, id=157 TRACE 2016-07-15 17:41:00,207 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,207 [shard 0] seastar - flush done, id=158 DEBUG 2016-07-15 17:41:00,207 [shard 0] sstable - Writing CRC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-CRC.db DEBUG 2016-07-15 17:41:00,207 [shard 0] sstable - Writing CRC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-CRC.db TRACE 2016-07-15 17:41:00,213 [shard 0] seastar - starting flush, id=159 TRACE 2016-07-15 17:41:00,213 seastar - running fdatasync() from 0 id=159 TRACE 2016-07-15 17:41:00,213 [shard 0] seastar - starting flush, id=160 TRACE 2016-07-15 17:41:00,239 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,239 seastar - running fdatasync() from 0 id=160 TRACE 2016-07-15 17:41:00,239 [shard 0] seastar - flush done, id=159 TRACE 2016-07-15 17:41:00,244 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,244 [shard 0] seastar - flush done, id=160 TRACE 2016-07-15 17:41:00,244 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Data.db: after finish_file_writer() DEBUG 2016-07-15 17:41:00,244 [shard 0] sstable - Writing Summary.db file 
/home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Summary.db TRACE 2016-07-15 17:41:00,244 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Data.db: after finish_file_writer() DEBUG 2016-07-15 17:41:00,244 [shard 0] sstable - Writing Summary.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Summary.db TRACE 2016-07-15 17:41:00,248 [shard 0] seastar - starting flush, id=161 TRACE 2016-07-15 17:41:00,248 seastar - running fdatasync() from 0 id=161 TRACE 2016-07-15 17:41:00,248 [shard 0] seastar - starting flush, id=162 TRACE 2016-07-15 17:41:00,273 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,273 seastar - running fdatasync() from 0 id=162 TRACE 2016-07-15 17:41:00,273 [shard 0] seastar - flush done, id=161 TRACE 2016-07-15 17:41:00,286 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,286 [shard 0] seastar - flush done, id=162 DEBUG 2016-07-15 17:41:00,286 [shard 0] sstable - Writing Filter.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Filter.db DEBUG 2016-07-15 17:41:00,286 [shard 0] sstable - Writing Filter.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Filter.db TRACE 2016-07-15 17:41:00,291 [shard 0] seastar - starting flush, id=163 TRACE 2016-07-15 17:41:00,291 seastar - running fdatasync() from 0 id=163 TRACE 2016-07-15 17:41:00,291 [shard 0] seastar - starting flush, id=164 TRACE 2016-07-15 17:41:00,317 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,317 seastar - running fdatasync() from 0 id=164 TRACE 2016-07-15 17:41:00,317 [shard 0] seastar - flush done, id=163 TRACE 2016-07-15 17:41:00,328 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,328 
[shard 0] seastar - flush done, id=164 DEBUG 2016-07-15 17:41:00,329 [shard 0] sstable - Writing Statistics.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Statistics.db DEBUG 2016-07-15 17:41:00,329 [shard 0] sstable - Writing Statistics.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Statistics.db TRACE 2016-07-15 17:41:00,334 [shard 0] seastar - starting flush, id=165 TRACE 2016-07-15 17:41:00,334 seastar - running fdatasync() from 0 id=165 TRACE 2016-07-15 17:41:00,334 [shard 0] seastar - starting flush, id=166 TRACE 2016-07-15 17:41:00,359 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,359 seastar - running fdatasync() from 0 id=166 TRACE 2016-07-15 17:41:00,359 [shard 0] seastar - flush done, id=165 TRACE 2016-07-15 17:41:00,367 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,367 [shard 0] seastar - flush done, id=166 TRACE 2016-07-15 17:41:00,367 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Data.db: sealing TRACE 2016-07-15 17:41:00,367 [shard 0] seastar - starting flush, id=167 TRACE 2016-07-15 17:41:00,367 seastar - running fdatasync() from 0 id=167 TRACE 2016-07-15 17:41:00,367 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,367 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Data.db: sealing TRACE 2016-07-15 17:41:00,367 [shard 0] seastar - flush done, id=167 TRACE 2016-07-15 17:41:00,367 [shard 0] seastar - starting flush, id=168 TRACE 2016-07-15 17:41:00,367 seastar - running fdatasync() from 0 id=168 TRACE 2016-07-15 17:41:00,367 [shard 0] seastar - starting flush, id=169 TRACE 2016-07-15 17:41:00,386 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,386 seastar - running fdatasync() from 0 
id=169 TRACE 2016-07-15 17:41:00,386 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,386 [shard 0] seastar - flush done, id=168 TRACE 2016-07-15 17:41:00,386 [shard 0] seastar - flush done, id=169 TRACE 2016-07-15 17:41:00,386 [shard 0] seastar - starting flush, id=170 TRACE 2016-07-15 17:41:00,386 seastar - running fdatasync() from 0 id=170 DEBUG 2016-07-15 17:41:00,386 [shard 0] sstable - SSTable with generation 182 of system.schema_keyspaces was sealed successfully. TRACE 2016-07-15 17:41:00,386 [shard 0] database - Written. Opening the sstable... TRACE 2016-07-15 17:41:00,395 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,395 [shard 0] seastar - flush done, id=170 DEBUG 2016-07-15 17:41:00,396 [shard 0] sstable - SSTable with generation 82 of system.IndexInfo was sealed successfully. TRACE 2016-07-15 17:41:00,396 [shard 0] database - Written. Opening the sstable... DEBUG 2016-07-15 17:41:00,396 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Data.db done DEBUG 2016-07-15 17:41:00,396 [shard 0] database - Memtable for /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Data.db replaced DEBUG 2016-07-15 17:41:00,396 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Data.db DEBUG 2016-07-15 17:41:00,396 [shard 0] sstable - Writing TOC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-TOC.txt.tmp DEBUG 2016-07-15 17:41:00,396 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Data.db done DEBUG 2016-07-15 17:41:00,397 [shard 0] database - Memtable for 
/home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Data.db replaced DEBUG 2016-07-15 17:41:00,397 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Data.db DEBUG 2016-07-15 17:41:00,397 [shard 0] sstable - Writing TOC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-TOC.txt.tmp TRACE 2016-07-15 17:41:00,397 [shard 0] seastar - starting flush, id=171 TRACE 2016-07-15 17:41:00,397 seastar - running fdatasync() from 0 id=171 TRACE 2016-07-15 17:41:00,398 [shard 0] seastar - starting flush, id=172 TRACE 2016-07-15 17:41:00,415 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,415 seastar - running fdatasync() from 0 id=172 TRACE 2016-07-15 17:41:00,415 [shard 0] seastar - flush done, id=171 TRACE 2016-07-15 17:41:00,424 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,424 [shard 0] seastar - flush done, id=172 TRACE 2016-07-15 17:41:00,424 [shard 0] seastar - starting flush, id=173 TRACE 2016-07-15 17:41:00,425 seastar - running fdatasync() from 0 id=173 TRACE 2016-07-15 17:41:00,425 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,425 [shard 0] seastar - flush done, id=173 TRACE 2016-07-15 17:41:00,425 [shard 0] seastar - starting flush, id=174 TRACE 2016-07-15 17:41:00,425 seastar - running fdatasync() from 0 id=174 TRACE 2016-07-15 17:41:00,425 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,425 [shard 0] seastar - flush done, id=174 TRACE 2016-07-15 17:41:00,425 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Data.db: end of stream TRACE 2016-07-15 17:41:00,426 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Data.db: 
end of stream TRACE 2016-07-15 17:41:00,431 [shard 0] seastar - starting flush, id=175 TRACE 2016-07-15 17:41:00,431 seastar - running fdatasync() from 0 id=175 TRACE 2016-07-15 17:41:00,431 [shard 0] seastar - starting flush, id=176 TRACE 2016-07-15 17:41:00,456 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,456 seastar - running fdatasync() from 0 id=176 TRACE 2016-07-15 17:41:00,456 [shard 0] seastar - flush done, id=175 TRACE 2016-07-15 17:41:00,464 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,464 [shard 0] seastar - flush done, id=176 TRACE 2016-07-15 17:41:00,464 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Data.db: after consume_end_of_stream() TRACE 2016-07-15 17:41:00,464 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Data.db: after consume_end_of_stream() TRACE 2016-07-15 17:41:00,471 [shard 0] seastar - starting flush, id=177 TRACE 2016-07-15 17:41:00,471 seastar - running fdatasync() from 0 id=177 TRACE 2016-07-15 17:41:00,471 [shard 0] seastar - starting flush, id=178 TRACE 2016-07-15 17:41:00,490 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,490 seastar - running fdatasync() from 0 id=178 TRACE 2016-07-15 17:41:00,490 [shard 0] seastar - flush done, id=177 TRACE 2016-07-15 17:41:00,497 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,497 [shard 0] seastar - flush done, id=178 DEBUG 2016-07-15 17:41:00,497 [shard 0] sstable - Writing Digest file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Digest.sha1 DEBUG 2016-07-15 17:41:00,498 [shard 0] sstable - Writing Digest file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Digest.sha1 TRACE 2016-07-15 17:41:00,500 [shard 0] seastar - 
starting flush, id=179 TRACE 2016-07-15 17:41:00,500 seastar - running fdatasync() from 0 id=179 TRACE 2016-07-15 17:41:00,500 [shard 0] seastar - starting flush, id=180 TRACE 2016-07-15 17:41:00,528 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,528 seastar - running fdatasync() from 0 id=180 TRACE 2016-07-15 17:41:00,528 [shard 0] seastar - flush done, id=179 TRACE 2016-07-15 17:41:00,540 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,540 [shard 0] seastar - flush done, id=180 DEBUG 2016-07-15 17:41:00,540 [shard 0] sstable - Writing CRC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-CRC.db DEBUG 2016-07-15 17:41:00,541 [shard 0] sstable - Writing CRC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-CRC.db TRACE 2016-07-15 17:41:00,547 [shard 0] seastar - starting flush, id=181 TRACE 2016-07-15 17:41:00,547 seastar - running fdatasync() from 0 id=181 TRACE 2016-07-15 17:41:00,547 [shard 0] seastar - starting flush, id=182 TRACE 2016-07-15 17:41:00,565 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,565 seastar - running fdatasync() from 0 id=182 TRACE 2016-07-15 17:41:00,565 [shard 0] seastar - flush done, id=181 TRACE 2016-07-15 17:41:00,575 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,575 [shard 0] seastar - flush done, id=182 TRACE 2016-07-15 17:41:00,575 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Data.db: after finish_file_writer() DEBUG 2016-07-15 17:41:00,575 [shard 0] sstable - Writing Summary.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Summary.db TRACE 2016-07-15 17:41:00,575 [shard 0] sstable - 
/home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Data.db: after finish_file_writer() DEBUG 2016-07-15 17:41:00,575 [shard 0] sstable - Writing Summary.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Summary.db TRACE 2016-07-15 17:41:00,581 [shard 0] seastar - starting flush, id=183 TRACE 2016-07-15 17:41:00,581 seastar - running fdatasync() from 0 id=183 TRACE 2016-07-15 17:41:00,581 [shard 0] seastar - starting flush, id=184 TRACE 2016-07-15 17:41:00,607 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,607 seastar - running fdatasync() from 0 id=184 TRACE 2016-07-15 17:41:00,607 [shard 0] seastar - flush done, id=183 TRACE 2016-07-15 17:41:00,616 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,616 [shard 0] seastar - flush done, id=184 DEBUG 2016-07-15 17:41:00,616 [shard 0] sstable - Writing Filter.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Filter.db DEBUG 2016-07-15 17:41:00,617 [shard 0] sstable - Writing Filter.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Filter.db TRACE 2016-07-15 17:41:00,624 [shard 0] seastar - starting flush, id=185 TRACE 2016-07-15 17:41:00,625 seastar - running fdatasync() from 0 id=185 TRACE 2016-07-15 17:41:00,625 [shard 0] seastar - starting flush, id=186 TRACE 2016-07-15 17:41:00,657 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,657 seastar - running fdatasync() from 0 id=186 TRACE 2016-07-15 17:41:00,657 [shard 0] seastar - flush done, id=185 TRACE 2016-07-15 17:41:00,673 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,673 [shard 0] seastar - flush done, id=186 DEBUG 2016-07-15 17:41:00,673 [shard 0] sstable - Writing Statistics.db file 
/home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Statistics.db DEBUG 2016-07-15 17:41:00,674 [shard 0] sstable - Writing Statistics.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Statistics.db TRACE 2016-07-15 17:41:00,680 [shard 0] seastar - starting flush, id=187 TRACE 2016-07-15 17:41:00,680 seastar - running fdatasync() from 0 id=187 TRACE 2016-07-15 17:41:00,680 [shard 0] seastar - starting flush, id=188 TRACE 2016-07-15 17:41:00,703 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,704 seastar - running fdatasync() from 0 id=188 TRACE 2016-07-15 17:41:00,704 [shard 0] seastar - flush done, id=187 TRACE 2016-07-15 17:41:00,712 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,712 [shard 0] seastar - flush done, id=188 TRACE 2016-07-15 17:41:00,713 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Data.db: sealing TRACE 2016-07-15 17:41:00,713 [shard 0] seastar - starting flush, id=189 TRACE 2016-07-15 17:41:00,713 seastar - running fdatasync() from 0 id=189 TRACE 2016-07-15 17:41:00,713 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,713 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Data.db: sealing TRACE 2016-07-15 17:41:00,713 [shard 0] seastar - flush done, id=189 TRACE 2016-07-15 17:41:00,713 [shard 0] seastar - starting flush, id=190 TRACE 2016-07-15 17:41:00,713 seastar - running fdatasync() from 0 id=190 TRACE 2016-07-15 17:41:00,713 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,713 [shard 0] seastar - flush done, id=190 TRACE 2016-07-15 17:41:00,713 [shard 0] seastar - starting flush, id=191 TRACE 2016-07-15 17:41:00,713 seastar - running fdatasync() from 0 id=191 TRACE 2016-07-15 
17:41:00,728 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,728 [shard 0] seastar - flush done, id=191 TRACE 2016-07-15 17:41:00,728 [shard 0] seastar - starting flush, id=192 TRACE 2016-07-15 17:41:00,728 seastar - running fdatasync() from 0 id=192 TRACE 2016-07-15 17:41:00,749 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,749 [shard 0] seastar - flush done, id=192 DEBUG 2016-07-15 17:41:00,749 [shard 0] sstable - SSTable with generation 84 of system.schema_triggers was sealed successfully. TRACE 2016-07-15 17:41:00,749 [shard 0] database - Written. Opening the sstable... DEBUG 2016-07-15 17:41:00,749 [shard 0] sstable - SSTable with generation 95 of system.schema_columns was sealed successfully. TRACE 2016-07-15 17:41:00,750 [shard 0] database - Written. Opening the sstable... DEBUG 2016-07-15 17:41:00,750 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Data.db done INFO 2016-07-15 17:41:00,750 [shard 0] compaction - Compacting [/home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-81-Data.db:level=0, /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_tr iggers-ka-82-Data.db:level=0, /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-83-Data.db:level=0, /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Data.db:level=0, ] DEBUG 2016-07-15 17:41:00,750 [shard 0] database - Memtable for /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Data.db replaced DEBUG 2016-07-15 17:41:00,750 [shard 0] database - Flushing to 
/home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Data.db DEBUG 2016-07-15 17:41:00,750 [shard 0] sstable - Writing TOC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-TOC.txt.tmp DEBUG 2016-07-15 17:41:00,751 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Data.db done DEBUG 2016-07-15 17:41:00,751 [shard 0] database - Memtable for /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Data.db replaced DEBUG 2016-07-15 17:41:00,751 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Data.db DEBUG 2016-07-15 17:41:00,751 [shard 0] sstable - Writing TOC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-TOC.txt.tmp TRACE 2016-07-15 17:41:00,753 [shard 0] seastar - starting flush, id=193 TRACE 2016-07-15 17:41:00,753 seastar - running fdatasync() from 0 id=193 TRACE 2016-07-15 17:41:00,753 [shard 0] seastar - starting flush, id=194 DEBUG 2016-07-15 17:41:00,753 [shard 0] sstable - Writing TOC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-TOC.txt.tmp TRACE 2016-07-15 17:41:00,772 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,772 seastar - running fdatasync() from 0 id=194 TRACE 2016-07-15 17:41:00,772 [shard 0] seastar - flush done, id=193 TRACE 2016-07-15 17:41:00,786 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,786 [shard 0] seastar - flush done, id=194 TRACE 2016-07-15 17:41:00,786 [shard 0] seastar - starting flush, id=195 TRACE 
2016-07-15 17:41:00,786 seastar - running fdatasync() from 0 id=195 TRACE 2016-07-15 17:41:00,786 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,787 [shard 0] seastar - flush done, id=195 TRACE 2016-07-15 17:41:00,787 [shard 0] seastar - starting flush, id=196 TRACE 2016-07-15 17:41:00,787 seastar - running fdatasync() from 0 id=196 TRACE 2016-07-15 17:41:00,787 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,787 [shard 0] seastar - flush done, id=196 TRACE 2016-07-15 17:41:00,787 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Data.db: end of stream TRACE 2016-07-15 17:41:00,788 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Data.db: end of stream TRACE 2016-07-15 17:41:00,806 [shard 0] seastar - starting flush, id=197 TRACE 2016-07-15 17:41:00,809 seastar - running fdatasync() from 0 id=197 TRACE 2016-07-15 17:41:00,809 [shard 0] seastar - starting flush, id=198 TRACE 2016-07-15 17:41:00,809 [shard 0] seastar - starting flush, id=199 TRACE 2016-07-15 17:41:00,867 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,867 seastar - running fdatasync() from 0 id=198 TRACE 2016-07-15 17:41:00,867 [shard 0] seastar - flush done, id=197 TRACE 2016-07-15 17:41:00,873 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,873 seastar - running fdatasync() from 0 id=199 TRACE 2016-07-15 17:41:00,873 [shard 0] seastar - flush done, id=198 TRACE 2016-07-15 17:41:00,884 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,885 [shard 0] seastar - flush done, id=199 TRACE 2016-07-15 17:41:00,885 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Data.db: after consume_end_of_stream() TRACE 2016-07-15 17:41:00,885 [shard 0] seastar - starting flush, id=200 TRACE 
2016-07-15 17:41:00,885 seastar - running fdatasync() from 0 id=200 TRACE 2016-07-15 17:41:00,885 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,885 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Data.db: after consume_end_of_stream() TRACE 2016-07-15 17:41:00,885 [shard 0] seastar - flush done, id=200 TRACE 2016-07-15 17:41:00,885 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-Data.db: end of stream TRACE 2016-07-15 17:41:00,887 [shard 0] seastar - starting flush, id=201 TRACE 2016-07-15 17:41:00,887 seastar - running fdatasync() from 0 id=201 TRACE 2016-07-15 17:41:00,887 [shard 0] seastar - starting flush, id=202 TRACE 2016-07-15 17:41:00,887 [shard 0] seastar - starting flush, id=203 TRACE 2016-07-15 17:41:00,920 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,920 seastar - running fdatasync() from 0 id=202 TRACE 2016-07-15 17:41:00,920 [shard 0] seastar - flush done, id=201 TRACE 2016-07-15 17:41:00,947 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,947 seastar - running fdatasync() from 0 id=203 TRACE 2016-07-15 17:41:00,956 [shard 0] seastar - flush done, id=202 TRACE 2016-07-15 17:41:00,963 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,963 [shard 0] seastar - flush done, id=203 DEBUG 2016-07-15 17:41:00,963 [shard 0] sstable - Writing Digest file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Digest.sha1 DEBUG 2016-07-15 17:41:00,964 [shard 0] sstable - Writing Digest file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Digest.sha1 TRACE 2016-07-15 17:41:00,964 [shard 0] sstable - 
/home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-Data.db: after consume_end_of_stream() TRACE 2016-07-15 17:41:00,968 [shard 0] seastar - starting flush, id=204 TRACE 2016-07-15 17:41:00,968 seastar - running fdatasync() from 0 id=204 TRACE 2016-07-15 17:41:00,968 [shard 0] seastar - starting flush, id=205 TRACE 2016-07-15 17:41:00,968 [shard 0] seastar - starting flush, id=206 TRACE 2016-07-15 17:41:00,986 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,986 seastar - running fdatasync() from 0 id=205 TRACE 2016-07-15 17:41:00,986 [shard 0] seastar - flush done, id=204 TRACE 2016-07-15 17:41:01,002 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,002 seastar - running fdatasync() from 0 id=206 TRACE 2016-07-15 17:41:01,006 [shard 0] seastar - flush done, id=205 TRACE 2016-07-15 17:41:01,007 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,007 [shard 0] seastar - flush done, id=206 DEBUG 2016-07-15 17:41:01,007 [shard 0] sstable - Writing CRC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-CRC.db DEBUG 2016-07-15 17:41:01,008 [shard 0] sstable - Writing Digest file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-Digest.sha1 DEBUG 2016-07-15 17:41:01,008 [shard 0] sstable - Writing CRC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-CRC.db TRACE 2016-07-15 17:41:01,009 [shard 0] seastar - starting flush, id=207 TRACE 2016-07-15 17:41:01,016 [shard 0] seastar - starting flush, id=208 TRACE 2016-07-15 17:41:01,022 seastar - running fdatasync() from 0 id=207 TRACE 2016-07-15 17:41:01,022 [shard 0] seastar - starting flush, id=209 TRACE 2016-07-15 17:41:01,047 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,047 seastar - running 
fdatasync() from 0 id=208 TRACE 2016-07-15 17:41:01,056 [shard 0] seastar - flush done, id=207 TRACE 2016-07-15 17:41:01,062 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,062 seastar - running fdatasync() from 0 id=209 TRACE 2016-07-15 17:41:01,062 [shard 0] seastar - flush done, id=208 TRACE 2016-07-15 17:41:01,076 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,076 [shard 0] seastar - flush done, id=209 DEBUG 2016-07-15 17:41:01,076 [shard 0] sstable - Writing CRC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-CRC.db TRACE 2016-07-15 17:41:01,076 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Data.db: after finish_file_writer() DEBUG 2016-07-15 17:41:01,076 [shard 0] sstable - Writing Summary.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Summary.db TRACE 2016-07-15 17:41:01,077 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Data.db: after finish_file_writer() DEBUG 2016-07-15 17:41:01,077 [shard 0] sstable - Writing Summary.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Summary.db TRACE 2016-07-15 17:41:01,086 [shard 0] seastar - starting flush, id=210 TRACE 2016-07-15 17:41:01,086 seastar - running fdatasync() from 0 id=210 TRACE 2016-07-15 17:41:01,086 [shard 0] seastar - starting flush, id=211 TRACE 2016-07-15 17:41:01,086 [shard 0] seastar - starting flush, id=212 TRACE 2016-07-15 17:41:01,118 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,118 seastar - running fdatasync() from 0 id=211 TRACE 2016-07-15 17:41:01,118 [shard 0] seastar - flush done, id=210 TRACE 2016-07-15 
17:41:01,125 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,125 seastar - running fdatasync() from 0 id=212 TRACE 2016-07-15 17:41:01,127 [shard 0] seastar - flush done, id=211 TRACE 2016-07-15 17:41:01,137 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,137 [shard 0] seastar - flush done, id=212 TRACE 2016-07-15 17:41:01,137 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-Data.db: after finish_file_writer() DEBUG 2016-07-15 17:41:01,137 [shard 0] sstable - Writing Summary.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-Summary.db DEBUG 2016-07-15 17:41:01,137 [shard 0] sstable - Writing Filter.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Filter.db DEBUG 2016-07-15 17:41:01,137 [shard 0] sstable - Writing Filter.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Filter.db TRACE 2016-07-15 17:41:01,143 [shard 0] seastar - starting flush, id=213 TRACE 2016-07-15 17:41:01,143 seastar - running fdatasync() from 0 id=213 TRACE 2016-07-15 17:41:01,143 [shard 0] seastar - starting flush, id=214 TRACE 2016-07-15 17:41:01,143 [shard 0] seastar - starting flush, id=215 TRACE 2016-07-15 17:41:01,175 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,175 seastar - running fdatasync() from 0 id=214 TRACE 2016-07-15 17:41:01,175 [shard 0] seastar - flush done, id=213 TRACE 2016-07-15 17:41:01,184 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,184 seastar - running fdatasync() from 0 id=215 TRACE 2016-07-15 17:41:01,188 [shard 0] seastar - flush done, id=214 TRACE 2016-07-15 17:41:01,194 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,194 [shard 0] seastar - flush done, id=215 DEBUG 2016-07-15 
17:41:01,194 [shard 0] sstable - Writing Filter.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-Filter.db DEBUG 2016-07-15 17:41:01,195 [shard 0] sstable - Writing Statistics.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Statistics.db DEBUG 2016-07-15 17:41:01,195 [shard 0] sstable - Writing Statistics.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Statistics.db TRACE 2016-07-15 17:41:01,201 [shard 0] seastar - starting flush, id=216 TRACE 2016-07-15 17:41:01,201 [shard 0] seastar - starting flush, id=217 TRACE 2016-07-15 17:41:01,201 seastar - running fdatasync() from 0 id=216 TRACE 2016-07-15 17:41:01,201 [shard 0] seastar - starting flush, id=218 TRACE 2016-07-15 17:41:01,235 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,235 seastar - running fdatasync() from 0 id=217 TRACE 2016-07-15 17:41:01,238 [shard 0] seastar - flush done, id=216 TRACE 2016-07-15 17:41:01,243 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,243 seastar - running fdatasync() from 0 id=218 TRACE 2016-07-15 17:41:01,243 [shard 0] seastar - flush done, id=217 TRACE 2016-07-15 17:41:01,255 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,255 [shard 0] seastar - flush done, id=218 DEBUG 2016-07-15 17:41:01,255 [shard 0] sstable - Writing Statistics.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-Statistics.db TRACE 2016-07-15 17:41:01,256 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Data.db: sealing TRACE 2016-07-15 17:41:01,256 [shard 0] seastar - starting flush, id=219 TRACE 2016-07-15 17:41:01,256 seastar - running fdatasync() 
from 0 id=219 TRACE 2016-07-15 17:41:01,256 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,256 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Data.db: sealing TRACE 2016-07-15 17:41:01,256 [shard 0] seastar - flush done, id=219 TRACE 2016-07-15 17:41:01,256 [shard 0] seastar - starting flush, id=220 TRACE 2016-07-15 17:41:01,256 seastar - running fdatasync() from 0 id=220 TRACE 2016-07-15 17:41:01,256 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,256 [shard 0] seastar - flush done, id=220 TRACE 2016-07-15 17:41:01,256 [shard 0] seastar - starting flush, id=221 TRACE 2016-07-15 17:41:01,256 seastar - running fdatasync() from 0 id=221 TRACE 2016-07-15 17:41:01,280 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,280 [shard 0] seastar - flush done, id=221 TRACE 2016-07-15 17:41:01,280 [shard 0] seastar - starting flush, id=222 TRACE 2016-07-15 17:41:01,281 seastar - running fdatasync() from 0 id=222 TRACE 2016-07-15 17:41:01,281 [shard 0] seastar - starting flush, id=223 TRACE 2016-07-15 17:41:01,293 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,294 seastar - running fdatasync() from 0 id=223 TRACE 2016-07-15 17:41:01,294 [shard 0] seastar - flush done, id=222 DEBUG 2016-07-15 17:41:01,294 [shard 0] sstable - SSTable with generation 84 of system.schema_usertypes was sealed successfully. TRACE 2016-07-15 17:41:01,294 [shard 0] database - Written. Opening the sstable... TRACE 2016-07-15 17:41:01,310 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,310 [shard 0] seastar - flush done, id=223 DEBUG 2016-07-15 17:41:01,310 [shard 0] sstable - SSTable with generation 95 of system.schema_columnfamilies was sealed successfully. TRACE 2016-07-15 17:41:01,310 [shard 0] database - Written. Opening the sstable... 
TRACE 2016-07-15 17:41:01,310 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-Data.db: sealing TRACE 2016-07-15 17:41:01,310 [shard 0] seastar - starting flush, id=224 TRACE 2016-07-15 17:41:01,310 seastar - running fdatasync() from 0 id=224 TRACE 2016-07-15 17:41:01,310 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,310 [shard 0] seastar - flush done, id=224 TRACE 2016-07-15 17:41:01,310 [shard 0] seastar - starting flush, id=225 TRACE 2016-07-15 17:41:01,310 seastar - running fdatasync() from 0 id=225 TRACE 2016-07-15 17:41:01,324 [shard 0] query_processor - execute_internal: "INSERT INTO system.peers (peer, schema_version) VALUES (?, ?)" (127.0.0.3, 67d1e0b4-d995-38fa-9e92-075d046a09fe) TRACE 2016-07-15 17:41:01,324 [shard 0] database - apply {system.peers key {key: pk{00047f000003}, token:-4598924402677416620} data {mutation_partition: {tombstone: none} () static {row: } clustered {rows_entry: ckp{} {deletable_row: {row_marker 1468597261324000 0 0} {tombstone: none} {row: {column: 6 01000537ae72144e e067d1e0b4d99538fa9e92075d046a09fe}}}}}} DEBUG 2016-07-15 17:41:01,325 [shard 0] migration_manager - Submitting migration task for 127.0.0.3 TRACE 2016-07-15 17:41:01,327 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,327 [shard 0] seastar - flush done, id=225 DEBUG 2016-07-15 17:41:01,327 [shard 0] sstable - SSTable with generation 85 of system.schema_triggers was sealed successfully. 
DEBUG 2016-07-15 17:41:01,327 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Data.db done INFO 2016-07-15 17:41:01,327 [shard 0] compaction - Compacting [/home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-81-Data.db:level=0, /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema _usertypes-ka-82-Data.db:level=0, /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-83-Data.db:level=0, /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Data.db:level= 0, ] DEBUG 2016-07-15 17:41:01,328 [shard 0] database - Memtable for /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Data.db replaced DEBUG 2016-07-15 17:41:01,328 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Data.db done DEBUG 2016-07-15 17:41:01,328 [shard 0] database - Memtable for /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Data.db replaced TRACE 2016-07-15 17:41:01,328 [shard 0] schema_tables - Reading new schema TRACE 2016-07-15 17:41:01,328 [shard 0] schema_tables - Merging keyspaces INFO 2016-07-15 17:41:01,328 [shard 0] schema_tables - Dropping keyspace testxyz TRACE 2016-07-15 17:41:01,328 [shard 0] schema_tables - Merging tables TRACE 2016-07-15 17:41:01,328 [shard 0] schema_tables - Merging types TRACE 2016-07-15 17:41:01,328 [shard 0] schema_tables - Dropping keyspaces TRACE 2016-07-15 17:41:01,329 [shard 0] schema_tables - Schema 
merged
```
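For the parallelization idea raised in the report below — the ~77 `fdatasync` calls are issued one after another, so their latencies add up — here is a minimal standalone sketch of overlapping independent syncs. This is purely illustrative (plain Python threads over temporary files), not Scylla/seastar code; in seastar the same effect would come from composing the flush futures concurrently instead of chaining them.

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

# fdatasync() is not available on every platform; fall back to fsync().
_sync = getattr(os, "fdatasync", os.fsync)

def make_dirty_files(n):
    """Create n temp files with unsynced data, returning (fd, path) pairs."""
    files = []
    for _ in range(n):
        fd, path = tempfile.mkstemp()
        os.write(fd, b"x" * 4096)
        files.append((fd, path))
    return files

def flush_sequential(files):
    # One sync at a time: total latency is roughly the *sum* of the calls,
    # which is what a long chain of sequential fdatasync()s adds up to.
    for fd, _ in files:
        _sync(fd)

def flush_parallel(files):
    # Independent files can be synced concurrently, so total latency tends
    # toward the *slowest* single call instead of the sum.
    with ThreadPoolExecutor(max_workers=len(files)) as pool:
        list(pool.map(lambda f: _sync(f[0]), files))

def cleanup(files):
    for fd, path in files:
        os.close(fd)
        os.unlink(path)
```

On real storage the parallel variant's wall-clock time approaches the slowest individual sync rather than the sum, which is why the report suggests that some of the calls could be avoided or parallelized.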
Schema change statements are slow due to memtable flush latency

_Installation details_
Scylla version (or git commit hash): any

Executing DDL statements takes significantly more time on Scylla than on Cassandra. For instance, `drop keyspace` takes about a second on an idle S\* server. I traced it down to the latency of flushing the schema tables. The `create keyspace` statement is noticeably faster than `drop keyspace` because it flushes far fewer tables.

It looks like the latency comes mainly from the large number of `fdatasync` calls which we execute sequentially during the schema tables flush (I counted 77 calls). When I disable them, `drop keyspace` time drops to about 100ms. Maybe some of them could be avoided or parallelized.

Here's a detailed trace during `drop keyspace`:

```
TRACE 2016-07-15 17:41:00,019 [shard 0] schema_tables - Taking the merge lock
TRACE 2016-07-15 17:41:00,019 [shard 0] schema_tables - Took the merge lock
TRACE 2016-07-15 17:41:00,019 [shard 0] schema_tables - Reading old schema
TRACE 2016-07-15 17:41:00,019 [shard 0] schema_tables - Applying schema changes
TRACE 2016-07-15 17:41:00,019 [shard 0] database - apply {system.schema_keyspaces key {key: pk{00077465737478797a}, token:9106523439940282999} data {mutation_partition: {tombstone: timestamp=1468597260019000, deletion_time=1468597260} () static {row: } clustered }}
TRACE 2016-07-15 17:41:00,019 [shard 0] database - apply {system.schema_columnfamilies key {key: pk{00077465737478797a}, token:9106523439940282999} data {mutation_partition: {tombstone: timestamp=1468597260019000, deletion_time=1468597260} () static {row: } clustered }}
TRACE 2016-07-15 17:41:00,020 [shard 0] database - apply {system.schema_columns key {key: pk{00077465737478797a}, token:9106523439940282999} data {mutation_partition: {tombstone: timestamp=1468597260019000, deletion_time=1468597260} () static {row: } clustered }}
TRACE 2016-07-15 17:41:00,020 [shard 0] database - apply {system.schema_triggers key {key:
pk{00077465737478797a}, token:9106523439940282999} data {mutation_partition: {tombstone: timestamp=1468597260019000, deletion_time=1468597260} () static {row: } clustered }} TRACE 2016-07-15 17:41:00,020 [shard 0] database - apply {system.schema_usertypes key {key: pk{00077465737478797a}, token:9106523439940282999} data {mutation_partition: {tombstone: timestamp=1468597260019000, deletion_time=1468597260} () static {row: } clustered }} TRACE 2016-07-15 17:41:00,020 [shard 0] database - apply {system.IndexInfo key {key: pk{00077465737478797a}, token:9106523439940282999} data {mutation_partition: {tombstone: timestamp=1468597260019000, deletion_time=1468597260} () static {row: } clustered }} TRACE 2016-07-15 17:41:00,020 [shard 0] schema_tables - Flushing {9f5c6374-d485-3229-9a0a-5094af9ad1e3, b0f22357-4458-3cdb-9631-c43e59ce3676, 0359bc71-7123-3ee1-9a4a-b9dfb11fc125, 296e9c04-9bec-3085-827d-c17d3df2122a, 3aa75225-4f82-350b-8d5c-430fa221fa0a, 45f5b360-24bc-3f83-a363-1034ea4fa697} DEBUG 2016-07-15 17:41:00,020 [shard 0] database - Sealing active memtable of IndexInfo.system, partitions: 1, occupancy: 0.14%, 376 / 262144 [B] DEBUG 2016-07-15 17:41:00,020 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Data.db DEBUG 2016-07-15 17:41:00,020 [shard 0] sstable - Writing TOC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-TOC.txt.tmp DEBUG 2016-07-15 17:41:00,020 [shard 0] database - Sealing active memtable of schema_keyspaces.system, partitions: 1, occupancy: 0.14%, 376 / 262144 [B] DEBUG 2016-07-15 17:41:00,020 [shard 0] database - Sealing active memtable of schema_triggers.system, partitions: 2, occupancy: 0.29%, 752 / 262144 [B] DEBUG 2016-07-15 17:41:00,020 [shard 0] database - Sealing active memtable of schema_columns.system, partitions: 2, occupancy: 31.28%, 81992 / 262144 [B] DEBUG 2016-07-15 
17:41:00,020 [shard 0] database - Sealing active memtable of schema_usertypes.system, partitions: 2, occupancy: 0.29%, 752 / 262144 [B] DEBUG 2016-07-15 17:41:00,020 [shard 0] database - Sealing active memtable of schema_columnfamilies.system, partitions: 2, occupancy: 14.61%, 38312 / 262144 [B] DEBUG 2016-07-15 17:41:00,020 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Data.db DEBUG 2016-07-15 17:41:00,021 [shard 0] sstable - Writing TOC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-TOC.txt.tmp TRACE 2016-07-15 17:41:00,022 [shard 0] seastar - starting flush, id=149 TRACE 2016-07-15 17:41:00,022 seastar - running fdatasync() from 0 id=149 TRACE 2016-07-15 17:41:00,022 [shard 0] seastar - starting flush, id=150 TRACE 2016-07-15 17:41:00,066 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,066 seastar - running fdatasync() from 0 id=150 TRACE 2016-07-15 17:41:00,066 [shard 0] seastar - flush done, id=149 TRACE 2016-07-15 17:41:00,077 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,077 [shard 0] seastar - flush done, id=150 TRACE 2016-07-15 17:41:00,077 [shard 0] seastar - starting flush, id=151 TRACE 2016-07-15 17:41:00,077 seastar - running fdatasync() from 0 id=151 TRACE 2016-07-15 17:41:00,077 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,077 [shard 0] seastar - flush done, id=151 TRACE 2016-07-15 17:41:00,077 [shard 0] seastar - starting flush, id=152 TRACE 2016-07-15 17:41:00,077 seastar - running fdatasync() from 0 id=152 TRACE 2016-07-15 17:41:00,078 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,078 [shard 0] seastar - flush done, id=152 TRACE 2016-07-15 17:41:00,078 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Data.db: end of stream 
TRACE 2016-07-15 17:41:00,078 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Data.db: end of stream TRACE 2016-07-15 17:41:00,085 [shard 0] seastar - starting flush, id=153 TRACE 2016-07-15 17:41:00,085 seastar - running fdatasync() from 0 id=153 TRACE 2016-07-15 17:41:00,085 [shard 0] seastar - starting flush, id=154 TRACE 2016-07-15 17:41:00,113 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,113 seastar - running fdatasync() from 0 id=154 TRACE 2016-07-15 17:41:00,113 [shard 0] seastar - flush done, id=153 TRACE 2016-07-15 17:41:00,125 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,126 [shard 0] seastar - flush done, id=154 TRACE 2016-07-15 17:41:00,126 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Data.db: after consume_end_of_stream() TRACE 2016-07-15 17:41:00,126 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Data.db: after consume_end_of_stream() TRACE 2016-07-15 17:41:00,130 [shard 0] seastar - starting flush, id=155 TRACE 2016-07-15 17:41:00,130 seastar - running fdatasync() from 0 id=155 TRACE 2016-07-15 17:41:00,130 [shard 0] seastar - starting flush, id=156 TRACE 2016-07-15 17:41:00,142 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,142 seastar - running fdatasync() from 0 id=156 TRACE 2016-07-15 17:41:00,142 [shard 0] seastar - flush done, id=155 TRACE 2016-07-15 17:41:00,156 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,156 [shard 0] seastar - flush done, id=156 DEBUG 2016-07-15 17:41:00,156 [shard 0] sstable - Writing Digest file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Digest.sha1 DEBUG 2016-07-15 17:41:00,156 [shard 0] sstable - Writing Digest file 
/home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Digest.sha1 TRACE 2016-07-15 17:41:00,163 [shard 0] seastar - starting flush, id=157 TRACE 2016-07-15 17:41:00,163 seastar - running fdatasync() from 0 id=157 TRACE 2016-07-15 17:41:00,164 [shard 0] seastar - starting flush, id=158 TRACE 2016-07-15 17:41:00,192 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,192 seastar - running fdatasync() from 0 id=158 TRACE 2016-07-15 17:41:00,192 [shard 0] seastar - flush done, id=157 TRACE 2016-07-15 17:41:00,207 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,207 [shard 0] seastar - flush done, id=158 DEBUG 2016-07-15 17:41:00,207 [shard 0] sstable - Writing CRC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-CRC.db DEBUG 2016-07-15 17:41:00,207 [shard 0] sstable - Writing CRC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-CRC.db TRACE 2016-07-15 17:41:00,213 [shard 0] seastar - starting flush, id=159 TRACE 2016-07-15 17:41:00,213 seastar - running fdatasync() from 0 id=159 TRACE 2016-07-15 17:41:00,213 [shard 0] seastar - starting flush, id=160 TRACE 2016-07-15 17:41:00,239 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,239 seastar - running fdatasync() from 0 id=160 TRACE 2016-07-15 17:41:00,239 [shard 0] seastar - flush done, id=159 TRACE 2016-07-15 17:41:00,244 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,244 [shard 0] seastar - flush done, id=160 TRACE 2016-07-15 17:41:00,244 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Data.db: after finish_file_writer() DEBUG 2016-07-15 17:41:00,244 [shard 0] sstable - Writing Summary.db file 
/home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Summary.db TRACE 2016-07-15 17:41:00,244 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Data.db: after finish_file_writer() DEBUG 2016-07-15 17:41:00,244 [shard 0] sstable - Writing Summary.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Summary.db TRACE 2016-07-15 17:41:00,248 [shard 0] seastar - starting flush, id=161 TRACE 2016-07-15 17:41:00,248 seastar - running fdatasync() from 0 id=161 TRACE 2016-07-15 17:41:00,248 [shard 0] seastar - starting flush, id=162 TRACE 2016-07-15 17:41:00,273 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,273 seastar - running fdatasync() from 0 id=162 TRACE 2016-07-15 17:41:00,273 [shard 0] seastar - flush done, id=161 TRACE 2016-07-15 17:41:00,286 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,286 [shard 0] seastar - flush done, id=162 DEBUG 2016-07-15 17:41:00,286 [shard 0] sstable - Writing Filter.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Filter.db DEBUG 2016-07-15 17:41:00,286 [shard 0] sstable - Writing Filter.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Filter.db TRACE 2016-07-15 17:41:00,291 [shard 0] seastar - starting flush, id=163 TRACE 2016-07-15 17:41:00,291 seastar - running fdatasync() from 0 id=163 TRACE 2016-07-15 17:41:00,291 [shard 0] seastar - starting flush, id=164 TRACE 2016-07-15 17:41:00,317 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,317 seastar - running fdatasync() from 0 id=164 TRACE 2016-07-15 17:41:00,317 [shard 0] seastar - flush done, id=163 TRACE 2016-07-15 17:41:00,328 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,328 
[shard 0] seastar - flush done, id=164 DEBUG 2016-07-15 17:41:00,329 [shard 0] sstable - Writing Statistics.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Statistics.db DEBUG 2016-07-15 17:41:00,329 [shard 0] sstable - Writing Statistics.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Statistics.db TRACE 2016-07-15 17:41:00,334 [shard 0] seastar - starting flush, id=165 TRACE 2016-07-15 17:41:00,334 seastar - running fdatasync() from 0 id=165 TRACE 2016-07-15 17:41:00,334 [shard 0] seastar - starting flush, id=166 TRACE 2016-07-15 17:41:00,359 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,359 seastar - running fdatasync() from 0 id=166 TRACE 2016-07-15 17:41:00,359 [shard 0] seastar - flush done, id=165 TRACE 2016-07-15 17:41:00,367 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,367 [shard 0] seastar - flush done, id=166 TRACE 2016-07-15 17:41:00,367 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Data.db: sealing TRACE 2016-07-15 17:41:00,367 [shard 0] seastar - starting flush, id=167 TRACE 2016-07-15 17:41:00,367 seastar - running fdatasync() from 0 id=167 TRACE 2016-07-15 17:41:00,367 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,367 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Data.db: sealing TRACE 2016-07-15 17:41:00,367 [shard 0] seastar - flush done, id=167 TRACE 2016-07-15 17:41:00,367 [shard 0] seastar - starting flush, id=168 TRACE 2016-07-15 17:41:00,367 seastar - running fdatasync() from 0 id=168 TRACE 2016-07-15 17:41:00,367 [shard 0] seastar - starting flush, id=169 TRACE 2016-07-15 17:41:00,386 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,386 seastar - running fdatasync() from 0 
id=169 TRACE 2016-07-15 17:41:00,386 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,386 [shard 0] seastar - flush done, id=168 TRACE 2016-07-15 17:41:00,386 [shard 0] seastar - flush done, id=169 TRACE 2016-07-15 17:41:00,386 [shard 0] seastar - starting flush, id=170 TRACE 2016-07-15 17:41:00,386 seastar - running fdatasync() from 0 id=170 DEBUG 2016-07-15 17:41:00,386 [shard 0] sstable - SSTable with generation 182 of system.schema_keyspaces was sealed successfully. TRACE 2016-07-15 17:41:00,386 [shard 0] database - Written. Opening the sstable... TRACE 2016-07-15 17:41:00,395 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,395 [shard 0] seastar - flush done, id=170 DEBUG 2016-07-15 17:41:00,396 [shard 0] sstable - SSTable with generation 82 of system.IndexInfo was sealed successfully. TRACE 2016-07-15 17:41:00,396 [shard 0] database - Written. Opening the sstable... DEBUG 2016-07-15 17:41:00,396 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Data.db done DEBUG 2016-07-15 17:41:00,396 [shard 0] database - Memtable for /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Data.db replaced DEBUG 2016-07-15 17:41:00,396 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Data.db DEBUG 2016-07-15 17:41:00,396 [shard 0] sstable - Writing TOC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-TOC.txt.tmp DEBUG 2016-07-15 17:41:00,396 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Data.db done DEBUG 2016-07-15 17:41:00,397 [shard 0] database - Memtable for 
/home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Data.db replaced DEBUG 2016-07-15 17:41:00,397 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Data.db DEBUG 2016-07-15 17:41:00,397 [shard 0] sstable - Writing TOC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-TOC.txt.tmp TRACE 2016-07-15 17:41:00,397 [shard 0] seastar - starting flush, id=171 TRACE 2016-07-15 17:41:00,397 seastar - running fdatasync() from 0 id=171 TRACE 2016-07-15 17:41:00,398 [shard 0] seastar - starting flush, id=172 TRACE 2016-07-15 17:41:00,415 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,415 seastar - running fdatasync() from 0 id=172 TRACE 2016-07-15 17:41:00,415 [shard 0] seastar - flush done, id=171 TRACE 2016-07-15 17:41:00,424 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,424 [shard 0] seastar - flush done, id=172 TRACE 2016-07-15 17:41:00,424 [shard 0] seastar - starting flush, id=173 TRACE 2016-07-15 17:41:00,425 seastar - running fdatasync() from 0 id=173 TRACE 2016-07-15 17:41:00,425 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,425 [shard 0] seastar - flush done, id=173 TRACE 2016-07-15 17:41:00,425 [shard 0] seastar - starting flush, id=174 TRACE 2016-07-15 17:41:00,425 seastar - running fdatasync() from 0 id=174 TRACE 2016-07-15 17:41:00,425 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,425 [shard 0] seastar - flush done, id=174 TRACE 2016-07-15 17:41:00,425 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Data.db: end of stream TRACE 2016-07-15 17:41:00,426 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Data.db: 
end of stream TRACE 2016-07-15 17:41:00,431 [shard 0] seastar - starting flush, id=175 TRACE 2016-07-15 17:41:00,431 seastar - running fdatasync() from 0 id=175 TRACE 2016-07-15 17:41:00,431 [shard 0] seastar - starting flush, id=176 TRACE 2016-07-15 17:41:00,456 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,456 seastar - running fdatasync() from 0 id=176 TRACE 2016-07-15 17:41:00,456 [shard 0] seastar - flush done, id=175 TRACE 2016-07-15 17:41:00,464 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,464 [shard 0] seastar - flush done, id=176 TRACE 2016-07-15 17:41:00,464 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Data.db: after consume_end_of_stream() TRACE 2016-07-15 17:41:00,464 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Data.db: after consume_end_of_stream() TRACE 2016-07-15 17:41:00,471 [shard 0] seastar - starting flush, id=177 TRACE 2016-07-15 17:41:00,471 seastar - running fdatasync() from 0 id=177 TRACE 2016-07-15 17:41:00,471 [shard 0] seastar - starting flush, id=178 TRACE 2016-07-15 17:41:00,490 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,490 seastar - running fdatasync() from 0 id=178 TRACE 2016-07-15 17:41:00,490 [shard 0] seastar - flush done, id=177 TRACE 2016-07-15 17:41:00,497 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,497 [shard 0] seastar - flush done, id=178 DEBUG 2016-07-15 17:41:00,497 [shard 0] sstable - Writing Digest file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Digest.sha1 DEBUG 2016-07-15 17:41:00,498 [shard 0] sstable - Writing Digest file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Digest.sha1 TRACE 2016-07-15 17:41:00,500 [shard 0] seastar - 
starting flush, id=179 TRACE 2016-07-15 17:41:00,500 seastar - running fdatasync() from 0 id=179 TRACE 2016-07-15 17:41:00,500 [shard 0] seastar - starting flush, id=180 TRACE 2016-07-15 17:41:00,528 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,528 seastar - running fdatasync() from 0 id=180 TRACE 2016-07-15 17:41:00,528 [shard 0] seastar - flush done, id=179 TRACE 2016-07-15 17:41:00,540 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,540 [shard 0] seastar - flush done, id=180 DEBUG 2016-07-15 17:41:00,540 [shard 0] sstable - Writing CRC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-CRC.db DEBUG 2016-07-15 17:41:00,541 [shard 0] sstable - Writing CRC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-CRC.db TRACE 2016-07-15 17:41:00,547 [shard 0] seastar - starting flush, id=181 TRACE 2016-07-15 17:41:00,547 seastar - running fdatasync() from 0 id=181 TRACE 2016-07-15 17:41:00,547 [shard 0] seastar - starting flush, id=182 TRACE 2016-07-15 17:41:00,565 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,565 seastar - running fdatasync() from 0 id=182 TRACE 2016-07-15 17:41:00,565 [shard 0] seastar - flush done, id=181 TRACE 2016-07-15 17:41:00,575 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,575 [shard 0] seastar - flush done, id=182 TRACE 2016-07-15 17:41:00,575 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Data.db: after finish_file_writer() DEBUG 2016-07-15 17:41:00,575 [shard 0] sstable - Writing Summary.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Summary.db TRACE 2016-07-15 17:41:00,575 [shard 0] sstable - 
/home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Data.db: after finish_file_writer() DEBUG 2016-07-15 17:41:00,575 [shard 0] sstable - Writing Summary.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Summary.db TRACE 2016-07-15 17:41:00,581 [shard 0] seastar - starting flush, id=183 TRACE 2016-07-15 17:41:00,581 seastar - running fdatasync() from 0 id=183 TRACE 2016-07-15 17:41:00,581 [shard 0] seastar - starting flush, id=184 TRACE 2016-07-15 17:41:00,607 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,607 seastar - running fdatasync() from 0 id=184 TRACE 2016-07-15 17:41:00,607 [shard 0] seastar - flush done, id=183 TRACE 2016-07-15 17:41:00,616 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,616 [shard 0] seastar - flush done, id=184 DEBUG 2016-07-15 17:41:00,616 [shard 0] sstable - Writing Filter.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Filter.db DEBUG 2016-07-15 17:41:00,617 [shard 0] sstable - Writing Filter.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Filter.db TRACE 2016-07-15 17:41:00,624 [shard 0] seastar - starting flush, id=185 TRACE 2016-07-15 17:41:00,625 seastar - running fdatasync() from 0 id=185 TRACE 2016-07-15 17:41:00,625 [shard 0] seastar - starting flush, id=186 TRACE 2016-07-15 17:41:00,657 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,657 seastar - running fdatasync() from 0 id=186 TRACE 2016-07-15 17:41:00,657 [shard 0] seastar - flush done, id=185 TRACE 2016-07-15 17:41:00,673 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,673 [shard 0] seastar - flush done, id=186 DEBUG 2016-07-15 17:41:00,673 [shard 0] sstable - Writing Statistics.db file 
/home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Statistics.db DEBUG 2016-07-15 17:41:00,674 [shard 0] sstable - Writing Statistics.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Statistics.db TRACE 2016-07-15 17:41:00,680 [shard 0] seastar - starting flush, id=187 TRACE 2016-07-15 17:41:00,680 seastar - running fdatasync() from 0 id=187 TRACE 2016-07-15 17:41:00,680 [shard 0] seastar - starting flush, id=188 TRACE 2016-07-15 17:41:00,703 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,704 seastar - running fdatasync() from 0 id=188 TRACE 2016-07-15 17:41:00,704 [shard 0] seastar - flush done, id=187 TRACE 2016-07-15 17:41:00,712 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,712 [shard 0] seastar - flush done, id=188 TRACE 2016-07-15 17:41:00,713 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Data.db: sealing TRACE 2016-07-15 17:41:00,713 [shard 0] seastar - starting flush, id=189 TRACE 2016-07-15 17:41:00,713 seastar - running fdatasync() from 0 id=189 TRACE 2016-07-15 17:41:00,713 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,713 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Data.db: sealing TRACE 2016-07-15 17:41:00,713 [shard 0] seastar - flush done, id=189 TRACE 2016-07-15 17:41:00,713 [shard 0] seastar - starting flush, id=190 TRACE 2016-07-15 17:41:00,713 seastar - running fdatasync() from 0 id=190 TRACE 2016-07-15 17:41:00,713 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,713 [shard 0] seastar - flush done, id=190 TRACE 2016-07-15 17:41:00,713 [shard 0] seastar - starting flush, id=191 TRACE 2016-07-15 17:41:00,713 seastar - running fdatasync() from 0 id=191 TRACE 2016-07-15 
17:41:00,728 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,728 [shard 0] seastar - flush done, id=191 TRACE 2016-07-15 17:41:00,728 [shard 0] seastar - starting flush, id=192 TRACE 2016-07-15 17:41:00,728 seastar - running fdatasync() from 0 id=192 TRACE 2016-07-15 17:41:00,749 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,749 [shard 0] seastar - flush done, id=192 DEBUG 2016-07-15 17:41:00,749 [shard 0] sstable - SSTable with generation 84 of system.schema_triggers was sealed successfully. TRACE 2016-07-15 17:41:00,749 [shard 0] database - Written. Opening the sstable... DEBUG 2016-07-15 17:41:00,749 [shard 0] sstable - SSTable with generation 95 of system.schema_columns was sealed successfully. TRACE 2016-07-15 17:41:00,750 [shard 0] database - Written. Opening the sstable... DEBUG 2016-07-15 17:41:00,750 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Data.db done INFO 2016-07-15 17:41:00,750 [shard 0] compaction - Compacting [/home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-81-Data.db:level=0, /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-82-Data.db:level=0, /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-83-Data.db:level=0, /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Data.db:level=0, ] DEBUG 2016-07-15 17:41:00,750 [shard 0] database - Memtable for /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Data.db replaced DEBUG 2016-07-15 17:41:00,750 [shard 0] database - Flushing to
/home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Data.db DEBUG 2016-07-15 17:41:00,750 [shard 0] sstable - Writing TOC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-TOC.txt.tmp DEBUG 2016-07-15 17:41:00,751 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Data.db done DEBUG 2016-07-15 17:41:00,751 [shard 0] database - Memtable for /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Data.db replaced DEBUG 2016-07-15 17:41:00,751 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Data.db DEBUG 2016-07-15 17:41:00,751 [shard 0] sstable - Writing TOC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-TOC.txt.tmp TRACE 2016-07-15 17:41:00,753 [shard 0] seastar - starting flush, id=193 TRACE 2016-07-15 17:41:00,753 seastar - running fdatasync() from 0 id=193 TRACE 2016-07-15 17:41:00,753 [shard 0] seastar - starting flush, id=194 DEBUG 2016-07-15 17:41:00,753 [shard 0] sstable - Writing TOC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-TOC.txt.tmp TRACE 2016-07-15 17:41:00,772 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,772 seastar - running fdatasync() from 0 id=194 TRACE 2016-07-15 17:41:00,772 [shard 0] seastar - flush done, id=193 TRACE 2016-07-15 17:41:00,786 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,786 [shard 0] seastar - flush done, id=194 TRACE 2016-07-15 17:41:00,786 [shard 0] seastar - starting flush, id=195 TRACE 
2016-07-15 17:41:00,786 seastar - running fdatasync() from 0 id=195 TRACE 2016-07-15 17:41:00,786 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,787 [shard 0] seastar - flush done, id=195 TRACE 2016-07-15 17:41:00,787 [shard 0] seastar - starting flush, id=196 TRACE 2016-07-15 17:41:00,787 seastar - running fdatasync() from 0 id=196 TRACE 2016-07-15 17:41:00,787 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,787 [shard 0] seastar - flush done, id=196 TRACE 2016-07-15 17:41:00,787 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Data.db: end of stream TRACE 2016-07-15 17:41:00,788 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Data.db: end of stream TRACE 2016-07-15 17:41:00,806 [shard 0] seastar - starting flush, id=197 TRACE 2016-07-15 17:41:00,809 seastar - running fdatasync() from 0 id=197 TRACE 2016-07-15 17:41:00,809 [shard 0] seastar - starting flush, id=198 TRACE 2016-07-15 17:41:00,809 [shard 0] seastar - starting flush, id=199 TRACE 2016-07-15 17:41:00,867 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,867 seastar - running fdatasync() from 0 id=198 TRACE 2016-07-15 17:41:00,867 [shard 0] seastar - flush done, id=197 TRACE 2016-07-15 17:41:00,873 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,873 seastar - running fdatasync() from 0 id=199 TRACE 2016-07-15 17:41:00,873 [shard 0] seastar - flush done, id=198 TRACE 2016-07-15 17:41:00,884 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,885 [shard 0] seastar - flush done, id=199 TRACE 2016-07-15 17:41:00,885 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Data.db: after consume_end_of_stream() TRACE 2016-07-15 17:41:00,885 [shard 0] seastar - starting flush, id=200 TRACE 
2016-07-15 17:41:00,885 seastar - running fdatasync() from 0 id=200 TRACE 2016-07-15 17:41:00,885 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,885 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Data.db: after consume_end_of_stream() TRACE 2016-07-15 17:41:00,885 [shard 0] seastar - flush done, id=200 TRACE 2016-07-15 17:41:00,885 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-Data.db: end of stream TRACE 2016-07-15 17:41:00,887 [shard 0] seastar - starting flush, id=201 TRACE 2016-07-15 17:41:00,887 seastar - running fdatasync() from 0 id=201 TRACE 2016-07-15 17:41:00,887 [shard 0] seastar - starting flush, id=202 TRACE 2016-07-15 17:41:00,887 [shard 0] seastar - starting flush, id=203 TRACE 2016-07-15 17:41:00,920 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,920 seastar - running fdatasync() from 0 id=202 TRACE 2016-07-15 17:41:00,920 [shard 0] seastar - flush done, id=201 TRACE 2016-07-15 17:41:00,947 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,947 seastar - running fdatasync() from 0 id=203 TRACE 2016-07-15 17:41:00,956 [shard 0] seastar - flush done, id=202 TRACE 2016-07-15 17:41:00,963 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,963 [shard 0] seastar - flush done, id=203 DEBUG 2016-07-15 17:41:00,963 [shard 0] sstable - Writing Digest file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Digest.sha1 DEBUG 2016-07-15 17:41:00,964 [shard 0] sstable - Writing Digest file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Digest.sha1 TRACE 2016-07-15 17:41:00,964 [shard 0] sstable - 
/home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-Data.db: after consume_end_of_stream() TRACE 2016-07-15 17:41:00,968 [shard 0] seastar - starting flush, id=204 TRACE 2016-07-15 17:41:00,968 seastar - running fdatasync() from 0 id=204 TRACE 2016-07-15 17:41:00,968 [shard 0] seastar - starting flush, id=205 TRACE 2016-07-15 17:41:00,968 [shard 0] seastar - starting flush, id=206 TRACE 2016-07-15 17:41:00,986 seastar - fdatasync() done TRACE 2016-07-15 17:41:00,986 seastar - running fdatasync() from 0 id=205 TRACE 2016-07-15 17:41:00,986 [shard 0] seastar - flush done, id=204 TRACE 2016-07-15 17:41:01,002 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,002 seastar - running fdatasync() from 0 id=206 TRACE 2016-07-15 17:41:01,006 [shard 0] seastar - flush done, id=205 TRACE 2016-07-15 17:41:01,007 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,007 [shard 0] seastar - flush done, id=206 DEBUG 2016-07-15 17:41:01,007 [shard 0] sstable - Writing CRC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-CRC.db DEBUG 2016-07-15 17:41:01,008 [shard 0] sstable - Writing Digest file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-Digest.sha1 DEBUG 2016-07-15 17:41:01,008 [shard 0] sstable - Writing CRC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-CRC.db TRACE 2016-07-15 17:41:01,009 [shard 0] seastar - starting flush, id=207 TRACE 2016-07-15 17:41:01,016 [shard 0] seastar - starting flush, id=208 TRACE 2016-07-15 17:41:01,022 seastar - running fdatasync() from 0 id=207 TRACE 2016-07-15 17:41:01,022 [shard 0] seastar - starting flush, id=209 TRACE 2016-07-15 17:41:01,047 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,047 seastar - running 
fdatasync() from 0 id=208 TRACE 2016-07-15 17:41:01,056 [shard 0] seastar - flush done, id=207 TRACE 2016-07-15 17:41:01,062 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,062 seastar - running fdatasync() from 0 id=209 TRACE 2016-07-15 17:41:01,062 [shard 0] seastar - flush done, id=208 TRACE 2016-07-15 17:41:01,076 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,076 [shard 0] seastar - flush done, id=209 DEBUG 2016-07-15 17:41:01,076 [shard 0] sstable - Writing CRC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-CRC.db TRACE 2016-07-15 17:41:01,076 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Data.db: after finish_file_writer() DEBUG 2016-07-15 17:41:01,076 [shard 0] sstable - Writing Summary.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Summary.db TRACE 2016-07-15 17:41:01,077 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Data.db: after finish_file_writer() DEBUG 2016-07-15 17:41:01,077 [shard 0] sstable - Writing Summary.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Summary.db TRACE 2016-07-15 17:41:01,086 [shard 0] seastar - starting flush, id=210 TRACE 2016-07-15 17:41:01,086 seastar - running fdatasync() from 0 id=210 TRACE 2016-07-15 17:41:01,086 [shard 0] seastar - starting flush, id=211 TRACE 2016-07-15 17:41:01,086 [shard 0] seastar - starting flush, id=212 TRACE 2016-07-15 17:41:01,118 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,118 seastar - running fdatasync() from 0 id=211 TRACE 2016-07-15 17:41:01,118 [shard 0] seastar - flush done, id=210 TRACE 2016-07-15 
17:41:01,125 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,125 seastar - running fdatasync() from 0 id=212 TRACE 2016-07-15 17:41:01,127 [shard 0] seastar - flush done, id=211 TRACE 2016-07-15 17:41:01,137 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,137 [shard 0] seastar - flush done, id=212 TRACE 2016-07-15 17:41:01,137 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-Data.db: after finish_file_writer() DEBUG 2016-07-15 17:41:01,137 [shard 0] sstable - Writing Summary.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-Summary.db DEBUG 2016-07-15 17:41:01,137 [shard 0] sstable - Writing Filter.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Filter.db DEBUG 2016-07-15 17:41:01,137 [shard 0] sstable - Writing Filter.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Filter.db TRACE 2016-07-15 17:41:01,143 [shard 0] seastar - starting flush, id=213 TRACE 2016-07-15 17:41:01,143 seastar - running fdatasync() from 0 id=213 TRACE 2016-07-15 17:41:01,143 [shard 0] seastar - starting flush, id=214 TRACE 2016-07-15 17:41:01,143 [shard 0] seastar - starting flush, id=215 TRACE 2016-07-15 17:41:01,175 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,175 seastar - running fdatasync() from 0 id=214 TRACE 2016-07-15 17:41:01,175 [shard 0] seastar - flush done, id=213 TRACE 2016-07-15 17:41:01,184 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,184 seastar - running fdatasync() from 0 id=215 TRACE 2016-07-15 17:41:01,188 [shard 0] seastar - flush done, id=214 TRACE 2016-07-15 17:41:01,194 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,194 [shard 0] seastar - flush done, id=215 DEBUG 2016-07-15 
17:41:01,194 [shard 0] sstable - Writing Filter.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-Filter.db DEBUG 2016-07-15 17:41:01,195 [shard 0] sstable - Writing Statistics.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Statistics.db DEBUG 2016-07-15 17:41:01,195 [shard 0] sstable - Writing Statistics.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Statistics.db TRACE 2016-07-15 17:41:01,201 [shard 0] seastar - starting flush, id=216 TRACE 2016-07-15 17:41:01,201 [shard 0] seastar - starting flush, id=217 TRACE 2016-07-15 17:41:01,201 seastar - running fdatasync() from 0 id=216 TRACE 2016-07-15 17:41:01,201 [shard 0] seastar - starting flush, id=218 TRACE 2016-07-15 17:41:01,235 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,235 seastar - running fdatasync() from 0 id=217 TRACE 2016-07-15 17:41:01,238 [shard 0] seastar - flush done, id=216 TRACE 2016-07-15 17:41:01,243 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,243 seastar - running fdatasync() from 0 id=218 TRACE 2016-07-15 17:41:01,243 [shard 0] seastar - flush done, id=217 TRACE 2016-07-15 17:41:01,255 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,255 [shard 0] seastar - flush done, id=218 DEBUG 2016-07-15 17:41:01,255 [shard 0] sstable - Writing Statistics.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-Statistics.db TRACE 2016-07-15 17:41:01,256 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Data.db: sealing TRACE 2016-07-15 17:41:01,256 [shard 0] seastar - starting flush, id=219 TRACE 2016-07-15 17:41:01,256 seastar - running fdatasync() 
from 0 id=219 TRACE 2016-07-15 17:41:01,256 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,256 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Data.db: sealing TRACE 2016-07-15 17:41:01,256 [shard 0] seastar - flush done, id=219 TRACE 2016-07-15 17:41:01,256 [shard 0] seastar - starting flush, id=220 TRACE 2016-07-15 17:41:01,256 seastar - running fdatasync() from 0 id=220 TRACE 2016-07-15 17:41:01,256 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,256 [shard 0] seastar - flush done, id=220 TRACE 2016-07-15 17:41:01,256 [shard 0] seastar - starting flush, id=221 TRACE 2016-07-15 17:41:01,256 seastar - running fdatasync() from 0 id=221 TRACE 2016-07-15 17:41:01,280 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,280 [shard 0] seastar - flush done, id=221 TRACE 2016-07-15 17:41:01,280 [shard 0] seastar - starting flush, id=222 TRACE 2016-07-15 17:41:01,281 seastar - running fdatasync() from 0 id=222 TRACE 2016-07-15 17:41:01,281 [shard 0] seastar - starting flush, id=223 TRACE 2016-07-15 17:41:01,293 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,294 seastar - running fdatasync() from 0 id=223 TRACE 2016-07-15 17:41:01,294 [shard 0] seastar - flush done, id=222 DEBUG 2016-07-15 17:41:01,294 [shard 0] sstable - SSTable with generation 84 of system.schema_usertypes was sealed successfully. TRACE 2016-07-15 17:41:01,294 [shard 0] database - Written. Opening the sstable... TRACE 2016-07-15 17:41:01,310 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,310 [shard 0] seastar - flush done, id=223 DEBUG 2016-07-15 17:41:01,310 [shard 0] sstable - SSTable with generation 95 of system.schema_columnfamilies was sealed successfully. TRACE 2016-07-15 17:41:01,310 [shard 0] database - Written. Opening the sstable... 
TRACE 2016-07-15 17:41:01,310 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-Data.db: sealing TRACE 2016-07-15 17:41:01,310 [shard 0] seastar - starting flush, id=224 TRACE 2016-07-15 17:41:01,310 seastar - running fdatasync() from 0 id=224 TRACE 2016-07-15 17:41:01,310 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,310 [shard 0] seastar - flush done, id=224 TRACE 2016-07-15 17:41:01,310 [shard 0] seastar - starting flush, id=225 TRACE 2016-07-15 17:41:01,310 seastar - running fdatasync() from 0 id=225 TRACE 2016-07-15 17:41:01,324 [shard 0] query_processor - execute_internal: "INSERT INTO system.peers (peer, schema_version) VALUES (?, ?)" (127.0.0.3, 67d1e0b4-d995-38fa-9e92-075d046a09fe) TRACE 2016-07-15 17:41:01,324 [shard 0] database - apply {system.peers key {key: pk{00047f000003}, token:-4598924402677416620} data {mutation_partition: {tombstone: none} () static {row: } clustered {rows_entry: ckp{} {deletable_row: {row_marker 1468597261324000 0 0} {tombstone: none} {row: {column: 6 01000537ae72144e e067d1e0b4d99538fa9e92075d046a09fe}}}}}} DEBUG 2016-07-15 17:41:01,325 [shard 0] migration_manager - Submitting migration task for 127.0.0.3 TRACE 2016-07-15 17:41:01,327 seastar - fdatasync() done TRACE 2016-07-15 17:41:01,327 [shard 0] seastar - flush done, id=225 DEBUG 2016-07-15 17:41:01,327 [shard 0] sstable - SSTable with generation 85 of system.schema_triggers was sealed successfully. 
DEBUG 2016-07-15 17:41:01,327 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Data.db done INFO 2016-07-15 17:41:01,327 [shard 0] compaction - Compacting [/home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-81-Data.db:level=0, /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-82-Data.db:level=0, /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-83-Data.db:level=0, /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Data.db:level=0, ] DEBUG 2016-07-15 17:41:01,328 [shard 0] database - Memtable for /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Data.db replaced DEBUG 2016-07-15 17:41:01,328 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Data.db done DEBUG 2016-07-15 17:41:01,328 [shard 0] database - Memtable for /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Data.db replaced TRACE 2016-07-15 17:41:01,328 [shard 0] schema_tables - Reading new schema TRACE 2016-07-15 17:41:01,328 [shard 0] schema_tables - Merging keyspaces INFO 2016-07-15 17:41:01,328 [shard 0] schema_tables - Dropping keyspace testxyz TRACE 2016-07-15 17:41:01,328 [shard 0] schema_tables - Merging tables TRACE 2016-07-15 17:41:01,328 [shard 0] schema_tables - Merging types TRACE 2016-07-15 17:41:01,328 [shard 0] schema_tables - Dropping keyspaces TRACE 2016-07-15 17:41:01,329 [shard 0] schema_tables - Schema
merged ```
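The report quantifies the problem by counting the `fdatasync()` calls issued during the schema-tables flush. A minimal sketch of how such a count can be extracted from a trace like the one above (the `running fdatasync() from <shard> id=<n>` pattern is taken from the log; the helper name `count_fdatasync` is hypothetical):

```python
import re

def count_fdatasync(log_text: str) -> int:
    # Each issued sync appears in the trace as a
    # "running fdatasync() from <shard> id=<n>" line.
    return len(re.findall(r"running fdatasync\(\) from \d+ id=\d+", log_text))

sample = (
    "TRACE 2016-07-15 17:41:00,500 seastar - running fdatasync() from 0 id=179\n"
    "TRACE 2016-07-15 17:41:00,528 seastar - fdatasync() done\n"
    "TRACE 2016-07-15 17:41:00,528 seastar - running fdatasync() from 0 id=180\n"
)
print(count_fdatasync(sample))  # → 2
```

Pairing this count with the timestamps of the matching `fdatasync() done` lines gives a rough estimate of how much of the statement's latency is spent waiting on the device.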
## Schema change statements are slow due to memtable flush latency

Installation details
Scylla version (or git commit hash): any

Executing DDL statements takes significantly more time on Scylla than on Cassandra. For instance, `DROP KEYSPACE` takes about a second on an idle server. I traced that down to the latency of the flush of the schema tables. The `CREATE KEYSPACE` statement is noticeably faster than `DROP KEYSPACE` because it flushes far fewer tables.

It looks like the latency comes mainly from a large number of `fdatasync()` calls, which we execute sequentially during the schema-tables flush. I counted the `fdatasync()` calls; when I disable them, `DROP KEYSPACE` time drops substantially. Maybe some of them could be avoided or parallelized.

The detailed trace during `DROP KEYSPACE` is the log above: the schema merge takes the merge lock, reads the old schema, applies the schema changes (partition tombstones for `system.schema_keyspaces`, `system.schema_columnfamilies`, `system.schema_columns`, `system.schema_triggers`, `system.schema_usertypes`, and `system.IndexInfo`), seals the active memtable of each of those tables, and then flushes them, running an `fdatasync()` after writing every component file (TOC, Data, Digest, CRC, Summary, Filter, Statistics) of every new sstable, with each sync waiting for the previous one to complete.
schema triggers was sealed successfully trace database written opening the sstable debug sstable sstable with generation of system schema columns was sealed successfully trace database written opening the sstable debug database flushing to home tgrabiec ccm scylla data system schema triggers system schema triggers ka data db done info compaction compacting home tgrabiec ccm scylla data system schema triggers system schema triggers ka data db level home tgrabiec ccm scylla data system schema triggers system schema triggers ka data db level home tgrabiec ccm scylla data system schema triggers system schema triggers ka data db level home tgrabiec ccm scylla data system schema triggers system schema triggers ka data db level debug database memtable for home tgrabiec ccm scylla data system schema triggers system schema triggers ka data db replaced debug database flushing to home tgrabiec ccm scylla data system schema usertypes system schema usertypes ka data db debug sstable writing toc file home tgrabiec ccm scylla data system schema usertypes system schema usertypes ka toc txt tmp debug database flushing to home tgrabiec ccm scylla data system schema columns system schema columns ka data db done debug database memtable for home tgrabiec ccm scylla data system schema columns system schema columns ka data db replaced debug database flushing to home tgrabiec ccm scylla data system schema columnfamilies system schema columnfamilies ka data db debug sstable writing toc file home tgrabiec ccm scylla data system schema columnfamilies system schema columnfamilies ka toc txt tmp trace seastar starting flush id trace seastar running fdatasync from id trace seastar starting flush id debug sstable writing toc file home tgrabiec ccm scylla data system schema triggers system schema triggers ka toc txt tmp trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar flush done id trace seastar starting
flush id trace seastar running fdatasync from id trace seastar fdatasync done trace seastar flush done id trace seastar starting flush id trace seastar running fdatasync from id trace seastar fdatasync done trace seastar flush done id trace sstable home tgrabiec ccm scylla data system schema usertypes system schema usertypes ka data db end of stream trace sstable home tgrabiec ccm scylla data system schema columnfamilies system schema columnfamilies ka data db end of stream trace seastar starting flush id trace seastar running fdatasync from id trace seastar starting flush id trace seastar starting flush id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar flush done id trace sstable home tgrabiec ccm scylla data system schema usertypes system schema usertypes ka data db after consume end of stream trace seastar starting flush id trace seastar running fdatasync from id trace seastar fdatasync done trace sstable home tgrabiec ccm scylla data system schema columnfamilies system schema columnfamilies ka data db after consume end of stream trace seastar flush done id trace sstable home tgrabiec ccm scylla data system schema triggers system schema triggers ka data db end of stream trace seastar starting flush id trace seastar running fdatasync from id trace seastar starting flush id trace seastar starting flush id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar flush done id debug sstable writing digest file home tgrabiec ccm scylla data system schema usertypes system schema usertypes ka digest debug sstable writing digest file home tgrabiec ccm scylla data system schema columnfamilies system schema 
columnfamilies ka digest trace sstable home tgrabiec ccm scylla data system schema triggers system schema triggers ka data db after consume end of stream trace seastar starting flush id trace seastar running fdatasync from id trace seastar starting flush id trace seastar starting flush id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar flush done id debug sstable writing crc file home tgrabiec ccm scylla data system schema usertypes system schema usertypes ka crc db debug sstable writing digest file home tgrabiec ccm scylla data system schema triggers system schema triggers ka digest debug sstable writing crc file home tgrabiec ccm scylla data system schema columnfamilies system schema columnfamilies ka crc db trace seastar starting flush id trace seastar starting flush id trace seastar running fdatasync from id trace seastar starting flush id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar flush done id debug sstable writing crc file home tgrabiec ccm scylla data system schema triggers system schema triggers ka crc db trace sstable home tgrabiec ccm scylla data system schema usertypes system schema usertypes ka data db after finish file writer debug sstable writing summary db file home tgrabiec ccm scylla data system schema usertypes system schema usertypes ka summary db trace sstable home tgrabiec ccm scylla data system schema columnfamilies system schema columnfamilies ka data db after finish file writer debug sstable writing summary db file home tgrabiec ccm scylla data system schema columnfamilies system schema columnfamilies ka summary db trace seastar starting flush id trace seastar 
running fdatasync from id trace seastar starting flush id trace seastar starting flush id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar flush done id trace sstable home tgrabiec ccm scylla data system schema triggers system schema triggers ka data db after finish file writer debug sstable writing summary db file home tgrabiec ccm scylla data system schema triggers system schema triggers ka summary db debug sstable writing filter db file home tgrabiec ccm scylla data system schema usertypes system schema usertypes ka filter db debug sstable writing filter db file home tgrabiec ccm scylla data system schema columnfamilies system schema columnfamilies ka filter db trace seastar starting flush id trace seastar running fdatasync from id trace seastar starting flush id trace seastar starting flush id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar flush done id debug sstable writing filter db file home tgrabiec ccm scylla data system schema triggers system schema triggers ka filter db debug sstable writing statistics db file home tgrabiec ccm scylla data system schema usertypes system schema usertypes ka statistics db debug sstable writing statistics db file home tgrabiec ccm scylla data system schema columnfamilies system schema columnfamilies ka statistics db trace seastar starting flush id trace seastar starting flush id trace seastar running fdatasync from id trace seastar starting flush id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar 
fdatasync done trace seastar flush done id debug sstable writing statistics db file home tgrabiec ccm scylla data system schema triggers system schema triggers ka statistics db trace sstable home tgrabiec ccm scylla data system schema usertypes system schema usertypes ka data db sealing trace seastar starting flush id trace seastar running fdatasync from id trace seastar fdatasync done trace sstable home tgrabiec ccm scylla data system schema columnfamilies system schema columnfamilies ka data db sealing trace seastar flush done id trace seastar starting flush id trace seastar running fdatasync from id trace seastar fdatasync done trace seastar flush done id trace seastar starting flush id trace seastar running fdatasync from id trace seastar fdatasync done trace seastar flush done id trace seastar starting flush id trace seastar running fdatasync from id trace seastar starting flush id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id debug sstable sstable with generation of system schema usertypes was sealed successfully trace database written opening the sstable trace seastar fdatasync done trace seastar flush done id debug sstable sstable with generation of system schema columnfamilies was sealed successfully trace database written opening the sstable trace sstable home tgrabiec ccm scylla data system schema triggers system schema triggers ka data db sealing trace seastar starting flush id trace seastar running fdatasync from id trace seastar fdatasync done trace seastar flush done id trace seastar starting flush id trace seastar running fdatasync from id trace query processor execute internal insert into system peers peer schema version values trace database apply system peers key key pk token data mutation partition tombstone none static row clustered rows entry ckp deletable row row marker tombstone none row column debug migration manager submitting migration task for trace seastar fdatasync done trace seastar 
flush done id debug sstable sstable with generation of system schema triggers was sealed successfully debug database flushing to home tgrabiec ccm scylla data system schema usertypes system schema usertypes ka data db done info compaction compacting home tgrabiec ccm scylla data system schema usertypes system schema usertypes ka data db level home tgrabiec ccm scylla data system schema usertypes system schema usertypes ka data db level home tgrabiec ccm scylla data system schema usertypes system schema usertypes ka data db level home tgrabiec ccm scylla data system schema usertypes system schema usertypes ka data db level debug database memtable for home tgrabiec ccm scylla data system schema usertypes system schema usertypes ka data db replaced debug database flushing to home tgrabiec ccm scylla data system schema columnfamilies system schema columnfamilies ka data db done debug database memtable for home tgrabiec ccm scylla data system schema columnfamilies system schema columnfamilies ka data db replaced trace schema tables reading new schema trace schema tables merging keyspaces info schema tables dropping keyspace testxyz trace schema tables merging tables trace schema tables merging types trace schema tables dropping keyspaces trace schema tables schema merged
0
402,904
27,393,065,874
IssuesEvent
2023-02-28 17:32:27
DHBW-FN/OxideWM
https://api.github.com/repos/DHBW-FN/OxideWM
closed
write the readthedocs
Research Documentation
Working through the [tutorials](https://docs.readthedocs.io/en/stable/tutorial/https://oxide.readthedocs.io/en/latest/) of readthedocs to get to know all options for proper documentation. [OxideWM-readthedocs](https://oxide.readthedocs.io/en/latest/)
1.0
write the readthedocs - Working through the [tutorials](https://docs.readthedocs.io/en/stable/tutorial/https://oxide.readthedocs.io/en/latest/) of readthedocs to get to know all options for proper documentation. [OxideWM-readthedocs](https://oxide.readthedocs.io/en/latest/)
non_main
write the readthedocs working through the of readthedocs to get to know all options for proper documentation
0
477,897
13,769,654,679
IssuesEvent
2020-10-07 18:57:51
level73/membernet
https://api.github.com/repos/level73/membernet
closed
NES/CBI Reports front-end not displaying all back-end fields
Platform: Membernet Priority: Critical Project: M&E Type: Bug
under changes in polices (2.1)and practices (2.2), the first section where it asks for a description, is not displaying in the front end. We need to show all back end fields on the front end, as the sections have multiple boxes. If you need an example check report number 123
1.0
NES/CBI Reports front-end not displaying all back-end fields - under changes in polices (2.1)and practices (2.2), the first section where it asks for a description, is not displaying in the front end. We need to show all back end fields on the front end, as the sections have multiple boxes. If you need an example check report number 123
non_main
nes cbi reports front end not displaying all back end fields under changes in polices and practices the first section where it asks for a description is not displaying in the front end we need to show all back end fields on the front end as the sections have multiple boxes if you need an example check report number
0
1,143
2,698,792,789
IssuesEvent
2015-04-03 11:07:20
neovim/neovim
https://api.github.com/repos/neovim/neovim
closed
travis: test on OSX too
buildsystem
Travis is getting awesomer and awesomer. We once again see the possibility of augmenting our test capability: travis seems to support OSX now: **NOTE:** I haven't been able to find if it's now possible to specify all that is necessary in one `.travis.yml` file. This used to be impossible in earlier betas. Information seems scarce. - https://github.com/travis-ci/travis-ci/issues/216 (people usually link to this when they have a PR for their project, so we can look at examples there) - http://docs.travis-ci.com/user/osx-ci-environment/ - https://github.com/citra-emu/citra/pull/7 (this seems like a good example to follow) It seems one needs the `os` key: ```yaml os: - osx - linux ``` I don't see a need of building all our current lines on OSX though, just one should be enough. Ideally there would also come a FreeBSD build with time. Travis [doesn't support FreeBSD at the moment, though](https://github.com/travis-ci/travis-ci/issues/1818). <bountysource-plugin> --- Want to back this issue? **[Place a bounty on it!](https://www.bountysource.com/issues/2345752-travis-test-on-osx-too?utm_campaign=plugin&utm_content=tracker%2F461131&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F461131&utm_medium=issues&utm_source=github). </bountysource-plugin>
1.0
travis: test on OSX too - Travis is getting awesomer and awesomer. We once again see the possibility of augmenting our test capability: travis seems to support OSX now: **NOTE:** I haven't been able to find if it's now possible to specify all that is necessary in one `.travis.yml` file. This used to be impossible in earlier betas. Information seems scarce. - https://github.com/travis-ci/travis-ci/issues/216 (people usually link to this when they have a PR for their project, so we can look at examples there) - http://docs.travis-ci.com/user/osx-ci-environment/ - https://github.com/citra-emu/citra/pull/7 (this seems like a good example to follow) It seems one needs the `os` key: ```yaml os: - osx - linux ``` I don't see a need of building all our current lines on OSX though, just one should be enough. Ideally there would also come a FreeBSD build with time. Travis [doesn't support FreeBSD at the moment, though](https://github.com/travis-ci/travis-ci/issues/1818). <bountysource-plugin> --- Want to back this issue? **[Place a bounty on it!](https://www.bountysource.com/issues/2345752-travis-test-on-osx-too?utm_campaign=plugin&utm_content=tracker%2F461131&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F461131&utm_medium=issues&utm_source=github). </bountysource-plugin>
non_main
travis test on osx too travis is getting awesomer and awesomer we once again see the possibility of augmenting our test capability travis seems to support osx now note i haven t been able to find if it s now possible to specify all that is necessary in one travis yml file this used to be impossible in earlier betas information seems scarce people usually link to this when they have a pr for their project so we can look at examples there this seems like a good example to follow it seems one needs the os key yaml os osx linux i don t see a need of building all our current lines on osx though just one should be enough ideally there would also come a freebsd build with time travis want to back this issue we accept bounties via
0
60,754
3,133,852,458
IssuesEvent
2015-09-10 06:03:59
google/paco
https://api.github.com/repos/google/paco
opened
Web ui: Download CSV has no header row and several columns are populated with undefined
Component-Server Component-UI Priority-High
It looks like empty cells are generating with the value "undefined". <img width="913" alt="screen shot 2015-09-09 at 11 00 08 pm" src="https://cloud.githubusercontent.com/assets/1422459/9781254/00cccbc8-5747-11e5-9f01-54f58ee52937.png">
1.0
Web ui: Download CSV has no header row and several columns are populated with undefined - It looks like empty cells are generating with the value "undefined". <img width="913" alt="screen shot 2015-09-09 at 11 00 08 pm" src="https://cloud.githubusercontent.com/assets/1422459/9781254/00cccbc8-5747-11e5-9f01-54f58ee52937.png">
non_main
web ui download csv has no header row and several columns are populated with undefined it looks like empty cells are generating with the value undefined img width alt screen shot at pm src
0
50,679
13,551,410,925
IssuesEvent
2020-09-17 11:03:47
loftwah/trouble.gg
https://api.github.com/repos/loftwah/trouble.gg
closed
WS-2019-0424 (Medium) detected in elliptic-6.5.2.tgz - autoclosed
security vulnerability
## WS-2019-0424 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>elliptic-6.5.2.tgz</b></p></summary> <p>EC cryptography</p> <p>Library home page: <a href="https://registry.npmjs.org/elliptic/-/elliptic-6.5.2.tgz">https://registry.npmjs.org/elliptic/-/elliptic-6.5.2.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/trouble.gg/wp-content/themes/twentytwenty/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/trouble.gg/wp-content/themes/twentytwenty/node_modules/elliptic/package.json</p> <p> Dependency Hierarchy: - scripts-5.1.0.tgz (Root Library) - webpack-4.43.0.tgz - node-libs-browser-2.2.1.tgz - crypto-browserify-3.12.0.tgz - browserify-sign-4.2.0.tgz - :x: **elliptic-6.5.2.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/loftwah/trouble.gg/commit/41cee91bec8d7f7b602f0068a0d8f245894224b8">41cee91bec8d7f7b602f0068a0d8f245894224b8</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> all versions of elliptic are vulnerable to Timing Attack through side-channels. 
<p>Publish Date: 2019-11-13 <p>URL: <a href=https://github.com/indutny/elliptic/commit/ec735edde187a43693197f6fa3667ceade751a3a>WS-2019-0424</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Adjacent - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
WS-2019-0424 (Medium) detected in elliptic-6.5.2.tgz - autoclosed - ## WS-2019-0424 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>elliptic-6.5.2.tgz</b></p></summary> <p>EC cryptography</p> <p>Library home page: <a href="https://registry.npmjs.org/elliptic/-/elliptic-6.5.2.tgz">https://registry.npmjs.org/elliptic/-/elliptic-6.5.2.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/trouble.gg/wp-content/themes/twentytwenty/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/trouble.gg/wp-content/themes/twentytwenty/node_modules/elliptic/package.json</p> <p> Dependency Hierarchy: - scripts-5.1.0.tgz (Root Library) - webpack-4.43.0.tgz - node-libs-browser-2.2.1.tgz - crypto-browserify-3.12.0.tgz - browserify-sign-4.2.0.tgz - :x: **elliptic-6.5.2.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/loftwah/trouble.gg/commit/41cee91bec8d7f7b602f0068a0d8f245894224b8">41cee91bec8d7f7b602f0068a0d8f245894224b8</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> all versions of elliptic are vulnerable to Timing Attack through side-channels. 
<p>Publish Date: 2019-11-13 <p>URL: <a href=https://github.com/indutny/elliptic/commit/ec735edde187a43693197f6fa3667ceade751a3a>WS-2019-0424</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Adjacent - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_main
ws medium detected in elliptic tgz autoclosed ws medium severity vulnerability vulnerable library elliptic tgz ec cryptography library home page a href path to dependency file tmp ws scm trouble gg wp content themes twentytwenty package json path to vulnerable library tmp ws scm trouble gg wp content themes twentytwenty node modules elliptic package json dependency hierarchy scripts tgz root library webpack tgz node libs browser tgz crypto browserify tgz browserify sign tgz x elliptic tgz vulnerable library found in head commit a href vulnerability details all versions of elliptic are vulnerable to timing attack through side channels publish date url a href cvss score details base score metrics exploitability metrics attack vector adjacent attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact high availability impact none for more information on scores click a href step up your open source security game with whitesource
0
626,198
19,803,521,772
IssuesEvent
2022-01-19 02:15:28
vincetiu8/zombie-game
https://api.github.com/repos/vincetiu8/zombie-game
closed
Armor and protective gear
area/weapons priority/low type/question
Many survival games like Minecraft, and well Minecraft, have an armor system, where you can craft (or in this case buy with gold) better armor so you can take less damage.
1.0
Armor and protective gear - Many survival games like Minecraft, and well Minecraft, have an armor system, where you can craft (or in this case buy with gold) better armor so you can take less damage.
non_main
armor and protective gear many survival games like minecraft and well minecraft have an armor system where you can craft or in this case buy with gold better armor so you can take less damage
0
620
4,116,972,256
IssuesEvent
2016-06-08 04:24:06
Particular/ServiceControl
https://api.github.com/repos/Particular/ServiceControl
closed
SCMU Add license screen gives no feedback when updating a license
Tag: Installer Tag: Maintainer Prio Type: Refactoring
When you update a perpetual license to one that has a new maintenance date there is no visual queue that the update was successful. //cc @sergioc as discussed
True
SCMU Add license screen gives no feedback when updating a license - When you update a perpetual license to one that has a new maintenance date there is no visual queue that the update was successful. //cc @sergioc as discussed
main
scmu add license screen gives no feedback when updating a license when you update a perpetual license to one that has a new maintenance date there is no visual queue that the update was successful cc sergioc as discussed
1
188,754
6,781,910,035
IssuesEvent
2017-10-30 04:35:09
HelpyTeam/HelpyService
https://api.github.com/repos/HelpyTeam/HelpyService
closed
Implement view requests list for staff page
DONE priority/1 Staff Management
# Overview Implement view requests list for staff page # Target - [x] Implement view pending requests list for staff page - [x] Implement view closed requests list for staff page
1.0
Implement view requests list for staff page - # Overview Implement view requests list for staff page # Target - [x] Implement view pending requests list for staff page - [x] Implement view closed requests list for staff page
non_main
implement view requests list for staff page overview implement view requests list for staff page target implement view pending requests list for staff page implement view closed requests list for staff page
0
54,749
30,344,886,608
IssuesEvent
2023-07-11 14:50:21
Howdju/howdju
https://api.github.com/repos/Howdju/howdju
opened
Add SourceEditorField's DialogContainer to DOM once.
performance
We would probably need to: - Move the DialogContainer to another component (`SourceDescriptionHelpDialog` that is imported into the App. - User a Provider to update that dialog's visibility from within the SourceEditorFields callback showSourceDescriptionHelpDialog.
True
Add SourceEditorField's DialogContainer to DOM once. - We would probably need to: - Move the DialogContainer to another component (`SourceDescriptionHelpDialog` that is imported into the App. - User a Provider to update that dialog's visibility from within the SourceEditorFields callback showSourceDescriptionHelpDialog.
non_main
add sourceeditorfield s dialogcontainer to dom once we would probably need to move the dialogcontainer to another component sourcedescriptionhelpdialog that is imported into the app user a provider to update that dialog s visibility from within the sourceeditorfields callback showsourcedescriptionhelpdialog
0
85,184
7,963,452,084
IssuesEvent
2018-07-13 17:38:38
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
roachtest: timeout failed on <unknown branch>
C-test-failure O-robot
SHA: https://github.com/cockroachdb/cockroach/commits/benesch-test Parameters: Failed test: https://viewLog.html?buildId=benesch-test&tab=buildLog ``` test.go:770: test timed out (10ms) ```
1.0
roachtest: timeout failed on <unknown branch> - SHA: https://github.com/cockroachdb/cockroach/commits/benesch-test Parameters: Failed test: https://viewLog.html?buildId=benesch-test&tab=buildLog ``` test.go:770: test timed out (10ms) ```
non_main
roachtest timeout failed on sha parameters failed test test go test timed out
0
89,971
25,939,116,341
IssuesEvent
2022-12-16 16:38:03
TrueBlocks/trueblocks-docker
https://api.github.com/repos/TrueBlocks/trueblocks-docker
closed
Choose consistent convention for tagging releases
enhancement TB-build
In the docker versions, we use `0.40.0-beta` for version tagging. In the core repo, we use `v0.40.0-beta` for tagging. I prefer `v0.40.0-beta` format (with the `v`), but it seems counter to the way docker does it. Choices: 1) leave them different 2) switch to `0.40.0-beta` for all repos 3) switch to `v0.40.0-beta` for all repos
1.0
Choose consistent convention for tagging releases - In the docker versions, we use `0.40.0-beta` for version tagging. In the core repo, we use `v0.40.0-beta` for tagging. I prefer `v0.40.0-beta` format (with the `v`), but it seems counter to the way docker does it. Choices: 1) leave them different 2) switch to `0.40.0-beta` for all repos 3) switch to `v0.40.0-beta` for all repos
non_main
choose consistent convention for tagging releases in the docker versions we use beta for version tagging in the core repo we use beta for tagging i prefer beta format with the v but it seems counter to the way docker does it choices leave them different switch to beta for all repos switch to beta for all repos
0
74,130
24,960,643,187
IssuesEvent
2022-11-01 15:15:56
primefaces/primefaces
https://api.github.com/repos/primefaces/primefaces
closed
CSP: Inline event handlers are not triggered
:lady_beetle: defect
### Describe the bug With CSP enabled. In same request, components that are updated and their inline event handlers triggered via `PrimeFaces.current( ).executeScript("$('selector').click()")` will not be able to be executed. In the eval section of the partial response, scripts added for execution in invoke application phase will come before scripts added by CSP in render response phase, but I expect that CSP generated script to come before, not after. This sequence: `update(events removed) -> trigger events-> events added` is wrongly generated. The same list is used by [executeScript](https://github.com/primefaces/primefaces/blob/2b1384a3c6464c995e029757eeebd8cb6144f4d9/primefaces/src/main/java/org/primefaces/PrimeFaces.java#L133) and by [writeJavascriptHandlers](https://github.com/primefaces/primefaces/blob/2b1384a3c6464c995e029757eeebd8cb6144f4d9/primefaces/src/main/java/org/primefaces/csp/CspPartialResponseWriter.java#L199). ### Reproducer ``` <p:commandLink id="cl" value="link" onclick="alert()"> <p:commandButton value="press me" action="#{mb.press}" update="cl" /> ``` ``` public void press( ) { PrimeFaces.current( ).executeScript( "$('#cl').click()" ); } ``` ### Expected behavior Re-add events in the update section or reorder the eval section so as the CSP generated scripts came first. To have events attached before triggering them. ### PrimeFaces edition _No response_ ### PrimeFaces version 11.0.0 ### Theme _No response_ ### JSF implementation Mojarra ### JSF version 2.2.19 ### Java version 1.8 ### Browser(s) _No response_
1.0
CSP: Inline event handlers are not triggered - ### Describe the bug With CSP enabled. In same request, components that are updated and their inline event handlers triggered via `PrimeFaces.current( ).executeScript("$('selector').click()")` will not be able to be executed. In the eval section of the partial response, scripts added for execution in invoke application phase will come before scripts added by CSP in render response phase, but I expect that CSP generated script to come before, not after. This sequence: `update(events removed) -> trigger events-> events added` is wrongly generated. The same list is used by [executeScript](https://github.com/primefaces/primefaces/blob/2b1384a3c6464c995e029757eeebd8cb6144f4d9/primefaces/src/main/java/org/primefaces/PrimeFaces.java#L133) and by [writeJavascriptHandlers](https://github.com/primefaces/primefaces/blob/2b1384a3c6464c995e029757eeebd8cb6144f4d9/primefaces/src/main/java/org/primefaces/csp/CspPartialResponseWriter.java#L199). ### Reproducer ``` <p:commandLink id="cl" value="link" onclick="alert()"> <p:commandButton value="press me" action="#{mb.press}" update="cl" /> ``` ``` public void press( ) { PrimeFaces.current( ).executeScript( "$('#cl').click()" ); } ``` ### Expected behavior Re-add events in the update section or reorder the eval section so as the CSP generated scripts came first. To have events attached before triggering them. ### PrimeFaces edition _No response_ ### PrimeFaces version 11.0.0 ### Theme _No response_ ### JSF implementation Mojarra ### JSF version 2.2.19 ### Java version 1.8 ### Browser(s) _No response_
non_main
csp inline event handlers are not triggered describe the bug with csp enabled in same request components that are updated and their inline event handlers triggered via primefaces current executescript selector click will not be able to be executed in the eval section of the partial response scripts added for execution in invoke application phase will come before scripts added by csp in render response phase but i expect that csp generated script to come before not after this sequence update events removed trigger events events added is wrongly generated the same list is used by and by reproducer public void press primefaces current executescript cl click expected behavior re add events in the update section or reorder the eval section so as the csp generated scripts came first to have events attached before triggering them primefaces edition no response primefaces version theme no response jsf implementation mojarra jsf version java version browser s no response
0
5,636
28,361,692,335
IssuesEvent
2023-04-12 11:13:12
beyarkay/eskom-calendar
https://api.github.com/repos/beyarkay/eskom-calendar
closed
Missing area schedule
waiting-on-maintainer missing-area-schedule
**What area(s) couldn't you find on [eskomcalendar.co.za](https://eskomcalendar.co.za/ec)?** Please also give the province/municipality, our beautiful country has a surprising number of places that are named the same as each other. If you know what your area is named on EskomSePush, including that also helps a lot. WESTVILLE, ETHEKWINI. As per Eskomsepush **Where did you hear about [eskomcalendar.co.za](https://eskomcalendar.co.za/ec)?** This really helps us figure out what's working! **Any other information** If you've got any other info you think might be helpful, feel free to leave it here
True
Missing area schedule - **What area(s) couldn't you find on [eskomcalendar.co.za](https://eskomcalendar.co.za/ec)?** Please also give the province/municipality, our beautiful country has a surprising number of places that are named the same as each other. If you know what your area is named on EskomSePush, including that also helps a lot. WESTVILLE, ETHEKWINI. As per Eskomsepush **Where did you hear about [eskomcalendar.co.za](https://eskomcalendar.co.za/ec)?** This really helps us figure out what's working! **Any other information** If you've got any other info you think might be helpful, feel free to leave it here
main
missing area schedule what area s couldn t you find on please also give the province municipality our beautiful country has a surprising number of places that are named the same as each other if you know what your area is named on eskomsepush including that also helps a lot westville ethekwini as per eskomsepush where did you hear about this really helps us figure out what s working any other information if you ve got any other info you think might be helpful feel free to leave it here
1
4,386
22,317,417,675
IssuesEvent
2022-06-14 00:26:43
carbon-design-system/carbon
https://api.github.com/repos/carbon-design-system/carbon
closed
[a11y]: Radio buttons not working for JAWS screenreader as expected
type: a11y ♿ component: button browser: edge status: needs triage 🕵️‍♀️ status: waiting for maintainer response 💬 screen-reader: JAWS
### Package carbon-components ### Browser Edge ### Operating System Windows ### Package version 10.56.0 ### React version 16.13.1 ### Automated testing tool and ruleset manual test ### Assistive technology JAWS 2020 ### Description Focus goes to the Carbon radio buttons using JAWS. But when user uses UP/DOWN arrow keys it reads Radio button with label and state and when the user uses UP/DOWN arrow key again the focus navigates to the span text and JAWS reads the label. This will confuse screenreader users and also not the expected behaviour of JAWS and radio buttons. same issue is reproducible on [Carbon-design radio button](https://www.carbondesignsystem.com/components/radio-button/usage#live-demo) ### WCAG 2.1 Violation _No response_ ### CodeSandbox example https://codesandbox.io/s/ecstatic-drake-3kcv4e ### Steps to reproduce 1. Using the latest Edge browser and JAWS 2020 2. Navigate to the radio buttons and then just press the down arrow key 3. ISSUE: JAWS reads out the radio label at the input checkbox and then on next down arrow keypress, it reads out the text aswell. ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md) - [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems
True
[a11y]: Radio buttons not working for JAWS screenreader as expected - ### Package carbon-components ### Browser Edge ### Operating System Windows ### Package version 10.56.0 ### React version 16.13.1 ### Automated testing tool and ruleset manual test ### Assistive technology JAWS 2020 ### Description Focus goes to the Carbon radio buttons using JAWS. But when user uses UP/DOWN arrow keys it reads Radio button with label and state and when the user uses UP/DOWN arrow key again the focus navigates to the span text and JAWS reads the label. This will confuse screenreader users and also not the expected behaviour of JAWS and radio buttons. same issue is reproducible on [Carbon-design radio button](https://www.carbondesignsystem.com/components/radio-button/usage#live-demo) ### WCAG 2.1 Violation _No response_ ### CodeSandbox example https://codesandbox.io/s/ecstatic-drake-3kcv4e ### Steps to reproduce 1. Using the latest Edge browser and JAWS 2020 2. Navigate to the radio buttons and then just press the down arrow key 3. ISSUE: JAWS reads out the radio label at the input checkbox and then on next down arrow keypress, it reads out the text aswell. ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md) - [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems
main
radio buttons not working for jaws screenreader as expected package carbon components browser edge operating system windows package version react version automated testing tool and ruleset manual test assistive technology jaws description focus goes to the carbon radio buttons using jaws but when user uses up down arrow keys it reads radio button with label and state and when the user uses up down arrow key again the focus navigates to the span text and jaws reads the label this will confuse screenreader users and also not the expected behaviour of jaws and radio buttons same issue is reproducible on wcag violation no response codesandbox example steps to reproduce using the latest edge browser and jaws navigate to the radio buttons and then just press the down arrow key issue jaws reads out the radio label at the input checkbox and then on next down arrow keypress it reads out the text aswell code of conduct i agree to follow this project s i checked the for duplicate problems
1
1,858
6,577,407,751
IssuesEvent
2017-09-12 00:42:08
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
mount using nfs4 and state=mounted triggers error on already mounted directory
affects_2.1 bug_report waiting_on_maintainer
I didn't find exactly this problem reported, only other bug reports related to nfs mounts. ##### ISSUE TYPE - Bug report ##### COMPONENT NAME mount module ##### ANSIBLE VERSION ``` ansible 2.1.0 config file = /home/alex/repos/infra/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Standard. ##### OS / ENVIRONMENT CentOS 7 -> CentOS 7 Ubunt 15.10/16.04 -> CentOS 7 CentOS 7 -> Ubuntu 15.10/16.04 In this particular case, only CentOS 7 -> CentOS 7 ##### SUMMARY NFS mount point with state=mounted triggers error if already mounted. ##### STEPS TO REPRODUCE Put a NFS mount stanza in a playbook and run the playbook when the mount is already mounted: ``` - name: configure fstab for alpha action: mount name=/srv/foo/alpha src=fileserver:/mdarchive/alpha fstype=nfs4 opts=rw,hard,tcp,intr,nolock,rsize=1048576,wsize=1048576,_netdev state=mounted ``` Run the playbook, get an error: ``` TASK [configure fstab for alpha] *********************************************** fatal: [cluster-node01]: FAILED! => {"changed": false, "failed": true, "msg": "Error mounting /srv/foo/alpha: mount.nfs4: /srv/foo/alpha is busy or already mounted\n"} ``` ##### EXPECTED RESULTS I expect the documented behaviour: http://docs.ansible.com/ansible/mount_module.html `If mounted or unmounted, the device will be actively mounted or unmounted as needed and appropriately configured in fstab.` Obviously, if a mount is already mounted, mounting it again is _not_ needed and triggers the error and further execution of the playbook. ##### ACTUAL RESULTS ``` TASK [configure fstab for alpha] *********************************************** fatal: [cluster-node01]: FAILED! => {"changed": false, "failed": true, "msg": "Error mounting /srv/foo/alpha: mount.nfs4: /srv/foo/alpha is busy or already mounted\n"} ```
True
mount using nfs4 and state=mounted triggers error on already mounted directory - I didn't find exactly this problem reported, only other bug reports related to nfs mounts. ##### ISSUE TYPE - Bug report ##### COMPONENT NAME mount module ##### ANSIBLE VERSION ``` ansible 2.1.0 config file = /home/alex/repos/infra/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Standard. ##### OS / ENVIRONMENT CentOS 7 -> CentOS 7 Ubunt 15.10/16.04 -> CentOS 7 CentOS 7 -> Ubuntu 15.10/16.04 In this particular case, only CentOS 7 -> CentOS 7 ##### SUMMARY NFS mount point with state=mounted triggers error if already mounted. ##### STEPS TO REPRODUCE Put a NFS mount stanza in a playbook and run the playbook when the mount is already mounted: ``` - name: configure fstab for alpha action: mount name=/srv/foo/alpha src=fileserver:/mdarchive/alpha fstype=nfs4 opts=rw,hard,tcp,intr,nolock,rsize=1048576,wsize=1048576,_netdev state=mounted ``` Run the playbook, get an error: ``` TASK [configure fstab for alpha] *********************************************** fatal: [cluster-node01]: FAILED! => {"changed": false, "failed": true, "msg": "Error mounting /srv/foo/alpha: mount.nfs4: /srv/foo/alpha is busy or already mounted\n"} ``` ##### EXPECTED RESULTS I expect the documented behaviour: http://docs.ansible.com/ansible/mount_module.html `If mounted or unmounted, the device will be actively mounted or unmounted as needed and appropriately configured in fstab.` Obviously, if a mount is already mounted, mounting it again is _not_ needed and triggers the error and further execution of the playbook. ##### ACTUAL RESULTS ``` TASK [configure fstab for alpha] *********************************************** fatal: [cluster-node01]: FAILED! => {"changed": false, "failed": true, "msg": "Error mounting /srv/foo/alpha: mount.nfs4: /srv/foo/alpha is busy or already mounted\n"} ```
main
mount using and state mounted triggers error on already mounted directory i didn t find exactly this problem reported only other bug reports related to nfs mounts issue type bug report component name mount module ansible version ansible config file home alex repos infra ansible ansible cfg configured module search path default w o overrides configuration standard os environment centos centos ubunt centos centos ubuntu in this particular case only centos centos summary nfs mount point with state mounted triggers error if already mounted steps to reproduce put a nfs mount stanza in a playbook and run the playbook when the mount is already mounted name configure fstab for alpha action mount name srv foo alpha src fileserver mdarchive alpha fstype opts rw hard tcp intr nolock rsize wsize netdev state mounted run the playbook get an error task fatal failed changed false failed true msg error mounting srv foo alpha mount srv foo alpha is busy or already mounted n expected results i expect the documented behaviour if mounted or unmounted the device will be actively mounted or unmounted as needed and appropriately configured in fstab obviously if a mount is already mounted mounting it again is not needed and triggers the error and further execution of the playbook actual results task fatal failed changed false failed true msg error mounting srv foo alpha mount srv foo alpha is busy or already mounted n
1
5,329
26,904,673,137
IssuesEvent
2023-02-06 18:05:20
aws/serverless-application-model
https://api.github.com/repos/aws/serverless-application-model
closed
Handle ParameterName value with or without '/' when creating/generating SSMParameterReadPolicy policy
type/bug area/policy-templates stage/waiting-for-release maintainer/need-response
**Description:** When using `SSMParameterReadPolicy` policy template with `ParameterName` !Ref..ed to SSM resource `SSMParameterReadPolicy` will generate incorrect arn mapping. For example: ```yml MyCredParameter: Type: AWS::SSM::Parameter Properties: Name: /prefix/environment/applications/mycred Type: String Value: !Sub | { "accessKeyId": "<Your accessKeyId>", "accessKeySecret": "<Your accessKeySecret>" } MyFunction: Type: AWS::Serverless::Function Properties: Handler: file-name.js Runtime: nodejs8.10 CodeUri: dir Policies: - SSMParameterReadPolicy: ParameterName: !Ref MyCredParameter Events: SomeEvent: Type: SNS Properties: Topic: !Ref MyTopic Environment: Variables: MY_CRED: !Ref MyCredParameter ``` Here I am creating SSM resource and lambda function that will use this SSM parameter, as well as I am using SAM policy templates to allow `MyFunction` to allow read access to `MyCredParameter`. This will deploy successfully, However, when Lambda tries to get value from SSM params with name process.env.MY_CRED, it will fail. After some debugging I found out that, My lambda function role had permission to call `arn:aws:ssm:my-region:000000000:parameter//prefix/environment/applications/mycred` SSM parameter instead of `arn:aws:ssm:my-region:000000000:parameter/prefix/environment/applications/mycred`. Notice that extra '/' after parameter in first arn. **Observed result:** This was because SAM policy templates uses `ParameterName` as is to create arn without doing any sort of validation. ```json { "SSMParameterReadPolicy": { "Description": "Gives access to a parameter to load secrets in this account. If not using default key, KMSDecryptPolicy will also be needed.", "Parameters": { "ParameterName": { "Description":"The name of the secret stored in SSM in your account." } }, "Definition": { "Statement": [ { "Effect": "Allow", "Action": [ "ssm:DescribeParameters" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "ssm:GetParameters", "ssm:GetParameter", "ssm:GetParametersByPath" ], "Resource": { "Fn::Sub": [ "arn:${AWS::Partition}:ssm:${AWS::Region}:${AWS::AccountId}:parameter/${parameterName}", { "parameterName": { "Ref": "ParameterName" } } ] } } ] } }, } ``` **Expected result:** It would be nice to have this `ParameterName` validated and add or remove `/` based on validation. Or accept `ParameterName` with `/` and do not add `/` while creating policy, since all SSM parameter name must start with `/` to be a valid SSM Name.
True
Handle ParameterName value with or without '/' when creating/generating SSMParameterReadPolicy policy - **Description:** When using `SSMParameterReadPolicy` policy template with `ParameterName` !Ref..ed to SSM resource `SSMParameterReadPolicy` will generate incorrect arn mapping. For example: ```yml MyCredParameter: Type: AWS::SSM::Parameter Properties: Name: /prefix/environment/applications/mycred Type: String Value: !Sub | { "accessKeyId": "<Your accessKeyId>", "accessKeySecret": "<Your accessKeySecret>" } MyFunction: Type: AWS::Serverless::Function Properties: Handler: file-name.js Runtime: nodejs8.10 CodeUri: dir Policies: - SSMParameterReadPolicy: ParameterName: !Ref MyCredParameter Events: SomeEvent: Type: SNS Properties: Topic: !Ref MyTopic Environment: Variables: MY_CRED: !Ref MyCredParameter ``` Here I am creating SSM resource and lambda function that will use this SSM parameter, as well as I am using SAM policy templates to allow `MyFunction` to allow read access to `MyCredParameter`. This will deploy successfully, However, when Lambda tries to get value from SSM params with name process.env.MY_CRED, it will fail. After some debugging I found out that, My lambda function role had permission to call `arn:aws:ssm:my-region:000000000:parameter//prefix/environment/applications/mycred` SSM parameter instead of `arn:aws:ssm:my-region:000000000:parameter/prefix/environment/applications/mycred`. Notice that extra '/' after parameter in first arn. **Observed result:** This was because SAM policy templates uses `ParameterName` as is to create arn without doing any sort of validation. ```json { "SSMParameterReadPolicy": { "Description": "Gives access to a parameter to load secrets in this account. If not using default key, KMSDecryptPolicy will also be needed.", "Parameters": { "ParameterName": { "Description":"The name of the secret stored in SSM in your account." } }, "Definition": { "Statement": [ { "Effect": "Allow", "Action": [ "ssm:DescribeParameters" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "ssm:GetParameters", "ssm:GetParameter", "ssm:GetParametersByPath" ], "Resource": { "Fn::Sub": [ "arn:${AWS::Partition}:ssm:${AWS::Region}:${AWS::AccountId}:parameter/${parameterName}", { "parameterName": { "Ref": "ParameterName" } } ] } } ] } }, } ``` **Expected result:** It would be nice to have this `ParameterName` validated and add or remove `/` based on validation. Or accept `ParameterName` with `/` and do not add `/` while creating policy, since all SSM parameter name must start with `/` to be a valid SSM Name.
main
handle parametername value with or without when creating generating ssmparameterreadpolicy policy description when using ssmparameterreadpolicy policy template with parametername ref ed to ssm resource ssmparameterreadpolicy will generate incorrect arn mapping for example yml mycredparameter type aws ssm parameter properties name prefix environment applications mycred type string value sub accesskeyid accesskeysecret myfunction type aws serverless function properties handler file name js runtime codeuri dir policies ssmparameterreadpolicy parametername ref mycredparameter events someevent type sns properties topic ref mytopic environment variables my cred ref mycredparameter here i am creating ssm resource and lambda function that will use this ssm parameter as well as i am using sam policy templates to allow myfunction to allow read access to mycredparameter this will deploy successfully however when lambda tries to get value from ssm params with name process env my cred it will fail after some debugging i found out that my lambda function role had permission to call arn aws ssm my region parameter prefix environment applications mycred ssm parameter instead of arn aws ssm my region parameter prefix environment applications mycred notice that extra after parameter in first arn observed result this was because sam policy templates uses parametername as is to create arn without doing any sort of validation json ssmparameterreadpolicy description gives access to a parameter to load secrets in this account if not using default key kmsdecryptpolicy will also be needed parameters parametername description the name of the secret stored in ssm in your account definition statement effect allow action ssm describeparameters resource effect allow action ssm getparameters ssm getparameter ssm getparametersbypath resource fn sub arn aws partition ssm aws region aws accountid parameter parametername parametername ref parametername expected result it would be nice to have this parametername validated and add or remove based on validation or accept parametername with and do not add while creating policy since all ssm parameter name must start with to be a valid ssm name
1
5,773
30,590,203,177
IssuesEvent
2023-07-21 16:21:27
tom-texier/ToDo-Co
https://api.github.com/repos/tom-texier/ToDo-Co
closed
Audits - 7H
maintainability
Produire un audit de code sur les deux axes suivants : **(AVANT ET APRÈS MODIFICATION)** - La qualité du code - La performance _**Qualité :** Codacy ou CodeClimate **Performance :** Profiler de Symfony, Blackfire ou New Relic_ - [x] Avant - [x] Après
True
Audits - 7H - Produire un audit de code sur les deux axes suivants : **(AVANT ET APRÈS MODIFICATION)** - La qualité du code - La performance _**Qualité :** Codacy ou CodeClimate **Performance :** Profiler de Symfony, Blackfire ou New Relic_ - [x] Avant - [x] Après
main
audits produire un audit de code sur les deux axes suivants avant et après modification la qualité du code la performance qualité codacy ou codeclimate performance profiler de symfony blackfire ou new relic avant après
1
805,349
29,518,246,551
IssuesEvent
2023-06-04 18:52:02
jrsteensen/OpenHornet
https://api.github.com/repos/jrsteensen/OpenHornet
closed
[Bug]: OH2A2A1A1-16 - JETT STATION SEL INDICATOR LEGEND DIFFUSER - listed as resin
Type: Bug/Obsolesce Category: MCAD Priority: Normal
### Discord Username Arribe ### Bug Summary Drawing / model for OH2A2A1A1-16 - JETT STATION SEL INDICATOR LEGEND DIFFUSER - listed as resin vs acrylic ### Expected Results expected part to by acrylic ### Actual Results listed as resin ### Screenshots/Images/Files _No response_ ### Applicable Part Numbers OH2A2A1A1-16 ### Release Version 1.0.0-beta.1 ### Category Mechanical (Structure/Panels/Mechanisms) ### Applicable End Item(s) Lower Instrument Panel (LIP) ### Built to print? - [X] I built (or attempted to build) the part to the OpenHornet print without any deviations. - [ ] I am not building this part to the OH print. (List deviations in detail in the Miscellaneous Info text area below.) ### Miscellaneous Info _No response_
1.0
[Bug]: OH2A2A1A1-16 - JETT STATION SEL INDICATOR LEGEND DIFFUSER - listed as resin - ### Discord Username Arribe ### Bug Summary Drawing / model for OH2A2A1A1-16 - JETT STATION SEL INDICATOR LEGEND DIFFUSER - listed as resin vs acrylic ### Expected Results expected part to by acrylic ### Actual Results listed as resin ### Screenshots/Images/Files _No response_ ### Applicable Part Numbers OH2A2A1A1-16 ### Release Version 1.0.0-beta.1 ### Category Mechanical (Structure/Panels/Mechanisms) ### Applicable End Item(s) Lower Instrument Panel (LIP) ### Built to print? - [X] I built (or attempted to build) the part to the OpenHornet print without any deviations. - [ ] I am not building this part to the OH print. (List deviations in detail in the Miscellaneous Info text area below.) ### Miscellaneous Info _No response_
non_main
jett station sel indicator legend diffuser listed as resin discord username arribe bug summary drawing model for jett station sel indicator legend diffuser listed as resin vs acrylic expected results expected part to by acrylic actual results listed as resin screenshots images files no response applicable part numbers release version beta category mechanical structure panels mechanisms applicable end item s lower instrument panel lip built to print i built or attempted to build the part to the openhornet print without any deviations i am not building this part to the oh print list deviations in detail in the miscellaneous info text area below miscellaneous info no response
0
210,329
16,095,932,506
IssuesEvent
2021-04-26 23:43:37
GaloisInc/saw-script
https://api.github.com/repos/GaloisInc/saw-script
closed
Uninformative error message for uninterpreted polymorphic functions
error-messages maybe-fixed needs test
I was trying a proof using `unint_yices` and encountered this error message Could not create sbv argument for Vec (64) (Bool)) The actual problem, as I eventually found out, is that I had asked for a polymorphic function to be uninterpreted. This error message is not much help in figuring that out!
1.0
Uninformative error message for uninterpreted polymorphic functions - I was trying a proof using `unint_yices` and encountered this error message Could not create sbv argument for Vec (64) (Bool)) The actual problem, as I eventually found out, is that I had asked for a polymorphic function to be uninterpreted. This error message is not much help in figuring that out!
non_main
uninformative error message for uninterpreted polymorphic functions i was trying a proof using unint yices and encountered this error message could not create sbv argument for vec bool the actual problem as i eventually found out is that i had asked for a polymorphic function to be uninterpreted this error message is not much help in figuring that out
0
44,108
5,584,191,059
IssuesEvent
2017-03-29 03:46:07
flutter/flutter
https://api.github.com/repos/flutter/flutter
opened
Our build times have become very long
dev: tests dev: tool - gradle performance
We had to increase the timeout of the `microbenchmarks` test. Speculatively, this seems to have coincided with the introduction of Gradle. Perhaps there's some incremental build feature we're not taking advantage of. ``` cd dev/devicelab ../../bin/cache/dart-sdk/bin/dart bin/run.dart -t bin/tasks/microbenchmarks ``` ## Logs [0b566b3](https://flutter-dashboard.appspot.com/api/get-log?ownerKey=ahNzfmZsdXR0ZXItZGFzaGJvYXJkclgLEglDaGVja2xpc3QiOGZsdXR0ZXIvZmx1dHRlci8wYjU2NmIzODUwY2E4MWRlNzdkYzFiOTQzMjdkNzFhZDg5M2E2OWI0DAsSBFRhc2sYgICAgICA0AoM)
1.0
Our build times have become very long - We had to increase the timeout of the `microbenchmarks` test. Speculatively, this seems to have coincided with the introduction of Gradle. Perhaps there's some incremental build feature we're not taking advantage of. ``` cd dev/devicelab ../../bin/cache/dart-sdk/bin/dart bin/run.dart -t bin/tasks/microbenchmarks ``` ## Logs [0b566b3](https://flutter-dashboard.appspot.com/api/get-log?ownerKey=ahNzfmZsdXR0ZXItZGFzaGJvYXJkclgLEglDaGVja2xpc3QiOGZsdXR0ZXIvZmx1dHRlci8wYjU2NmIzODUwY2E4MWRlNzdkYzFiOTQzMjdkNzFhZDg5M2E2OWI0DAsSBFRhc2sYgICAgICA0AoM)
non_main
our build times have become very long we had to increase the timeout of the microbenchmarks test speculatively this seems to have coincided with the introduction of gradle perhaps there s some incremental build feature we re not taking advantage of cd dev devicelab bin cache dart sdk bin dart bin run dart t bin tasks microbenchmarks logs
0
258
3,008,044,488
IssuesEvent
2015-07-27 19:10:07
borisblizzard/arcreator
https://api.github.com/repos/borisblizzard/arcreator
closed
Refactor the _arc_panel_info* properties
Editor Related Maintainability
The panel dispatcher uses class properties of the panel to be dispatched like `_arc_panel_info_string` and `_arc_panel_info_data` to determine how the resulting panel should be displayed in the AUI interface, where it should be initially positioned etc. these properties should be refactored into a single dict property. this will make them easier to manage (and it will look better)
True
Refactor the _arc_panel_info* properties - The panel dispatcher uses class properties of the panel to be dispatched like `_arc_panel_info_string` and `_arc_panel_info_data` to determine how the resulting panel should be displayed in the AUI interface, where it should be initially positioned etc. these properties should be refactored into a single dict property. this will make them easier to manage (and it will look better)
main
refactor the arc panel info properties the panel dispatcher uses class properties of the panel to be dispatched like arc panel info string and arc panel info data to determine how the resulting panel should be displayed in the aui interface where it should be initially positioned etc these properties should be refactored into a single dict property this will make them easier to manage and it will look better
1
1,304
5,542,120,510
IssuesEvent
2017-03-22 14:25:15
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
lineinfile module : insertbefore should insert before first match of specified regular expression
affects_2.0 bug_report waiting_on_maintainer
<!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME lineinfile module ##### ANSIBLE VERSION ``` ansible 2.0.1.0 ``` ##### CONFIGURATION <!--- --> ##### OS / ENVIRONMENT Ubuntu ##### SUMMARY insertbefore : If specified, the line will be inserted before the last match of specified regular expression insertafter : If specified, the line will be inserted after the last match of specified regular expression It seems to me that insertbefore should insert before the first match of the specified regexp. This would be consistent with the way that insertafter works (the reverse). In our example we are adding lines to sshd_config that need to go before any ^Match blocks. If there are multiple match blocks then insertbefore won't create a valid sshd_config ##### STEPS TO REPRODUCE `````` lineinfile: dest=/etc/ssh/sshd_config regexp="^#?PermitUserEnvironment" line="PermitUserEnvironment no" insertbefore="^Match" state=present``` `````` ##### EXPECTED RESULTS If more than one Match block is in /etc/ssh/sshd_config, the line should be inserted before the first. ##### ACTUAL RESULTS ssh throws an error: ``` /etc/ssh/sshd_config line 94: Directive 'PermitUserEnvironment' is not allowed within a Match block ```
True
lineinfile module : insertbefore should insert before first match of specified regular expression - <!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME lineinfile module ##### ANSIBLE VERSION ``` ansible 2.0.1.0 ``` ##### CONFIGURATION <!--- --> ##### OS / ENVIRONMENT Ubuntu ##### SUMMARY insertbefore : If specified, the line will be inserted before the last match of specified regular expression insertafter : If specified, the line will be inserted after the last match of specified regular expression It seems to me that insertbefore should insert before the first match of the specified regexp. This would be consistent with the way that insertafter works (the reverse). In our example we are adding lines to sshd_config that need to go before any ^Match blocks. If there are multiple match blocks then insertbefore won't create a valid sshd_config ##### STEPS TO REPRODUCE `````` lineinfile: dest=/etc/ssh/sshd_config regexp="^#?PermitUserEnvironment" line="PermitUserEnvironment no" insertbefore="^Match" state=present``` `````` ##### EXPECTED RESULTS If more than one Match block is in /etc/ssh/sshd_config, the line should be inserted before the first. ##### ACTUAL RESULTS ssh throws an error: ``` /etc/ssh/sshd_config line 94: Directive 'PermitUserEnvironment' is not allowed within a Match block ```
main
lineinfile module insertbefore should insert before first match of specified regular expression issue type bug report component name lineinfile module ansible version ansible configuration os environment ubuntu summary insertbefore if specified the line will be inserted before the last match of specified regular expression insertafter if specified the line will be inserted after the last match of specified regular expression it seems to me that insertbefore should insert before the first match of the specified regexp this would be consistent with the way that insertafter works the reverse in our example we are adding lines to sshd config that need to go before any match blocks if there are multiple match blocks then insertbefore won t create a valid sshd config steps to reproduce lineinfile dest etc ssh sshd config regexp permituserenvironment line permituserenvironment no insertbefore match state present expected results if more than one match block is in etc ssh sshd config the line should be inserted before the first actual results ssh throws an error etc ssh sshd config line directive permituserenvironment is not allowed within a match block
1
380,676
26,428,204,210
IssuesEvent
2023-01-14 13:18:43
moleculerjs/moleculer
https://api.github.com/repos/moleculerjs/moleculer
closed
Documentation on how-to-use-in-production
Type: Documentation Type: Question help wanted
I appreciate your hardwork and this seems to be a very promising upcoming framework for creating microservices. I have tried `Senecajs` and I didn't quite like the architecture. On other hand, `moleculer` is great but I still didn't get how to use it in production. Here are following use cases. - How am I supposed to start a service and keep it running in background. Like pm2 does. What are the dependencies needed to do so. Can you give minimalistic example? - How to call an action from another script. Do we need to create a broker? Is using event emit/broadcast meant for this thing? - Is there any molecular-cli commands (not molecular-repl) to see registered services? - What is `$node` service and why it's important? - What are `Transporters` and why they are important? - `built-in service registry & auto discovery` -| How does it work? - `moleculer-cli`, `moleculer-runner`, `molecular-repl` are all CLI interfaces. It's very hard to keep up. Can't we just combine them all. I don't have in depth knowledge of all the necessary building blocks of this module and I am sure other people would also like to know about this. Most of the documentation on the website contains configuration and API information. But I am still confused about how to use it in real life.
1.0
Documentation on how-to-use-in-production - I appreciate your hardwork and this seems to be a very promising upcoming framework for creating microservices. I have tried `Senecajs` and I didn't quite like the architecture. On other hand, `moleculer` is great but I still didn't get how to use it in production. Here are following use cases. - How am I supposed to start a service and keep it running in background. Like pm2 does. What are the dependencies needed to do so. Can you give minimalistic example? - How to call an action from another script. Do we need to create a broker? Is using event emit/broadcast meant for this thing? - Is there any molecular-cli commands (not molecular-repl) to see registered services? - What is `$node` service and why it's important? - What are `Transporters` and why they are important? - `built-in service registry & auto discovery` -| How does it work? - `moleculer-cli`, `moleculer-runner`, `molecular-repl` are all CLI interfaces. It's very hard to keep up. Can't we just combine them all. I don't have in depth knowledge of all the necessary building blocks of this module and I am sure other people would also like to know about this. Most of the documentation on the website contains configuration and API information. But I am still confused about how to use it in real life.
non_main
documentation on how to use in production i appreciate your hardwork and this seems to be a very promising upcoming framework for creating microservices i have tried senecajs and i didn t quite like the architecture on other hand moleculer is great but i still didn t get how to use it in production here are following use cases how am i supposed to start a service and keep it running in background like does what are the dependencies needed to do so can you give minimalistic example how to call an action from another script do we need to create a broker is using event emit broadcast meant for this thing is there any molecular cli commands not molecular repl to see registered services what is node service and why it s important what are transporters and why they are important built in service registry auto discovery how does it work moleculer cli moleculer runner molecular repl are all cli interfaces it s very hard to keep up can t we just combine them all i don t have in depth knowledge of all the necessary building blocks of this module and i am sure other people would also like to know about this most of the documentation on the website contains configuration and api information but i am still confused about how to use it in real life
0
288,032
24,882,269,368
IssuesEvent
2022-10-28 03:01:03
MPMG-DCC-UFMG/F01
https://api.github.com/repos/MPMG-DCC-UFMG/F01
closed
Teste de generalizacao para a tag Orçamento - Legislação - Araújos
generalization test development template - Memory (66) tag - Orçamento subtag - Legislação
DoD: Realizar o teste de Generalização do validador da tag Orçamento - Legislação para o Município de Araújos.
1.0
Teste de generalizacao para a tag Orçamento - Legislação - Araújos - DoD: Realizar o teste de Generalização do validador da tag Orçamento - Legislação para o Município de Araújos.
non_main
teste de generalizacao para a tag orçamento legislação araújos dod realizar o teste de generalização do validador da tag orçamento legislação para o município de araújos
0
297,720
22,389,707,902
IssuesEvent
2022-06-17 06:11:26
apache/camel-quarkus
https://api.github.com/repos/apache/camel-quarkus
closed
Improve openapi-java component documentation
documentation
It's user responsibility to register all model classes for reflection. We need to document it.
1.0
Improve openapi-java component documentation - It's user responsibility to register all model classes for reflection. We need to document it.
non_main
improve openapi java component documentation it s user responsibility to register all model classes for reflection we need to document it
0
139,443
18,853,443,152
IssuesEvent
2021-11-12 01:01:26
shaimael/WebGoat8
https://api.github.com/repos/shaimael/WebGoat8
opened
CVE-2021-43466 (Medium) detected in thymeleaf-spring5-3.0.11.RELEASE.jar
security vulnerability
## CVE-2021-43466 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>thymeleaf-spring5-3.0.11.RELEASE.jar</b></p></summary> <p>Modern server-side Java template engine for both web and standalone environments</p> <p>Library home page: <a href="http://www.thymeleaf.org">http://www.thymeleaf.org</a></p> <p>Path to dependency file: WebGoat8/webgoat-integration-tests/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/thymeleaf/thymeleaf-spring5/3.0.11.RELEASE/thymeleaf-spring5-3.0.11.RELEASE.jar</p> <p> Dependency Hierarchy: - webwolf-v8.1.0.jar (Root Library) - spring-boot-starter-thymeleaf-2.2.2.RELEASE.jar - :x: **thymeleaf-spring5-3.0.11.RELEASE.jar** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In the thymeleaf-spring5:3.0.12 component, thymeleaf combined with specific scenarios in template injection may lead to remote code execution. <p>Publish Date: 2021-11-09 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43466>CVE-2021-43466</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: N/A - Attack Complexity: N/A - Privileges Required: N/A - User Interaction: N/A - Scope: N/A - Impact Metrics: - Confidentiality Impact: N/A - Integrity Impact: N/A - Availability Impact: N/A </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.thymeleaf","packageName":"thymeleaf-spring5","packageVersion":"3.0.11.RELEASE","packageFilePaths":["/webgoat-integration-tests/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.owasp.webgoat:webwolf:v8.1.0;org.springframework.boot:spring-boot-starter-thymeleaf:2.2.2.RELEASE;org.thymeleaf:thymeleaf-spring5:3.0.11.RELEASE","isMinimumFixVersionAvailable":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2021-43466","vulnerabilityDetails":"In the thymeleaf-spring5:3.0.12 component, thymeleaf combined with specific scenarios in template injection may lead to remote code execution.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43466","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"N/A","AC":"N/A","PR":"N/A","S":"N/A","C":"N/A","UI":"N/A","AV":"N/A","I":"N/A"},"extraData":{}}</REMEDIATE> -->
True
CVE-2021-43466 (Medium) detected in thymeleaf-spring5-3.0.11.RELEASE.jar - ## CVE-2021-43466 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>thymeleaf-spring5-3.0.11.RELEASE.jar</b></p></summary> <p>Modern server-side Java template engine for both web and standalone environments</p> <p>Library home page: <a href="http://www.thymeleaf.org">http://www.thymeleaf.org</a></p> <p>Path to dependency file: WebGoat8/webgoat-integration-tests/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/thymeleaf/thymeleaf-spring5/3.0.11.RELEASE/thymeleaf-spring5-3.0.11.RELEASE.jar</p> <p> Dependency Hierarchy: - webwolf-v8.1.0.jar (Root Library) - spring-boot-starter-thymeleaf-2.2.2.RELEASE.jar - :x: **thymeleaf-spring5-3.0.11.RELEASE.jar** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In the thymeleaf-spring5:3.0.12 component, thymeleaf combined with specific scenarios in template injection may lead to remote code execution. <p>Publish Date: 2021-11-09 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43466>CVE-2021-43466</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: N/A - Attack Complexity: N/A - Privileges Required: N/A - User Interaction: N/A - Scope: N/A - Impact Metrics: - Confidentiality Impact: N/A - Integrity Impact: N/A - Availability Impact: N/A </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.thymeleaf","packageName":"thymeleaf-spring5","packageVersion":"3.0.11.RELEASE","packageFilePaths":["/webgoat-integration-tests/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.owasp.webgoat:webwolf:v8.1.0;org.springframework.boot:spring-boot-starter-thymeleaf:2.2.2.RELEASE;org.thymeleaf:thymeleaf-spring5:3.0.11.RELEASE","isMinimumFixVersionAvailable":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2021-43466","vulnerabilityDetails":"In the thymeleaf-spring5:3.0.12 component, thymeleaf combined with specific scenarios in template injection may lead to remote code execution.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43466","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"N/A","AC":"N/A","PR":"N/A","S":"N/A","C":"N/A","UI":"N/A","AV":"N/A","I":"N/A"},"extraData":{}}</REMEDIATE> -->
non_main
cve medium detected in thymeleaf release jar cve medium severity vulnerability vulnerable library thymeleaf release jar modern server side java template engine for both web and standalone environments library home page a href path to dependency file webgoat integration tests pom xml path to vulnerable library home wss scanner repository org thymeleaf thymeleaf release thymeleaf release jar dependency hierarchy webwolf jar root library spring boot starter thymeleaf release jar x thymeleaf release jar vulnerable library found in base branch main vulnerability details in the thymeleaf component thymeleaf combined with specific scenarios in template injection may lead to remote code execution publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree org owasp webgoat webwolf org springframework boot spring boot starter thymeleaf release org thymeleaf thymeleaf release isminimumfixversionavailable false basebranches vulnerabilityidentifier cve vulnerabilitydetails in the thymeleaf component thymeleaf combined with specific scenarios in template injection may lead to remote code execution vulnerabilityurl
0
184,237
14,972,909,086
IssuesEvent
2021-01-27 23:50:41
syl20bnr/spacemacs
https://api.github.com/repos/syl20bnr/spacemacs
closed
golang readme on develop.spacemacs.org is outdated
- Bug tracker - Documentation ✏ Fixed in develop Website stale
gometalinter's [repo](https://github.com/alecthomas/gometalinter/blob/master/README.md) has been archived, all support has been stopped, let's remove it as an option in [readme](http://develop.spacemacs.org/layers/+lang/go/README.html)? Or at least let's make it less visible and not the first option. Instead, let's make golangci-lint more visible.
1.0
golang readme on develop.spacemacs.org is outdated - gometalinter's [repo](https://github.com/alecthomas/gometalinter/blob/master/README.md) has been archived, all support has been stopped, let's remove it as an option in [readme](http://develop.spacemacs.org/layers/+lang/go/README.html)? Or at least let's make it less visible and not the first option. Instead, let's make golangci-lint more visible.
non_main
golang readme on develop spacemacs org is outdated gometalinter s has been archived all support has been stopped let s remove it as an option in or at least let s make it less visible and not the first option instead let s make golangci lint more visible
0
231,037
18,734,476,899
IssuesEvent
2021-11-04 04:29:12
sarahrudy/i-wanna-live-there
https://api.github.com/repos/sarahrudy/i-wanna-live-there
opened
[🧑‍💻] data load flows : page load
testing
User should see a slider of city images on page load: 😀 images of cities appear to slide across the page ☹️ a 500 error message or a broken image placeholder
1.0
[🧑‍💻] data load flows : page load - User should see a slider of city images on page load: 😀 images of cities appear to slide across the page ☹️ a 500 error message or a broken image placeholder
non_main
data load flows page load user should see a slider of city images on page load 😀 images of cities appear to slide across the page ☹️ a error message or a broken image placeholder
0
184,757
32,041,955,112
IssuesEvent
2023-09-22 20:11:59
patternfly/patternfly-design
https://api.github.com/repos/patternfly/patternfly-design
opened
Penta: designs for charts
Visual design Feature Penta PF6
## Requesting new features, enhancements, or design changes to PatternFly Determine if the color palette for charts changes for Penta and what the new palette is. Determine if there are other chart changes required for Penta and what they are. Core follow on: https://github.com/patternfly/patternfly/issues/5949
1.0
Penta: designs for charts - ## Requesting new features, enhancements, or design changes to PatternFly Determine if the color palette for charts changes for Penta and what the new palette is. Determine if there are other chart changes required for Penta and what they are. Core follow on: https://github.com/patternfly/patternfly/issues/5949
non_main
penta designs for charts requesting new features enhancements or design changes to patternfly determine if the color palette for charts changes for penta and what the new palette is determine if there are other chart changes required for penta and what they are core follow on
0
47,760
5,914,937,612
IssuesEvent
2017-05-22 05:53:37
Microsoft/vsts-tasks
https://api.github.com/repos/Microsoft/vsts-tasks
closed
VSTest: Test Impact Analysis not working for x64 Assemblies
Area: Test
The step where tests are discovered (**ListFullyQualifiedTests**) does not pass the platform variable to vstest.console.exe resulting in the following: > Test run will use DLL(s) built for framework Framework45 and platform X86. Following DLL(s) will not be part of run: > _{list of x64 assemblies}_ are built for Framework Framework45 and Platform X64. I tested on the build agent, and executing the exact same command-line, but with **/Platform:x64** added, results in expected behaviour.
1.0
VSTest: Test Impact Analysis not working for x64 Assemblies - The step where tests are discovered (**ListFullyQualifiedTests**) does not pass the platform variable to vstest.console.exe resulting in the following: > Test run will use DLL(s) built for framework Framework45 and platform X86. Following DLL(s) will not be part of run: > _{list of x64 assemblies}_ are built for Framework Framework45 and Platform X64. I tested on the build agent, and executing the exact same command-line, but with **/Platform:x64** added, results in expected behaviour.
non_main
vstest test impact analysis not working for assemblies the step where tests are discovered listfullyqualifiedtests does not pass the platform variable to vstest console exe resulting in the following test run will use dll s built for framework and platform following dll s will not be part of run list of assemblies are built for framework and platform i tested on the build agent and executing the exact same command line but with platform added results in expected behaviour
0
3,327
12,888,984,334
IssuesEvent
2020-07-13 13:52:01
ipfs/pinning-services-api-spec
https://api.github.com/repos/ipfs/pinning-services-api-spec
opened
Finalizing MVP API Spec for IPFS WebUI integration
P0 dif/expert effort/days kind/maintenance need/community-input need/maintainers-input
### About This issue tracks overall finalization status of this spec from the perspective of being ready for stakeholders to start implementation of basic functionality. cc @jacobheun @pooja @jessicaschilling ### Stakeholders - Pinning Services implementing the API - IPFS Core Impl. WG implementing API client in go-ipfs and js-ipfs - IPFS GUI Team implementing UI in WebUI / IPFS Desktop app ### Open Issues MVP Milestone view: https://github.com/ipfs/pinning-services-api-spec/milestone/1 We need to resolve these issues before API can be implemented. ### Stakeholder Sign-offs - [ ] IPFS Pinning Services - PS list TBD - [ ] IPFS Core Impl. WG - [ ] IPFS GUI Team implementing UI in WebUI / IPFS Desktop app
True
Finalizing MVP API Spec for IPFS WebUI integration - ### About This issue tracks overall finalization status of this spec from the perspective of being ready for stakeholders to start implementation of basic functionality. cc @jacobheun @pooja @jessicaschilling ### Stakeholders - Pinning Services implementing the API - IPFS Core Impl. WG implementing API client in go-ipfs and js-ipfs - IPFS GUI Team implementing UI in WebUI / IPFS Desktop app ### Open Issues MVP Milestone view: https://github.com/ipfs/pinning-services-api-spec/milestone/1 We need to resolve these issues before API can be implemented. ### Stakeholder Sign-offs - [ ] IPFS Pinning Services - PS list TBD - [ ] IPFS Core Impl. WG - [ ] IPFS GUI Team implementing UI in WebUI / IPFS Desktop app
main
finalizing mvp api spec for ipfs webui integration about this issue tracks overall finalization status of this spec from the perspective of being ready for stakeholders to start implementation of basic functionality cc jacobheun pooja jessicaschilling stakeholders pinning services implementing the api ipfs core impl wg implementing api client in go ipfs and js ipfs ipfs gui team implementing ui in webui ipfs desktop app open issues mvp milestone view we need to resolve these issues before api can be implemented stakeholder sign offs ipfs pinning services ps list tbd ipfs core impl wg ipfs gui team implementing ui in webui ipfs desktop app
1
5,305
26,791,633,301
IssuesEvent
2023-02-01 08:58:12
rustsec/advisory-db
https://api.github.com/repos/rustsec/advisory-db
closed
`daemonize` unmaintained
Unmaintained
There has been issue asking for new release since 2021-09: https://github.com/knsd/daemonize/issues/46 No commits, no communication, etc. from the author since 2021-07. There's also issue https://github.com/knsd/daemonize/issues/44#issuecomment-887287257 from 2021-07 where author promised new version in two weeks, however that never happened. Also ping @knsd, is `daemonize` crate still maintained?
True
`daemonize` unmaintained - There has been issue asking for new release since 2021-09: https://github.com/knsd/daemonize/issues/46 No commits, no communication, etc. from the author since 2021-07. There's also issue https://github.com/knsd/daemonize/issues/44#issuecomment-887287257 from 2021-07 where author promised new version in two weeks, however that never happened. Also ping @knsd, is `daemonize` crate still maintained?
main
daemonize unmaintained there has been issue asking for new release since no commits no communication etc from the author since there s also issue from where author promised new version in two weeks however that never happened also ping knsd is daemonize crate still maintained
1
3,154
12,180,032,310
IssuesEvent
2020-04-28 11:45:11
vicky002/AlgoWiki
https://api.github.com/repos/vicky002/AlgoWiki
opened
Looking for maintainers
Looking For Maintainers help wanted
Hey, Awesome devs! I've been extremely busy lately with my different startups. I'm looking for someone who is ready to maintain this repo, handle PR, make sure everything is properly structured and all changes and content are added in their respective folders. If you would like to contribute and become a maintainer of this amazing resource repository that U have been collecting over years, please comment on this issue with the following details: 1. What is your name? 2. Where are you located? 3. Are you maintaining any other similar repositories? 4. Why would you like to become an AlgoWiki maintainer? 5. Are you willing to give 2-3 hours every week to this project and make sure all user comments, PR's are reviewed regularly? 6. How can we make this repo accessible to more people? How can we bring more developer's attention to this Repository? --- I will close this issue in a week and the top 3 developers based on the response will be appointed as the maintainer of this repository.
True
Looking for maintainers - Hey, Awesome devs! I've been extremely busy lately with my different startups. I'm looking for someone who is ready to maintain this repo, handle PR, make sure everything is properly structured and all changes and content are added in their respective folders. If you would like to contribute and become a maintainer of this amazing resource repository that U have been collecting over years, please comment on this issue with the following details: 1. What is your name? 2. Where are you located? 3. Are you maintaining any other similar repositories? 4. Why would you like to become an AlgoWiki maintainer? 5. Are you willing to give 2-3 hours every week to this project and make sure all user comments, PR's are reviewed regularly? 6. How can we make this repo accessible to more people? How can we bring more developer's attention to this Repository? --- I will close this issue in a week and the top 3 developers based on the response will be appointed as the maintainer of this repository.
main
looking for maintainers hey awesome devs i ve been extremely busy lately with my different startups i m looking for someone who is ready to maintain this repo handle pr make sure everything is properly structured and all changes and content are added in their respective folders if you would like to contribute and become a maintainer of this amazing resource repository that u have been collecting over years please comment on this issue with the following details what is your name where are you located are you maintaining any other similar repositories why would you like to become an algowiki maintainer are you willing to give hours every week to this project and make sure all user comments pr s are reviewed regularly how can we make this repo accessible to more people how can we bring more developer s attention to this repository i will close this issue in a week and the top developers based on the response will be appointed as the maintainer of this repository
1
414,328
12,102,274,772
IssuesEvent
2020-04-20 16:26:09
qutebrowser/qutebrowser
https://api.github.com/repos/qutebrowser/qutebrowser
closed
Make fullscreen notification overlay timeout configurable
component: config easy priority: 1 - middle
Currently, the `FullscreenNotification` hide timeout is hardcoded to 3s (in `webenginetab.py` which creates the object). Instead, it should be configurable, including a value (probably 0 or -1 or None, depending on how other settings do this) to not show it at all. Also, it might be good to rename `content.windowed_fullscreen` to `content.fullscreen.window` if we name this something like `content.fullscreen.overlay_timeout`. See https://www.reddit.com/r/qutebrowser/comments/ezxjtl/how_to_hide_fullscreen_notification/
1.0
Make fullscreen notification overlay timeout configurable - Currently, the `FullscreenNotification` hide timeout is hardcoded to 3s (in `webenginetab.py` which creates the object). Instead, it should be configurable, including a value (probably 0 or -1 or None, depending on how other settings do this) to not show it at all. Also, it might be good to rename `content.windowed_fullscreen` to `content.fullscreen.window` if we name this something like `content.fullscreen.overlay_timeout`. See https://www.reddit.com/r/qutebrowser/comments/ezxjtl/how_to_hide_fullscreen_notification/
non_main
make fullscreen notification overlay timeout configurable currently the fullscreennotification hide timeout is hardcoded to in webenginetab py which creates the object instead it should be configurable including a value probably or or none depending on how other settings do this to not show it at all also it might be good to rename content windowed fullscreen to content fullscreen window if we name this something like content fullscreen overlay timeout see
0
153,865
12,167,376,868
IssuesEvent
2020-04-27 10:48:38
kubeflow/tf-operator
https://api.github.com/repos/kubeflow/tf-operator
closed
mnist test isn't part of CI
api/v1alpha2 area/0.4.0 area/testing lifecycle/stale priority/p2
An mnist test was added but this isn't integrated with our CI system. https://github.com/kubeflow/tf-operator/tree/master/test/e2e/dist-mnist Ideally we'd run this as part of an Argo workflow that would build and deploy the latest version of the TFJob operator /cc @ScorpioCPH /priority p2
1.0
mnist test isn't part of CI - An mnist test was added but this isn't integrated with our CI system. https://github.com/kubeflow/tf-operator/tree/master/test/e2e/dist-mnist Ideally we'd run this as part of an Argo workflow that would build and deploy the latest version of the TFJob operator /cc @ScorpioCPH /priority p2
non_main
mnist test isn t part of ci an mnist test was added but this isn t integrated with our ci system ideally we d run this as part of an argo workflow that would build and deploy the latest version of the tfjob operator cc scorpiocph priority
0
38,150
2,839,626,206
IssuesEvent
2015-05-27 14:37:57
Kunstmaan/KunstmaanBundlesCMS
https://api.github.com/repos/Kunstmaan/KunstmaanBundlesCMS
closed
Sort sub entities in pageparts is not working
Priority: Normal Profile: Frontend Type: Bugfix
If you make sub entities sortable, you have JS errors in console and sort does not work.
1.0
Sort sub entities in pageparts is not working - If you make sub entities sortable, you have JS errors in console and sort does not work.
non_main
sort sub entities in pageparts is not working if you make sub entities sortable you have js errors in console and sort does not work
0
258,485
22,322,418,450
IssuesEvent
2022-06-14 07:44:21
pingcap/tidb
https://api.github.com/repos/pingcap/tidb
opened
The generation for test cases in the planner/core/testdata package can be improved
type/enhancement sig/planner component/test
## Enhancement Now, if we want to record the test cases result in the planner/core/testdata package, we can use the argument `--record` when we run the test cases. But it will clear the other test cases results which have been already recorded. So we should improve the generation for the test cases.
1.0
The generation for test cases in the planner/core/testdata package can be improved - ## Enhancement Now, if we want to record the test cases result in the planner/core/testdata package, we can use the argument `--record` when we run the test cases. But it will clear the other test cases results which have been already recorded. So we should improve the generation for the test cases.
non_main
the generation for test cases in the planner core testdata package can be improved enhancement now if we want to record the test cases result in the planner core testdata package we can use the argument record when we run the test cases but it will clear the other test cases results which have been already recorded so we should improve the generation for the test cases
0
225,516
7,482,163,269
IssuesEvent
2018-04-04 23:42:06
ngxs/store
https://api.github.com/repos/ngxs/store
closed
Select Raw Value
domain:core priority:1 type:feature
In some cases, you need the ability to select a raw value from the store. In the use case below, we have a store that is backed by localstorage that contains the jwt token, we need the ability to get the RAW value from the token to pass in the request object. ``` @Injectable() export class JWTInterceptor implements HttpInterceptor { intercept(req: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> { // NEED TO GET RAW TOKEN VALUE FROM STORE req = req.clone({ setHeaders: { Authorization: `Bearer ${token}` } }); return next.handle(req); } } ``` For this API, I'm thinking about having it only on the the store instance, so it might look like: ``` store.selectValue(v => v.token); ``` Open to suggestions for API though.
1.0
Select Raw Value - In some cases, you need the ability to select a raw value from the store. In the use case below, we have a store that is backed by localstorage that contains the jwt token, we need the ability to get the RAW value from the token to pass in the request object. ``` @Injectable() export class JWTInterceptor implements HttpInterceptor { intercept(req: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> { // NEED TO GET RAW TOKEN VALUE FROM STORE req = req.clone({ setHeaders: { Authorization: `Bearer ${token}` } }); return next.handle(req); } } ``` For this API, I'm thinking about having it only on the the store instance, so it might look like: ``` store.selectValue(v => v.token); ``` Open to suggestions for API though.
non_main
select raw value in some cases you need the ability to select a raw value from the store in the use case below we have a store that is backed by localstorage that contains the jwt token we need the ability to get the raw value from the token to pass in the request object injectable export class jwtinterceptor implements httpinterceptor intercept req httprequest next httphandler observable need to get raw token value from store req req clone setheaders authorization bearer token return next handle req for this api i m thinking about having it only on the the store instance so it might look like store selectvalue v v token open to suggestions for api though
0
69,310
9,292,966,556
IssuesEvent
2019-03-22 06:00:11
nextbitlabs/Rapido
https://api.github.com/repos/nextbitlabs/Rapido
closed
Why CSS classes are out of scope?
Documentation under review
Please better explains the advantages of not introducing CSS classes.
1.0
Why CSS classes are out of scope? - Please better explains the advantages of not introducing CSS classes.
non_main
why css classes are out of scope please better explains the advantages of not introducing css classes
0
17,360
3,001,591,405
IssuesEvent
2015-07-24 12:31:23
alex-klock/razor-mediator-4-tridion
https://api.github.com/repos/alex-klock/razor-mediator-4-tridion
closed
How to add page directives in Razor
auto-migrated Priority-Medium question Type-Defect
``` How to add a a page directives in razor templates for SDL Tridion 2011 as below <%@page pageEncoding="UTF-8" %> ``` Original issue reported on code.google.com by `k.sheika...@gmail.com` on 25 Feb 2014 at 11:06
1.0
How to add page directives in Razor - ``` How to add a a page directives in razor templates for SDL Tridion 2011 as below <%@page pageEncoding="UTF-8" %> ``` Original issue reported on code.google.com by `k.sheika...@gmail.com` on 25 Feb 2014 at 11:06
non_main
how to add page directives in razor how to add a a page directives in razor templates for sdl tridion as below original issue reported on code google com by k sheika gmail com on feb at
0
1,683
6,574,154,290
IssuesEvent
2017-09-11 11:43:59
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
Update URI documentation to remove deprecated example and add new one
affects_2.1 docs_report waiting_on_maintainer
<!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Documentation Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> uri ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` 2.1.2.0 ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> Trusty ##### SUMMARY <!--- Explain the problem briefly --> In the documentation for the uri module (http://docs.ansible.com/ansible/uri_module.html) the examples only show examples for HEADER_, which is deprecated as of 2.1 in favour of the "headers" argument. I forked the repo and set upI'm not really sure how to create a PR. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> <!--- Paste example playbooks or commands between quotes below --> ``` - HEADER_Content-Type: "application/x-www-form-urlencoded" + headers: "{'Content-Type': 'application/json'}" ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> Example showing how to use the headers argument ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> It didn't show it <!--- Paste verbatim command output between quotes below --> ``` ```
True
Update URI documentation to remove deprecated example and add new one - <!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Documentation Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> uri ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` 2.1.2.0 ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> Trusty ##### SUMMARY <!--- Explain the problem briefly --> In the documentation for the uri module (http://docs.ansible.com/ansible/uri_module.html) the examples only show examples for HEADER_, which is deprecated as of 2.1 in favour of the "headers" argument. I forked the repo and set upI'm not really sure how to create a PR. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> <!--- Paste example playbooks or commands between quotes below --> ``` - HEADER_Content-Type: "application/x-www-form-urlencoded" + headers: "{'Content-Type': 'application/json'}" ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> Example showing how to use the headers argument ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> It didn't show it <!--- Paste verbatim command output between quotes below --> ``` ```
main
update uri documentation to remove deprecated example and add new one issue type documentation report component name uri ansible version configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific trusty summary in the documentation for the uri module the examples only show examples for header which is deprecated as of in favour of the headers argument i forked the repo and set upi m not really sure how to create a pr steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used header content type application x www form urlencoded headers content type application json expected results example showing how to use the headers argument actual results it didn t show it
1
105,770
23,111,171,666
IssuesEvent
2022-07-27 13:09:55
FerretDB/FerretDB
https://api.github.com/repos/FerretDB/FerretDB
closed
`tjson`: Fix schema comparison
code/chore code/tigris
This task is a part of #683 epic. 🎯 Currently, schema comparison method `func (s *Schema) Equal(other *Schema) bool` is implemented with `reflect.DeepEqual`. The goal of this task is to come up with a better approach. The target code is located [here](https://github.com/FerretDB/FerretDB/blob/22f1fdbd0b80bdcaee4ae4486296b5fb7ef54612/internal/tjson/schema.go#L131). Tigris' data types [consist of type and format](https://docs.tigrisdata.com/overview/schema#data-types). For `int64` and `double` format is not required but might be set. Having that in mind, the following schemas must be equal: ```go doubleSchema = &Schema{ Type: Number, } doubleSchemaWithFormat = &Schema{ Type: Number, Format: Double, } ``` As well as: ```go int64Schema = &Schema{ Type: Integer, } int64SchemaWithFormat = &Schema{ Type: Integer, Format: Int64, } ``` We need to fix comparison to make such schemas equal. Checklist: - [x] Have a look at the [Tigris doc](https://docs.tigrisdata.com/overview/schema#data-types) once again to make sure that `int64` and `double` are the only cases that can cause different schemas need to be equal. - [x] Implement the changes and add tests to check that the comparison works equally for the different schemas that describe the same data type.
2.0
`tjson`: Fix schema comparison - This task is a part of #683 epic. 🎯 Currently, schema comparison method `func (s *Schema) Equal(other *Schema) bool` is implemented with `reflect.DeepEqual`. The goal of this task is to come up with a better approach. The target code is located [here](https://github.com/FerretDB/FerretDB/blob/22f1fdbd0b80bdcaee4ae4486296b5fb7ef54612/internal/tjson/schema.go#L131). Tigris' data types [consist of type and format](https://docs.tigrisdata.com/overview/schema#data-types). For `int64` and `double` format is not required but might be set. Having that in mind, the following schemas must be equal: ```go doubleSchema = &Schema{ Type: Number, } doubleSchemaWithFormat = &Schema{ Type: Number, Format: Double, } ``` As well as: ```go int64Schema = &Schema{ Type: Integer, } int64SchemaWithFormat = &Schema{ Type: Integer, Format: Int64, } ``` We need to fix comparison to make such schemas equal. Checklist: - [x] Have a look at the [Tigris doc](https://docs.tigrisdata.com/overview/schema#data-types) once again to make sure that `int64` and `double` are the only cases that can cause different schemas need to be equal. - [x] Implement the changes and add tests to check that the comparison works equally for the different schemas that describe the same data type.
non_main
tjson fix schema comparison this task is a part of epic 🎯 currently schema comparison method func s schema equal other schema bool is implemented with reflect deepequal the goal of this task is to come up with a better approach the target code is located tigris data types for and double format is not required but might be set having that in mind the following schemas must be equal go doubleschema schema type number doubleschemawithformat schema type number format double as well as go schema type integer schema type integer format we need to fix comparison to make such schemas equal checklist have a look at the once again to make sure that and double are the only cases that can cause different schemas need to be equal implement the changes and add tests to check that the comparison works equally for the different schemas that describe the same data type
0
26,175
12,880,033,554
IssuesEvent
2020-07-12 02:43:58
GoogleChrome/web.dev
https://api.github.com/repos/GoogleChrome/web.dev
closed
Content: Provide better/modern guidance on interpreting TTI vs other metrics
content update performance stale
https://web.dev/tti/ article does a fantastic job explaining precisely what TTI is... but I think could do more to help explain how that metric should be used in practice, especially compared to TBT or FID. > Time to Interactive (TTI) is an important, user-centric metric for measuring load responsiveness. It helps identify cases where a page looks interactive but actually isn't. A fast TTI helps ensure that the page is usable. Specifically: TTI as a number can jump very inelastically. It represents the moment in time after which know the page is idle, but it really doesn't say anything about _how busy_ the page was before idle. TBT does a better job measuring that in the lab (and FID is true goal in the field). Drastic improvements to your site may not change TTI at all (e.g. one lone 3p script introducing a longtask), while still improving real user experience. As such, I think the TTI article could do more to direct folks to track TBT/FID first and foremost, and treat TTI as a final boss.
True
Content: Provide better/modern guidance on interpreting TTI vs other metrics - https://web.dev/tti/ article does a fantastic job explaining precisely what TTI is... but I think could do more to help explain how that metric should be used in practice, especially compared to TBT or FID. > Time to Interactive (TTI) is an important, user-centric metric for measuring load responsiveness. It helps identify cases where a page looks interactive but actually isn't. A fast TTI helps ensure that the page is usable. Specifically: TTI as a number can jump very inelastically. It represents the moment in time after which know the page is idle, but it really doesn't say anything about _how busy_ the page was before idle. TBT does a better job measuring that in the lab (and FID is true goal in the field). Drastic improvements to your site may not change TTI at all (e.g. one lone 3p script introducing a longtask), while still improving real user experience. As such, I think the TTI article could do more to direct folks to track TBT/FID first and foremost, and treat TTI as a final boss.
non_main
content provide better modern guidance on interpreting tti vs other metrics article does a fantastic job explaining precisely what tti is but i think could do more to help explain how that metric should be used in practice especially compared to tbt or fid time to interactive tti is an important user centric metric for measuring load responsiveness it helps identify cases where a page looks interactive but actually isn t a fast tti helps ensure that the page is usable specifically tti as a number can jump very inelastically it represents the moment in time after which know the page is idle but it really doesn t say anything about how busy the page was before idle tbt does a better job measuring that in the lab and fid is true goal in the field drastic improvements to your site may not change tti at all e g one lone script introducing a longtask while still improving real user experience as such i think the tti article could do more to direct folks to track tbt fid first and foremost and treat tti as a final boss
0
1,531
6,572,225,322
IssuesEvent
2017-09-11 00:17:00
ansible/ansible-modules-extras
https://api.github.com/repos/ansible/ansible-modules-extras
closed
lxc_container container_config update is broken
affects_2.0 bug_report cloud waiting_on_maintainer
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME lxc_container ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` (bug still present in devel branch) ##### CONFIGURATION Default Ansible configuration. ##### OS / ENVIRONMENT N/A ##### SUMMARY lxc_container handles container configuration via the container_configuration parameter. Unfortunately, the logic behind the configuration update is broken, as it duplicates the key and never replaces old values. Morever, when the configuration file contains duplicated keys, the script can fail to make any changes. ##### STEPS TO REPRODUCE Exec the following playbook ``` --- - hosts: myhost tasks: - name: fist config update lxc_container: name: mycontainer container_config: - "lxc.start.auto = 0" - name: second config update lxc_container: name: mycontainer container_config: - "lxc.start.auto = 1" ``` ##### EXPECTED RESULTS ``` $ grep start mycontainer/config lxc.start.auto = 1 ``` ##### ACTUAL RESULTS ``` $ grep start mycontainer/config lxc.start.auto = 0 lxc.start.auto = 1 ```
True
lxc_container container_config update is broken - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME lxc_container ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` (bug still present in devel branch) ##### CONFIGURATION Default Ansible configuration. ##### OS / ENVIRONMENT N/A ##### SUMMARY lxc_container handles container configuration via the container_configuration parameter. Unfortunately, the logic behind the configuration update is broken, as it duplicates the key and never replaces old values. Morever, when the configuration file contains duplicated keys, the script can fail to make any changes. ##### STEPS TO REPRODUCE Exec the following playbook ``` --- - hosts: myhost tasks: - name: fist config update lxc_container: name: mycontainer container_config: - "lxc.start.auto = 0" - name: second config update lxc_container: name: mycontainer container_config: - "lxc.start.auto = 1" ``` ##### EXPECTED RESULTS ``` $ grep start mycontainer/config lxc.start.auto = 1 ``` ##### ACTUAL RESULTS ``` $ grep start mycontainer/config lxc.start.auto = 0 lxc.start.auto = 1 ```
main
lxc container container config update is broken issue type bug report component name lxc container ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides bug still present in devel branch configuration default ansible configuration os environment n a summary lxc container handles container configuration via the container configuration parameter unfortunately the logic behind the configuration update is broken as it duplicates the key and never replaces old values morever when the configuration file contains duplicated keys the script can fail to make any changes steps to reproduce exec the following playbook hosts myhost tasks name fist config update lxc container name mycontainer container config lxc start auto name second config update lxc container name mycontainer container config lxc start auto expected results grep start mycontainer config lxc start auto actual results grep start mycontainer config lxc start auto lxc start auto
1
5,097
26,007,868,564
IssuesEvent
2022-12-20 21:20:34
aws/aws-sam-cli
https://api.github.com/repos/aws/aws-sam-cli
closed
Websocket Support for API Gateway
type/feature area/local/start-api stage/pm-review maintainer/need-followup
### Describe your idea/feature/enhancement As per the [announcement](https://aws.amazon.com/blogs/compute/announcing-websocket-apis-in-amazon-api-gateway/) of WebSocket support in API Gateway, adding WebSocket support to SAM would facilitate local development. ### Proposal Add WebSocket support to API Gateway functionality. This may be entirely out of scope of the SAM project, but came to mind when initially exploring API Gateway with WebSocket support.
True
Websocket Support for API Gateway - ### Describe your idea/feature/enhancement As per the [announcement](https://aws.amazon.com/blogs/compute/announcing-websocket-apis-in-amazon-api-gateway/) of WebSocket support in API Gateway, adding WebSocket support to SAM would facilitate local development. ### Proposal Add WebSocket support to API Gateway functionality. This may be entirely out of scope of the SAM project, but came to mind when initially exploring API Gateway with WebSocket support.
main
websocket support for api gateway describe your idea feature enhancement as per the of websocket support in api gateway adding websocket support to sam would facilitate local development proposal add websocket support to api gateway functionality this may be entirely out of scope of the sam project but came to mind when initially exploring api gateway with websocket support
1
811,393
30,286,088,667
IssuesEvent
2023-07-08 17:48:53
Selody-project/Backend
https://api.github.com/repos/Selody-project/Backend
closed
[BACK-TASK] GroupEvent field.
Priority 2
### Description The use case for `Confirmed`, `Possible`, and `impossible` is not clear. After discussing how this field should be handled, it seems we should either remove the variable or a new request will be needed. https://github.com/Selody-project/Backend/blob/main/src/models/groupSchedule.js#L57 ### Links **Pre-requisite**: [Link to pre-req task](https://github.com/xERN-shareANDcommunity/Backend)
1.0
[BACK-TASK] GroupEvent field. - ### Description The use case for `Confirmed`, `Possible`, and `impossible` is not clear. After discussing how this field should be handled, it seems we should either remove the variable or a new request will be needed. https://github.com/Selody-project/Backend/blob/main/src/models/groupSchedule.js#L57 ### Links **Pre-requisite**: [Link to pre-req task](https://github.com/xERN-shareANDcommunity/Backend)
non_main
groupevent field description the use case for confirmed possible impossible is not clear after discussing how this field should be handled it seems we should either remove the variable or a new request will be needed links pre requisite
0
141,306
5,434,788,467
IssuesEvent
2017-03-05 11:00:52
ruudgreven/robocodecup
https://api.github.com/repos/ruudgreven/robocodecup
closed
Add dropdown to admin page for selecting a competition and a round and not a free-form textfield
low priority
When you upload a file, the user should type in the name of the competition and the round. This can be replaced by dropdowns.
1.0
Add dropdown to admin page for selecting a competition and a round and not a free-form textfield - When you upload a file, the user should type in the name of the competition and the round. This can be replaced by dropdowns.
non_main
add dropdown to admin page for selecting a competition and a round and not a free form textfield when you upload a file the user should type in the name of the competition and the round this can be replaced by dropdowns
0
132,815
10,764,995,450
IssuesEvent
2019-11-01 09:50:44
appium/appium
https://api.github.com/repos/appium/appium
closed
[iOS] Appium returns wrong element attribute value
ThirdParty XCUITest
## The problem I am trying to get elements attribute "value" value. At first, I print page source (pasting here just one element I am interested in): ``` <XCUIElementTypeSwitch type="XCUIElementTypeSwitch" value="1" name="s_settings_online_switch_value" label="Use online shopping" enabled="true" visible="true" x="349" y="385" width="51" height="32"/> ``` Here I clearly see that value=1. Next, I am calling GetAttribute method to find attributes "value" value. I get that value=0. Appium server log's shows: ``` 2019-10-23 09:27:02:394 [MJSONWP (d360e76d)] Calling AppiumDriver.getAttribute() with args: ["value","32010000-0000-0000-3607-000000000000","d360e76d-af5e-4883-87c8-4c6663172511"] 2019-10-23 09:27:02:394 [XCUITest] Executing command 'getAttribute' 2019-10-23 09:27:02:396 [WD Proxy] Matched '/element/32010000-0000-0000-3607-000000000000/attribute/value' to command name 'getAttribute' 2019-10-23 09:27:02:396 [WD Proxy] Proxying [GET /element/32010000-0000-0000-3607-000000000000/attribute/value] to [GET http://localhost:8103/session/1253BD09-66C2-47A6-91DA-FC8EEB3D47E3/element/32010000-0000-0000-3607-000000000000/attribute/value] with no body 2019-10-23 09:27:03:025 [WD Proxy] Got response with status 200: { 2019-10-23 09:27:03:026 [WD Proxy] "value" : "0", 2019-10-23 09:27:03:026 [WD Proxy] "sessionId" : "1253BD09-66C2-47A6-91DA-FC8EEB3D47E3" 2019-10-23 09:27:03:026 [WD Proxy] } 2019-10-23 09:27:03:026 [MJSONWP (d360e76d)] Responding to client with driver.getAttribute() result: "0" ``` I have encountered this after updating Appium from 1.14.2 to 1.15.1. ## Environment * Appium version (or git revision) that exhibits the issue: 1.15.1 * Last Appium version that did not exhibit the issue (if applicable): 1.14.2 * Desktop OS/version used to run Appium: macOS Mojave 10.14.6 * Node.js version (unless using Appium.app|exe): 10.16.3 * Npm or Yarn package manager: 6.9.0 * Mobile platform/version under test: iOS 11.0.3 * Real device or emulator/simulator: real device iPhone 6s+ * Appium CLI or Appium.app|exe: 1.15.1 ## Code To Reproduce Issue [ Good To Have ] Something like this: ``` var state = element.GetAttribute("value"); ```
1.0
[iOS] Appium returns wrong element attribute value - ## The problem I am trying to get elements attribute "value" value. At first, I print page source (pasting here just one element I am interested in): ``` <XCUIElementTypeSwitch type="XCUIElementTypeSwitch" value="1" name="s_settings_online_switch_value" label="Use online shopping" enabled="true" visible="true" x="349" y="385" width="51" height="32"/> ``` Here I clearly see that value=1. Next, I am calling GetAttribute method to find attributes "value" value. I get that value=0. Appium server log's shows: ``` 2019-10-23 09:27:02:394 [MJSONWP (d360e76d)] Calling AppiumDriver.getAttribute() with args: ["value","32010000-0000-0000-3607-000000000000","d360e76d-af5e-4883-87c8-4c6663172511"] 2019-10-23 09:27:02:394 [XCUITest] Executing command 'getAttribute' 2019-10-23 09:27:02:396 [WD Proxy] Matched '/element/32010000-0000-0000-3607-000000000000/attribute/value' to command name 'getAttribute' 2019-10-23 09:27:02:396 [WD Proxy] Proxying [GET /element/32010000-0000-0000-3607-000000000000/attribute/value] to [GET http://localhost:8103/session/1253BD09-66C2-47A6-91DA-FC8EEB3D47E3/element/32010000-0000-0000-3607-000000000000/attribute/value] with no body 2019-10-23 09:27:03:025 [WD Proxy] Got response with status 200: { 2019-10-23 09:27:03:026 [WD Proxy] "value" : "0", 2019-10-23 09:27:03:026 [WD Proxy] "sessionId" : "1253BD09-66C2-47A6-91DA-FC8EEB3D47E3" 2019-10-23 09:27:03:026 [WD Proxy] } 2019-10-23 09:27:03:026 [MJSONWP (d360e76d)] Responding to client with driver.getAttribute() result: "0" ``` I have encountered this after updating Appium from 1.14.2 to 1.15.1. ## Environment * Appium version (or git revision) that exhibits the issue: 1.15.1 * Last Appium version that did not exhibit the issue (if applicable): 1.14.2 * Desktop OS/version used to run Appium: macOS Mojave 10.14.6 * Node.js version (unless using Appium.app|exe): 10.16.3 * Npm or Yarn package manager: 6.9.0 * Mobile platform/version under test: iOS 11.0.3 * Real device or emulator/simulator: real device iPhone 6s+ * Appium CLI or Appium.app|exe: 1.15.1 ## Code To Reproduce Issue [ Good To Have ] Something like this: ``` var state = element.GetAttribute("value"); ```
non_main
appium returns wrong element attribute value the problem i am trying to get elements attribute value value at first i print page source pasting here just one element i am interested in here i clearly see that value next i am calling getattribute method to find attributes value value i get that value appium server log s shows calling appiumdriver getattribute with args executing command getattribute matched element attribute value to command name getattribute proxying to with no body got response with status value sessionid responding to client with driver getattribute result i have encountered this after updating appium from to environment appium version or git revision that exhibits the issue last appium version that did not exhibit the issue if applicable desktop os version used to run appium macos mojave node js version unless using appium app exe npm or yarn package manager mobile platform version under test ios real device or emulator simulator real device iphone appium cli or appium app exe code to reproduce issue something like this var state element getattribute value
0
43,303
5,626,014,976
IssuesEvent
2017-04-04 20:48:40
GSA/data.gov
https://api.github.com/repos/GSA/data.gov
closed
Add RSS Icon and Source name for all automated rss posts
design Feature Sprint topics usability
The Updates section of the Topic shows posts which can be either entered manually or in an automated fashion using RSS Feeds. For the one posted using RSS Feeds, show RSS icon next to the title and show source name next to it.
1.0
Add RSS Icon and Source name for all automated rss posts - The Updates section of the Topic shows posts which can be either entered manually or in an automated fashion using RSS Feeds. For the one posted using RSS Feeds, show RSS icon next to the title and show source name next to it.
non_main
add rss icon and source name for all automated rss posts the updates section of the topic shows posts which can be either entered manually or in an automated fashion using rss feeds for the one posted using rss feeds show rss icon next to the title and show source name next to it
0
4,109
19,514,735,186
IssuesEvent
2021-12-29 08:13:33
tgstation/tgstation
https://api.github.com/repos/tgstation/tgstation
closed
Remote signalers have extremely outdated ui code
Maintainability/Hinders improvements
```/obj/item/assembly/signaler/ui_interact(mob/user, flag1) . = ..() if(is_secured(user)) var/t1 = "-------" var/dat = {" <TT> <A href='byond://?src=[REF(src)];send=1'>Send Signal</A><BR> <B>Frequency/Code</B> for signaler:<BR> Frequency: <A href='byond://?src=[REF(src)];freq=-10'>-</A> <A href='byond://?src=[REF(src)];freq=-2'>-</A> [format_frequency(src.frequency)] <A href='byond://?src=[REF(src)];freq=2'>+</A> <A href='byond://?src=[REF(src)];freq=10'>+</A><BR> Code: <A href='byond://?src=[REF(src)];code=-5'>-</A> <A href='byond://?src=[REF(src)];code=-1'>-</A> [src.code] <A href='byond://?src=[REF(src)];code=1'>+</A> <A href='byond://?src=[REF(src)];code=5'>+</A><BR> [t1] </TT>"} user << browse(dat, "window=radio") onclose(user, "radio") return ``` Need I say more? The rest of the code is just as bad, might do this myself.
True
Remote signalers have extremely outdated ui code - ```/obj/item/assembly/signaler/ui_interact(mob/user, flag1) . = ..() if(is_secured(user)) var/t1 = "-------" var/dat = {" <TT> <A href='byond://?src=[REF(src)];send=1'>Send Signal</A><BR> <B>Frequency/Code</B> for signaler:<BR> Frequency: <A href='byond://?src=[REF(src)];freq=-10'>-</A> <A href='byond://?src=[REF(src)];freq=-2'>-</A> [format_frequency(src.frequency)] <A href='byond://?src=[REF(src)];freq=2'>+</A> <A href='byond://?src=[REF(src)];freq=10'>+</A><BR> Code: <A href='byond://?src=[REF(src)];code=-5'>-</A> <A href='byond://?src=[REF(src)];code=-1'>-</A> [src.code] <A href='byond://?src=[REF(src)];code=1'>+</A> <A href='byond://?src=[REF(src)];code=5'>+</A><BR> [t1] </TT>"} user << browse(dat, "window=radio") onclose(user, "radio") return ``` Need I say more? The rest of the code is just as bad, might do this myself.
main
remote signalers have extremely outdated ui code obj item assembly signaler ui interact mob user if is secured user var var dat send signal frequency code for signaler frequency code user browse dat window radio onclose user radio return need i say more the rest of the code is just as bad might do this myself
1