column         type           stats
Unnamed: 0     int64          0 .. 832k
id             float64        2.49B .. 32.1B
type           stringclasses  1 value
created_at     stringlengths  19 .. 19
repo           stringlengths  7 .. 112
repo_url       stringlengths  36 .. 141
action         stringclasses  3 values
title          stringlengths  1 .. 744
labels         stringlengths  4 .. 574
body           stringlengths  9 .. 211k
index          stringclasses  10 values
text_combine   stringlengths  96 .. 211k
label          stringclasses  2 values
text           stringlengths  96 .. 188k
binary_label   int64          0 .. 1
Unnamed: 0 = 18,300
id = 24,412,162,130
type = IssuesEvent
created_at = 2022-10-05 13:16:10
repo = Project60/org.project60.sepa
repo_url = https://api.github.com/repos/Project60/org.project60.sepa
action = closed
title = Recurring contributions payment processor needs certain configuration
labels = wontfix, needs funding, CiviCRM 4.5, CiviCRM 4.6, payment processor
body =
If you want to use the SDD payment processor to collect recurring contributions, your contribution page needs to have a certain setup:

- 'Recurring Contributions' needs to be enabled (obviously)
- 'Supported recurring units' needs to be `month` and `year`
- 'Support recurring intervals' needs to be activated
- The SDD payment processor needs to be selected (ideally upon page load), so that the changes can take effect.

These conditions are obviously pretty restrictive, but this was the scenario specified by the two customers. If you need these restrictions removed, please think about funding this fix or providing a PR of your own.
index = 1.0
label = process
binary_label = 1
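The page requirements listed in the issue above amount to a small configuration checklist. A minimal sketch of that checklist as a validation helper (the method and parameter names are hypothetical; CiviCRM's real configuration API is not shown here):

```java
import java.util.Set;

public class SddPageCheck {
    // Hypothetical model of the contribution-page settings named in the issue.
    static boolean sddRecurringSetupOk(boolean recurringEnabled,
                                       Set<String> recurringUnits,
                                       boolean recurringIntervalsEnabled,
                                       String selectedProcessor) {
        return recurringEnabled
                && recurringUnits.equals(Set.of("month", "year"))
                && recurringIntervalsEnabled
                && "SDD".equals(selectedProcessor);
    }

    public static void main(String[] args) {
        // A page meeting all four conditions passes; dropping any one fails.
        System.out.println(sddRecurringSetupOk(true, Set.of("month", "year"), true, "SDD"));
        System.out.println(sddRecurringSetupOk(true, Set.of("month"), true, "SDD"));
    }
}
```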
Unnamed: 0 = 16,089
id = 20,256,227,003
type = IssuesEvent
created_at = 2022-02-14 23:39:51
repo = varabyte/kobweb
repo_url = https://api.github.com/repos/varabyte/kobweb
action = opened
title = Use Gradle @Nested annotation?
labels = good first issue, process
body =
Kobweb's Gradle blocks are nested, e.g.

```
kobweb {
    index { ... }
}
kobwebx {
    markdown { ... }
}
```

We're doing this manually right now using nested `ExtensionAware.create` calls, but I just learned about the `@Nested` annotation. Maybe that can clean up the code a bit?
index = 1.0
label = process
binary_label = 1
Unnamed: 0 = 210,436
id = 23,754,745,007
type = IssuesEvent
created_at = 2022-09-01 01:12:33
repo = benlazarine/cas-overlay
repo_url = https://api.github.com/repos/benlazarine/cas-overlay
action = opened
title = CVE-2021-37714 (High) detected in jsoup-1.10.1.jar
labels = security vulnerability
body =
## CVE-2021-37714 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jsoup-1.10.1.jar</b></p></summary> <p>jsoup HTML parser</p> <p>Path to dependency file: /pom.xml</p> <p>Path to vulnerable library: /root/.m2/repository/org/jsoup/jsoup/1.10.1/jsoup-1.10.1.jar</p> <p> Dependency Hierarchy: - cas-server-support-oauth-webflow-5.3.7.jar (Root Library) - wget-1.4.9.jar - :x: **jsoup-1.10.1.jar** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> jsoup is a Java library for working with HTML. Those using jsoup versions prior to 1.14.2 to parse untrusted HTML or XML may be vulnerable to DOS attacks. If the parser is run on user supplied input, an attacker may supply content that causes the parser to get stuck (loop indefinitely until cancelled), to complete more slowly than usual, or to throw an unexpected exception. This effect may support a denial of service attack. The issue is patched in version 1.14.2. There are a few available workarounds. Users may rate limit input parsing, limit the size of inputs based on system resources, and/or implement thread watchdogs to cap and timeout parse runtimes. 
<p>Publish Date: 2021-08-18 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37714>CVE-2021-37714</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://jsoup.org/news/release-1.14.2">https://jsoup.org/news/release-1.14.2</a></p> <p>Release Date: 2021-08-18</p> <p>Fix Resolution (org.jsoup:jsoup): 1.14.2</p> <p>Direct dependency fix Resolution (org.apereo.cas:cas-server-support-oauth-webflow): 6.0.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
index = True
label = non_process
binary_label = 0
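One workaround the advisory above names is a thread watchdog that caps and times out parse runtimes. A minimal sketch of that pattern with a plain `ExecutorService`; `slowParse` is a hypothetical stand-in for the real `Jsoup.parse` call, which is not shown here:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class ParseWatchdog {
    // Hypothetical stand-in for parsing untrusted input; sleeps briefly
    // to simulate parser work.
    static String slowParse(String html) throws InterruptedException {
        Thread.sleep(50);
        return "parsed:" + html.length();
    }

    // Run the parse on a worker thread and give up after timeoutMs,
    // interrupting the worker so a stuck parse cannot run forever.
    static String parseWithTimeout(String html, long timeoutMs) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Future<String> result = pool.submit(() -> slowParse(html));
            try {
                return result.get(timeoutMs, TimeUnit.MILLISECONDS);
            } catch (TimeoutException e) {
                result.cancel(true); // interrupt the parse thread
                return null;         // treat as a rejected input
            }
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(parseWithTimeout("<p>ok</p>", 2000)); // completes
        System.out.println(parseWithTimeout("<p>ok</p>", 1));    // times out -> null
    }
}
```

The same shape works for the other suggested mitigations (rate limiting, input-size caps) layered in front of the submit call.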
Unnamed: 0 = 87,726
id = 3,757,348,067
type = IssuesEvent
created_at = 2016-03-13 22:42:39
repo = squiggle-lang/squiggle-lang
repo_url = https://api.github.com/repos/squiggle-lang/squiggle-lang
action = closed
title = Stop using `$ref`, add `declare`
labels = enhancement, high priority
body =
Squiggle needs a way to "declare" an ambient variable, such as Node's `__dirname`, or `exports` when using Browserify, which are not global, and currently impossible to use in Squiggle without linter warnings. We could use these declarations also to take care of mutual recursion, and switch back to just reporting variable use before initialization statically.

```
declare isOdd

def isEven(n) do
    match n
    case 0 then true
    case n then isOdd(n - 1)
    end
end

def isOdd(n) do
    match n
    case 0 then false
    case n then isEven(n - 1)
    end
end
```

or

```
let {console} = global
declare __dirname
console.log(__dirname)
```
index = 1.0
label = non_process
binary_label = 0
Unnamed: 0 = 84,953
id = 10,422,693,224
type = IssuesEvent
created_at = 2019-09-16 09:36:48
repo = reactor/reactor-core
repo_url = https://api.github.com/repos/reactor/reactor-core
action = closed
title = Mention importing Java9Stubs / Java8Stubs in CONTRIBUTING.md
labels = area/java9, type/chores, type/documentation
body =
### Expected behavior

Project should be easily imported and built in any IDE.

### Actual behavior

A pre-checked-out project can suddenly fail to build in IntelliJ, notably after the `StackWalker` stub is introduced. One needs to make sure that:

- the Gradle JVM matches the Modules Settings JDK (preferably 1.8)
- IntelliJ is configured to delegate the build to Gradle (in Build Tools > Gradle > Runner), and the project setting one level above uses that default
- the project has been rebuilt

At this point, with JDK 8 the `reactor-core/src/main/java9stubs` folder should be considered a source set by IntelliJ and the project should build fine.

**It would be good to have a mention of these issues in the `CONTRIBUTING.md` documentation**.

### Common symptoms

- `package StackWalker does not exist` (probably building under JDK8 but java9stubs is not added to sources)
- `cannot find symbol @CallerSensitive` (probably building with JDK11 and importing using JDK8)

### Reactor Core version

3.2.4.RELEASE

### JVM version (e.g. `java -version`)

1.8 (module JDK) + another version for the Gradle importer
index = 1.0
label = non_process
binary_label = 0
Unnamed: 0 = 9,936
id = 12,972,700,677
type = IssuesEvent
created_at = 2020-07-21 12:57:18
repo = keep-network/keep-core
repo_url = https://api.github.com/repos/keep-network/keep-core
action = closed
title = Balance reported on new overview page
labels = :old_key: token dashboard process & client team
body =
### Background

I believe this was introduced in [this](https://github.com/keep-network/keep-core/pull/1847) PR. Last week we deployed an updated token-dashboard against Ropsten, from master. The last fully functional version of the token dashboard deployed was https://github.com/keep-network/keep-core/releases/tag/v1.2.4-rc, which did not include the overview page. The token overview balance appears to be adding total grant value + delegated value from the grant to produce the total balance.

tl;dr: I think we're looking for some changes between https://github.com/keep-network/keep-core/releases/tag/v1.2.4-rc and https://github.com/keep-network/keep-core/tree/sthompson22/token-dash/use-tbtc-v1.0.3-rc

### How the issue was surfaced

Using an account with a fresh 300k token grant from the faucet, I was able to observe the 300k grant balance in the overview page. I did a 100k KEEP delegation which was successful; however, the overview page was now showing a 400k KEEP balance.

- [Branch](https://github.com/keep-network/keep-core/tree/sthompson22/token-dash/use-tbtc-v1.0.3-rc) for the deployment where the issue was found.

I reproduced this by using a different account, getting a 300k grant and delegating all 300k. The result in the overview page was a displayed token balance of 600k.

### How to reproduce

1. Get a grant
2. Check balance in overview page
3. Delegate some stake
4. Check balance in overview page
index = 1.0
label = process
binary_label = 1
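The suspected accounting error in the issue above is simple arithmetic: delegated stake is drawn out of the grant, so adding it back in double-counts it. A minimal sketch, using the numbers from the report (method names are hypothetical, not the dashboard's actual code):

```java
public class BalanceCheck {
    // What the overview page appears to compute: grant total plus the
    // delegated portion of that same grant (double-counting).
    static long displayedBalance(long grant, long delegatedFromGrant) {
        return grant + delegatedFromGrant;
    }

    // Delegation moves tokens within the grant; it does not mint new ones,
    // so the total should stay at the grant value.
    static long expectedBalance(long grant) {
        return grant;
    }

    public static void main(String[] args) {
        System.out.println(displayedBalance(300_000, 100_000)); // 400000, as reported
        System.out.println(expectedBalance(300_000));           // 300000
    }
}
```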
Unnamed: 0 = 8,115
id = 11,302,386,402
type = IssuesEvent
created_at = 2020-01-17 17:32:38
repo = NationalSecurityAgency/ghidra
repo_url = https://api.github.com/repos/NationalSecurityAgency/ghidra
action = closed
title = x86: size of a pushed segment register should be extended
labels = Feature: Processor/x86, Type: Bug
body =
**Describe the bug**

The size of segment registers (`GS`, `FS`, `ES`, etc.) on the stack should be zero-extended to the operand size.

**To Reproduce**

Steps to reproduce the behavior:

1. Open an x86 program (32 or 64 bit).
2. Patch the listing: add `PUSH ES`, for example.
3. Stack depth changes by 16 bits (see Fig. 1), but the difference should be 32 or 64 bits.

**Expected behavior**

Quote from the Intel manual (from Ghidra, p. 1163):

> If the source operand is a segment register (16 bits) and the operand size is 64-bits, a zero-extended value is pushed on the stack; if the operand size is 32-bits, either a zero-extended value is pushed on the stack or the segment selector is written on the stack using a 16-bit move. For the last case, all recent Core and Atom processors perform a 16-bit move, leaving the upper portion of the stack location unmodified.

**Screenshots**

![push_es](https://user-images.githubusercontent.com/4244396/71314205-1caad800-2455-11ea-9cce-80ca7a5a54ba.png)

> Fig. 1. Wrong change of stack depth

**Environment (please complete the following information):**

- OS: Linux 5.3.11
- Java Version: 13.0.1
- Ghidra Version: ccabecec
index = 1.0
label = process
binary_label = 1
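The quoted manual text above implies that the stack-pointer delta for a segment-register push equals the operand size (the 16-bit selector occupies a full operand-size slot), which is exactly what the issue asks Ghidra to model. A tiny sketch of that rule (the helper name is hypothetical, not Ghidra's SLEIGH model):

```java
public class SegPush {
    // Stack-pointer delta in bytes for PUSH <segment register>: the selector
    // is zero-extended into a full operand-size slot per the quoted manual.
    static int pushSegRegDeltaBytes(int operandSizeBits) {
        return operandSizeBits / 8;
    }

    public static void main(String[] args) {
        System.out.println(pushSegRegDeltaBytes(32)); // 4 bytes, not the 2 reported
        System.out.println(pushSegRegDeltaBytes(64)); // 8 bytes
    }
}
```

(A 16-bit operand size, e.g. via the `66h` prefix, would still give 2 bytes; the bug is only about the 32- and 64-bit cases.)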
Unnamed: 0 = 49,119
id = 12,289,821,225
type = IssuesEvent
created_at = 2020-05-09 23:43:52
repo = nunit/nunit
repo_url = https://api.github.com/repos/nunit/nunit
action = closed
title = Azure DevOps does not build release branch
labels = is:build, pri:high
body =
We need to update the configuration so that the release branch builds automatically along with any PRs into the release branch. For future releases, I would like to start building and releasing from there. For 7.12 I am testing the process by preparing a pull request into the release branch and I will release the artifacts that are built with it. CC @jnm2
index = 1.0
label = non_process
binary_label = 0
Unnamed: 0 = 18,130
id = 24,168,254,787
type = IssuesEvent
created_at = 2022-09-22 16:47:13
repo = GoogleCloudPlatform/terraform-mean-cloudrun-mongodb
repo_url = https://api.github.com/repos/GoogleCloudPlatform/terraform-mean-cloudrun-mongodb
action = closed
title = Migrate this repo to final destination when ready
labels = process
body =
- [ ] Need to determine ultimate home for the project.
- [ ] Migrate.

Considerations:
- #11
- Or just keep this repo
index = 1.0
label = process
binary_label = 1
Unnamed: 0 = 43,083
id = 5,575,204,343
type = IssuesEvent
created_at = 2017-03-28 00:55:33
repo = 18F/fec-cms
repo_url = https://api.github.com/repos/18F/fec-cms
action = closed
title = Implement and test site-wide search prototype
labels = Work: Design, Work: Front-end
body =
This is an exploratory issue to help us better understand the user needs (and potential ways to address them) around a site-wide search. To do this, we will use a combination of combing through external resources and doing lightweight prototyping using the out-of-the-box DigitalGov search tools.

**Questions to consider**

- How do we handle search results from the new and legacy sites?
- Is there a typeahead component to allow users to jump directly to certain pages?
- How do users expect results to be presented? What sorts of things do they expect to search through?

**Completion criteria**

- [x] Examine search logs from current fec.gov for trends and patterns
- [x] Gather and synthesize external research on site-wide search patterns and best practices
- [x] Create a basic implementation of the DigitalGov search tool (perhaps with some version of a typeahead)
- [x] Test the prototype with users
- [x] Consolidate findings and make recommendations for next steps
index = 1.0
label = non_process
binary_label = 0
Unnamed: 0 = 9,557
id = 12,517,389,605
type = IssuesEvent
created_at = 2020-06-03 11:03:38
repo = lishu/vscode-svg2
repo_url = https://api.github.com/repos/lishu/vscode-svg2
action = closed
title = [Enhancement idea] Option to open the preview only by right-clicking
labels = In process
body =
Hello! What do you think about adding the option to open the preview by right-clicking? I would like to right-click and open the preview (**without the source code of the file**). It would be very, very, very useful for previewing SVG files when you have many files that you want to open. This is the native way of viewing JPG files in VSCode.
index = 1.0
label = process
binary_label = 1
Unnamed: 0 = 7,706
id = 10,800,079,802
type = IssuesEvent
created_at = 2019-11-06 13:34:01
repo = prisma-labs/issues
repo_url = https://api.github.com/repos/prisma-labs/issues
action = opened
title = Public notion content/process?
labels = type/process
body =
We have an internal Notion page where a lot of valuable planning and process content lives. Some of that might sometimes be of interest to the current/future community. It also makes our lives easier when we can share/link people to strong content that addresses their questions, e.g. "why is this inactive", "what are you guys currently focused on", "what is your roadmap". For example, maybe we want https://github.com/prisma-labs/issues/issues/9 to be publicly accessible. Or perhaps a hiring table about open opportunities.

Another benefit might be more motivation for us to keep the content quality up. We've been pretty good so far (?) but it is demanding to be sure. But if we know there is greater value beyond just an internal 1-2 people, well, that means something.

One tradeoff is that it adds overhead for us internally to keep track of private/public content. Generally it doesn't matter, but it's also easy right now because we just don't have to think about it (there are two exceptions in the tech docs table where pages have been made public).

Possible content we can consider:

- tech docs
- roadmap
- sprints
- quarters
- teamflow

So concrete things we need to do:

1. General decision on if we want anything public
2. If so, what parts? Some, most, all?
3. For the chosen content, how do we re-structure our Labs page downward (think of it as a tree of content where the Labs home page is the root) to accommodate the mix of private and public.
1.0
Public notion content/process? - We have an internal notion page where a lot of valuable planning and process content lives. Some of that might sometimes be of interest to the current/future community. It also makes our lives easier when we can share/link people to strong content that addresses their questions e.g. "why is this inactive" "what are you guys currently focused on" "what is your roadmap". For example maybe we want https://github.com/prisma-labs/issues/issues/9 to be publicly accessible. Or perhaps a hiring table about open opportunities. Another benefit might be more motivation for us to keep the content quality up. We've been pretty good so far (?) but it is demanding to be sure. But if we know there is greater value beyond just an internal 1-2 people, well, that means something. One tradeoff is that it adds overhead for us internally to keep track of private/public content. Generally it doesn't matter, but its also easy right now because we just don't have to think about it (there are two exceptions in the tech docs table where pages have been made public). Possible content we can consider: - tech docs - roadmap - sprints - quarters - teamflow So concrete things we need to do: 1. General decision on if we want anything public 2. If so, what parts? Some, most, all? 3. For the chosen content, how do we re-structure our Labs page downward (think of it as as tree of content where the labs home page is the root) to accommodate the mix of private and public.
process
public notion content process we have an internal notion page where a lot of valuable planning and process content lives some of that might sometimes be of interest to the current future community it also makes our lives easier when we can share link people to strong content that addresses their questions e g why is this inactive what are you guys currently focused on what is your roadmap for example maybe we want to be publicly accessible or perhaps a hiring table about open opportunities another benefit might be more motivation for us to keep the content quality up we ve been pretty good so far but it is demanding to be sure but if we know there is greater value beyond just an internal people well that means something one tradeoff is that it adds overhead for us internally to keep track of private public content generally it doesn t matter but its also easy right now because we just don t have to think about it there are two exceptions in the tech docs table where pages have been made public possible content we can consider tech docs roadmap sprints quarters teamflow so concrete things we need to do general decision on if we want anything public if so what parts some most all for the chosen content how do we re structure our labs page downward think of it as as tree of content where the labs home page is the root to accommodate the mix of private and public
1
63,146
17,394,015,205
IssuesEvent
2021-08-02 11:07:53
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
closed
Video doesn't resume one way after hold
A-VoIP T-Defect
Looks like when we resume a video call from being held, the video only resumes one way. When I tested, it stayed black on the side that un-held.
1.0
Video doesn't resume one way after hold - Looks like when we resume a video call from being held, the video only resumes one way. When I tested, it stayed black on the side that un-held.
non_process
video doesn t resume one way after hold looks like when we resume a video call from being held the video only resumes one way when i tested it stayed black on the side that un held
0
3,762
6,734,899,697
IssuesEvent
2017-10-18 19:43:39
cypress-io/cypress
https://api.github.com/repos/cypress-io/cypress
closed
Jazz up the Readme
process: contributing
The `README.md` for this repo is kind of sad. 😢 I would like to see the readme more focused on explaining what cypress is, why you would use cypress, **simple install instructions**, etc - for our users and have contributing the secondary focus. **Ideas to jazz up** - Add header from old readme branch. - Embed 'Why Cypress' video directly into readme. - Have simple 'npm install' instructions. - Add some cool badges. (gitter, npm package) - Don't make it too long - refer to the docs when necessary.
1.0
Jazz up the Readme - The `README.md` for this repo is kind of sad. 😢 I would like to see the readme more focused on explaining what cypress is, why you would use cypress, **simple install instructions**, etc - for our users and have contributing the secondary focus. **Ideas to jazz up** - Add header from old readme branch. - Embed 'Why Cypress' video directly into readme. - Have simple 'npm install' instructions. - Add some cool badges. (gitter, npm package) - Don't make it too long - refer to the docs when necessary.
process
jazz up the readme the readme md for this repo is kind of sad 😢 i would like to see the readme more focused on explaining what cypress is why you would use cypress simple install instructions etc for our users and have contributing the secondary focus ideas to jazz up add header from old readme branch embed why cypress video directly into readme have simple npm install instructions add some cool badges gitter npm package don t make it too long refer to the docs when necessary
1
9,450
12,429,227,436
IssuesEvent
2020-05-25 08:03:15
ESMValGroup/ESMValCore
https://api.github.com/repos/ESMValGroup/ESMValCore
closed
ssp534-over question?
preprocessor question
I've encountered a new experiment that I've never encountered before. The ScenarioMIP experiment, `ssp534-over`, branches from the run `ssp585` in 2040. If I put this in a recipe like this: ``` exp: [historical, ssp585, ssp534-over] ``` Will ESMValTool be able to figure out that we don't want any data from `ssp585` after 2040?
1.0
ssp534-over question? - I've encountered a new experiment that I've never encountered before. The ScenarioMIP experiment, `ssp534-over`, branches from the run `ssp585` in 2040. If I put this in a recipe like this: ``` exp: [historical, ssp585, ssp534-over] ``` Will ESMValTool be able to figure out that we don't want any data from `ssp585` after 2040?
process
over question i ve encountered a new experiment that i ve never encountered before the scenariomip experiment over branches from the run in if i put this in a recipe like this exp will esmvaltool be able to figure out that we don t want any data from after
1
20,501
27,166,344,126
IssuesEvent
2023-02-17 15:37:46
metabase/metabase
https://api.github.com/repos/metabase/metabase
closed
Native query fails with commented-out parameter
Type:Bug Priority:P2 Querying/Processor Querying/Parameters & Variables Querying/Native .Backend
*Environment*: - Your browser and the version: 66.0.3359.139 - Your operating system: OSX 10.13.3 - Your databases: Sample dataset - Metabase version: v0.28.5 - Metabase hosting environment: Unknown - Metabase internal database: Redshift *Repeatable steps to reproduce the issue*: 1. Create a new native SQL query 2. Enter query `SELECT * FROM REVIEWS WHERE PRODUCT_ID = 1 -- {{target_product_id}}` 3. Run query 4. Observe the error message `Unable to substitute 'target_product_id': param not specified. Found: ("target_product_id")` or `Cannot run the query: missing required parameters: #{"target_product_id"}` The issue seems to be that the parser for detecting parameters is not aware of comments, so will pick up parameters that are commented out. :arrow_down: Please click the :+1: reaction instead of leaving a `+1` or `update?` comment
1.0
Native query fails with commented-out parameter - *Environment*: - Your browser and the version: 66.0.3359.139 - Your operating system: OSX 10.13.3 - Your databases: Sample dataset - Metabase version: v0.28.5 - Metabase hosting environment: Unknown - Metabase internal database: Redshift *Repeatable steps to reproduce the issue*: 1. Create a new native SQL query 2. Enter query `SELECT * FROM REVIEWS WHERE PRODUCT_ID = 1 -- {{target_product_id}}` 3. Run query 4. Observe the error message `Unable to substitute 'target_product_id': param not specified. Found: ("target_product_id")` or `Cannot run the query: missing required parameters: #{"target_product_id"}` The issue seems to be that the parser for detecting parameters is not aware of comments, so will pick up parameters that are commented out. :arrow_down: Please click the :+1: reaction instead of leaving a `+1` or `update?` comment
process
native query fails with commented out parameter environment your browser and the version your operating system osx your databases sample dataset metabase version metabase hosting environment unknown metabase internal database redshift repeatable steps to reproduce the issue create a new native sql query enter query select from reviews where product id target product id run query observe the error message unable to substitute target product id param not specified found target product id or cannot run the query missing required parameters target product id the issue seems to be that the parser for detecting parameters is not aware of comments so will pick up parameters that are commented out arrow down please click the reaction instead of leaving a or update comment
1
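The bug in this Metabase record — template parameters being detected even when they appear only inside SQL comments — can be sketched in a few lines. This is an illustration only: Metabase itself is not written in Python, and the function names and regexes below are assumptions, not its real implementation.

```python
import re

# Hypothetical pattern for {{param}} template variables
PARAM = re.compile(r"\{\{\s*(\w+)\s*\}\}")

def find_params(sql):
    """Naive scan: picks up {{param}} everywhere, including comments."""
    return set(PARAM.findall(sql))

def find_params_comment_aware(sql):
    """Strip /* ... */ block comments and -- line comments before scanning.
    (A real parser would also need to respect string literals.)"""
    without_block = re.sub(r"/\*.*?\*/", " ", sql, flags=re.DOTALL)
    without_line = re.sub(r"--[^\n]*", " ", without_block)
    return set(PARAM.findall(without_line))

# The query from the record's reproduction steps
query = "SELECT * FROM REVIEWS WHERE PRODUCT_ID = 1 -- {{target_product_id}}"
```

With this query, the naive scan reports `target_product_id` as a required parameter while the comment-aware scan reports none — the behaviour the issue asks for.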
15,799
19,987,126,090
IssuesEvent
2022-01-30 20:40:38
maticnetwork/miden
https://api.github.com/repos/maticnetwork/miden
closed
Add fmp register to the processor with associated operations
processor
As discussed in #73, to better support memory-based locals, we need to add `fmp` register to the processor. Operations needed to manipulate this register have been stubbed out in #83 - but we also need to implement them.
1.0
Add fmp register to the processor with associated operations - As discussed in #73, to better support memory-based locals, we need to add `fmp` register to the processor. Operations needed to manipulate this register have been stubbed out in #83 - but we also need to implement them.
process
add fmp register to the processor with associated operations as discussed in to better support memory based locals we need to add fmp register to the processor operations needed to manipulate this register have been stubbed out in but we also need to implement them
1
306,516
23,163,694,709
IssuesEvent
2022-07-29 20:59:50
gravitational/teleport
https://api.github.com/repos/gravitational/teleport
closed
[v.10.0] /docs/pages/enterprise/workflow/resource-requests.mdx | Clarify that Resource Access Request is in preview
documentation access-requests
## Details Make it clear in the docs that Resource Access Request is in preview. ### Category - Improve Existing
1.0
[v.10.0] /docs/pages/enterprise/workflow/resource-requests.mdx | Clarify that Resource Access Request is in preview - ## Details Make it clear in the docs that Resource Access Request is in preview. ### Category - Improve Existing
non_process
docs pages enterprise workflow resource requests mdx clarify that resource access request is in preview details make it clear in the docs that resource access request is in preview category improve existing
0
245,329
26,541,516,887
IssuesEvent
2023-01-19 19:41:07
gms-ws-sandbox/Java-Demo
https://api.github.com/repos/gms-ws-sandbox/Java-Demo
closed
CVE-2021-40690 (High) detected in xmlsec-2.1.4.jar - autoclosed
security vulnerability
## CVE-2021-40690 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xmlsec-2.1.4.jar</b></p></summary> <p>Apache XML Security for Java supports XML-Signature Syntax and Processing, W3C Recommendation 12 February 2002, and XML Encryption Syntax and Processing, W3C Recommendation 10 December 2002. As of version 1.4, the library supports the standard Java API JSR-105: XML Digital Signature APIs.</p> <p>Library home page: <a href="https://santuario.apache.org/">https://santuario.apache.org/</a></p> <p>Path to dependency file: /pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/santuario/xmlsec/2.1.4/xmlsec-2.1.4.jar,/target/owaspSecurityShepherd/WEB-INF/lib/xmlsec-2.1.4.jar</p> <p> Dependency Hierarchy: - :x: **xmlsec-2.1.4.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/gms-ws-sandbox/Java-Demo/commit/2f8c3546469435a8572a450a0e6ad10c4fe187f7">2f8c3546469435a8572a450a0e6ad10c4fe187f7</a></p> <p>Found in base branch: <b>dev</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> All versions of Apache Santuario - XML Security for Java prior to 2.2.3 and 2.1.7 are vulnerable to an issue where the "secureValidation" property is not passed correctly when creating a KeyInfo from a KeyInfoReference element. This allows an attacker to abuse an XPath Transform to extract any local .xml files in a RetrievalMethod element. 
<p>Publish Date: 2021-09-19 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-40690>CVE-2021-40690</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-40690">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-40690</a></p> <p>Release Date: 2021-09-19</p> <p>Fix Resolution: 2.1.7</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END -->
True
CVE-2021-40690 (High) detected in xmlsec-2.1.4.jar - autoclosed - ## CVE-2021-40690 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xmlsec-2.1.4.jar</b></p></summary> <p>Apache XML Security for Java supports XML-Signature Syntax and Processing, W3C Recommendation 12 February 2002, and XML Encryption Syntax and Processing, W3C Recommendation 10 December 2002. As of version 1.4, the library supports the standard Java API JSR-105: XML Digital Signature APIs.</p> <p>Library home page: <a href="https://santuario.apache.org/">https://santuario.apache.org/</a></p> <p>Path to dependency file: /pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/santuario/xmlsec/2.1.4/xmlsec-2.1.4.jar,/target/owaspSecurityShepherd/WEB-INF/lib/xmlsec-2.1.4.jar</p> <p> Dependency Hierarchy: - :x: **xmlsec-2.1.4.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/gms-ws-sandbox/Java-Demo/commit/2f8c3546469435a8572a450a0e6ad10c4fe187f7">2f8c3546469435a8572a450a0e6ad10c4fe187f7</a></p> <p>Found in base branch: <b>dev</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> All versions of Apache Santuario - XML Security for Java prior to 2.2.3 and 2.1.7 are vulnerable to an issue where the "secureValidation" property is not passed correctly when creating a KeyInfo from a KeyInfoReference element. This allows an attacker to abuse an XPath Transform to extract any local .xml files in a RetrievalMethod element. 
<p>Publish Date: 2021-09-19 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-40690>CVE-2021-40690</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-40690">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-40690</a></p> <p>Release Date: 2021-09-19</p> <p>Fix Resolution: 2.1.7</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END -->
non_process
cve high detected in xmlsec jar autoclosed cve high severity vulnerability vulnerable library xmlsec jar apache xml security for java supports xml signature syntax and processing recommendation february and xml encryption syntax and processing recommendation december as of version the library supports the standard java api jsr xml digital signature apis library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository org apache santuario xmlsec xmlsec jar target owaspsecurityshepherd web inf lib xmlsec jar dependency hierarchy x xmlsec jar vulnerable library found in head commit a href found in base branch dev vulnerability details all versions of apache santuario xml security for java prior to and are vulnerable to an issue where the securevalidation property is not passed correctly when creating a keyinfo from a keyinforeference element this allows an attacker to abuse an xpath transform to extract any local xml files in a retrievalmethod element publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution check this box to open an automated fix pr
0
380,485
26,421,308,687
IssuesEvent
2023-01-13 20:48:33
ufelgen/capstone-project
https://api.github.com/repos/ufelgen/capstone-project
closed
Checklist for QA Database Connection
documentation
Following [this US](https://github.com/ufelgen/capstone-project/issues/35) I replaced the local storage by a database connection. Therefore I had to change all the `create` and `update` functions, so they need to be quality tested again. As these are quite a lot of functions, I made a checklist of functions for QA. ### main page <img width="334" alt="Bildschirmfoto 2023-01-13 um 12 33 46" src="https://user-images.githubusercontent.com/115288914/212311078-a7c8bd9a-ac18-4eca-b3b8-bb5d2292c281.png"> - [ ] add a new word (form at the top of the page) - you can put this in a new category so you can - [ ] edit the category name (... button opens the popup menu, see image above) and - [ ] delete this category ### category pages <img width="331" alt="Bildschirmfoto 2023-01-13 um 12 36 29" src="https://user-images.githubusercontent.com/115288914/212311552-1b577c84-8be7-447a-9b5f-7dfd4e3201fa.png"> open the ... popup menu (see above) to - [ ] edit the word - [ ] add another translation - [ ] edit the word again (now with two translations) - [ ] add notes to a word - [ ] edit those notes - [ ] add a declension to a noun - [ ] edit this declension - [ ] delete the word ### conjugations conjugations are carried out automatically for a word in Slovenian upon providing some info in a form. to test this, you can - [ ] add a new word "kupiti" - "to buy" to the "verbs" category via the form on the main page, then - [ ] route to the "verbs" page, open the popup menu for the newly added "kupiti", select "+ conjugation" and write in the three form fields "kupi / kupil / kupila", and click "add" the conjugation page should open automatically, where you can - [ ] edit the present - [ ] edit the past - [ ] edit the future Thank you 🙏
1.0
Checklist for QA Database Connection - Following [this US](https://github.com/ufelgen/capstone-project/issues/35) I replaced the local storage by a database connection. Therefore I had to change all the `create` and `update` functions, so they need to be quality tested again. As these are quite a lot of functions, I made a checklist of functions for QA. ### main page <img width="334" alt="Bildschirmfoto 2023-01-13 um 12 33 46" src="https://user-images.githubusercontent.com/115288914/212311078-a7c8bd9a-ac18-4eca-b3b8-bb5d2292c281.png"> - [ ] add a new word (form at the top of the page) - you can put this in a new category so you can - [ ] edit the category name (... button opens the popup menu, see image above) and - [ ] delete this category ### category pages <img width="331" alt="Bildschirmfoto 2023-01-13 um 12 36 29" src="https://user-images.githubusercontent.com/115288914/212311552-1b577c84-8be7-447a-9b5f-7dfd4e3201fa.png"> open the ... popup menu (see above) to - [ ] edit the word - [ ] add another translation - [ ] edit the word again (now with two translations) - [ ] add notes to a word - [ ] edit those notes - [ ] add a declension to a noun - [ ] edit this declension - [ ] delete the word ### conjugations conjugations are carried out automatically for a word in Slovenian upon providing some info in a form. to test this, you can - [ ] add a new word "kupiti" - "to buy" to the "verbs" category via the form on the main page, then - [ ] route to the "verbs" page, open the popup menu for the newly added "kupiti", select "+ conjugation" and write in the three form fields "kupi / kupil / kupila", and click "add" the conjugation page should open automatically, where you can - [ ] edit the present - [ ] edit the past - [ ] edit the future Thank you 🙏
non_process
checklist for qa database connection following i replaced the local storage by a database connection therefore i had to change all the create and update functions so they need to be quality tested again as these are quite a lot of functions i made a checklist of functions for qa main page img width alt bildschirmfoto um src add a new word form at the top of the page you can put this in a new category so you can edit the category name button opens the popup menu see image above and delete this category category pages img width alt bildschirmfoto um src open the popup menu see above to edit the word add another translation edit the word again now with two translations add notes to a word edit those notes add a declension to a noun edit this declension delete the word conjugations conjugations are carried out automatically for a word in slovenian upon providing some info in a form to test this you can add a new word kupiti to buy to the verbs category via the form on the main page then route to the verbs page open the popup menu for the newly added kupiti select conjugation and write in the three form fields kupi kupil kupila and click add the conjugation page should open automatically where you can edit the present edit the past edit the future thank you 🙏
0
2,336
5,142,799,307
IssuesEvent
2017-01-12 14:23:45
Graylog2/graylog2-server
https://api.github.com/repos/Graylog2/graylog2-server
closed
Slow syslog processing speed with force_rdns enabled
feature improvement performance processing won't fix
There seems to be a problem when the `force_rdns` option for Syslog inputs is enabled. Users report high CPU usage and reduced message throughput. See: https://groups.google.com/forum/?hl=en#!topic/graylog2/bW2glCdBIUI
1.0
Slow syslog processing speed with force_rdns enabled - There seems to be a problem when the `force_rdns` option for Syslog inputs is enabled. Users report high CPU usage and reduced message throughput. See: https://groups.google.com/forum/?hl=en#!topic/graylog2/bW2glCdBIUI
process
slow syslog processing speed with force rdns enabled there seems to be a problem when the force rdns option for syslog inputs is enabled users report high cpu usage and reduced message throughput see
1
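The slowdown reported in this Graylog record comes from a blocking reverse-DNS lookup per incoming syslog message when `force_rdns` is on. As a rough illustration of the usual mitigation — caching lookups per source address — here is a small sketch; it is not Graylog's actual code (Graylog is written in Java), and the cache size and fallback behaviour are made-up assumptions.

```python
import socket
from functools import lru_cache

@lru_cache(maxsize=10_000)
def reverse_dns(ip):
    """Resolve an address at most once; repeated senders hit the cache."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except OSError:
        return ip  # on lookup failure, fall back to the raw address

# Messages from the same source then cost one lookup total, not one each.
```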
16,828
22,061,610,807
IssuesEvent
2022-05-30 18:43:49
bitPogo/kmock
https://api.github.com/repos/bitPogo/kmock
closed
Prohibit deceiving kspy for generics
kmock-processor
Currently `kspy` is generated for kspy even if they are not enabled. While using them it will simply fail. Acceptance criteria: * do not generate the deceiving factories
1.0
Prohibit deceiving kspy for generics - Currently `kspy` is generated for kspy even if they are not enabled. While using them it will simply fail. Acceptance criteria: * do not generate the deceiving factories
process
prohibit deceiving kspy for generics currently kspy is generated for kspy even if they are not enabled while using them it will simply fail acceptance criteria do not generate the deceiving factories
1
3,853
2,708,302,946
IssuesEvent
2015-04-08 07:56:45
AprilFool/AprilFool
https://api.github.com/repos/AprilFool/AprilFool
opened
Which web framework should we use?
design
Roughly the options below: 1. beego: full-feature, MVC, actively used in China? 2. revel: almost full-feature 3. martini 4. gorilla: web tool kit 5. gocraft/web: web tool kit 6. net/http: golang built-in Using beego or revel, or combining gorilla+net/http or gocraft+net/http, seems to be the usual approach. Since we aren't building an unusual web service, a full-feature framework seems better for fast development. beego is used almost exclusively in China, so going with revel seems best. Since golang was developed by Google, the built-in net/http is said to be very powerful; whichever framework we use, we need to study net/http. Articles worth referencing http://kasw.blogspot.kr/2014/10/pythongolang-web-framework.html http://www.reddit.com/r/golang/comments/1yh6gm/new_to_go_trying_to_select_web_framework/ https://corner.squareup.com/2014/05/evaluating-go-frameworks.html http://www.quora.com/What-are-the-best-web-frameworks-for-the-Go-language-What-are-their-pros-and-cons http://codecondo.com/4-minimal-web-frameworks-go-golang/ http://thenewstack.io/a-survey-of-5-go-web-frameworks/
1.0
Which web framework should we use? - Roughly the options below: 1. beego: full-feature, MVC, actively used in China? 2. revel: almost full-feature 3. martini 4. gorilla: web tool kit 5. gocraft/web: web tool kit 6. net/http: golang built-in Using beego or revel, or combining gorilla+net/http or gocraft+net/http, seems to be the usual approach. Since we aren't building an unusual web service, a full-feature framework seems better for fast development. beego is used almost exclusively in China, so going with revel seems best. Since golang was developed by Google, the built-in net/http is said to be very powerful; whichever framework we use, we need to study net/http. Articles worth referencing http://kasw.blogspot.kr/2014/10/pythongolang-web-framework.html http://www.reddit.com/r/golang/comments/1yh6gm/new_to_go_trying_to_select_web_framework/ https://corner.squareup.com/2014/05/evaluating-go-frameworks.html http://www.quora.com/What-are-the-best-web-frameworks-for-the-Go-language-What-are-their-pros-and-cons http://codecondo.com/4-minimal-web-frameworks-go-golang/ http://thenewstack.io/a-survey-of-5-go-web-frameworks/
non_process
which web framework should we use roughly the options below beego full feature mvc actively used in china revel almost full feature martini gorilla web tool kit gocraft web web tool kit net http golang built in using beego or revel or combining gorilla net http or gocraft net http seems to be the usual approach since we aren t building an unusual web service a full feature framework seems better for fast development beego is used almost exclusively in china so going with revel seems best since golang was developed by google the built in net http is said to be very powerful whichever framework we use we need to study net http articles worth referencing
0
11,128
13,957,688,116
IssuesEvent
2020-10-24 08:09:26
alexanderkotsev/geoportal
https://api.github.com/repos/alexanderkotsev/geoportal
opened
BE: download link wfs
BE - Belgium Geoportal Harvesting process
Dear Some datasets that are downloadable through a WFS service get a downloadlink in the form of getfeature/typename. Other datasets get a downloadlink in the form of getfeature/storedqueryid. How does the harvester or the resource linkage checker decide what kind of downloadlink to create? For our harmonized data/services the storedquery option is ok. For our older services serving non-harmonized data we would prefer to have the first option. Bart
1.0
BE: download link wfs - Dear Some datasets that are downloadable through a WFS service get a downloadlink in the form of getfeature/typename. Other datasets get a downloadlink in the form of getfeature/storedqueryid. How does the harvester or the resource linkage checker decide what kind of downloadlink to create? For our harmonized data/services the storedquery option is ok. For our older services serving non-harmonized data we would prefer to have the first option. Bart
process
be download link wfs dear some datasets that are downloadable through a wfs service get a downloadlink in the form of getfeature typename other datasets get a downloadlink in the form of getfeature storedqueryid how does the harvester or the resource linkage checker decide what kind of downloadlink to create for our harmonized data services the storedquery option is ok for our older services serving non harmonized data we would prefer to have the first option bart
1
19,360
25,491,460,837
IssuesEvent
2022-11-27 05:16:31
hsmusic/hsmusic-wiki
https://api.github.com/repos/hsmusic/hsmusic-wiki
closed
"Artists - by Contributions" should divide between music and artworks
scope: data processing scope: page generation - content thing: listings
This would make it match with "Artists - by Latest Contribution". Thanks for the suggestion, Niklink!
1.0
"Artists - by Contributions" should divide between music and artworks - This would make it match with "Artists - by Latest Contribution". Thanks for the suggestion, Niklink!
process
artists by contributions should divide between music and artworks this would make it match with artists by latest contribution thanks for the suggestion niklink
1
32,755
6,917,539,373
IssuesEvent
2017-11-29 08:57:04
jOOQ/jOOQ
https://api.github.com/repos/jOOQ/jOOQ
closed
Cannot use ThreadLocalTransactionalCallable with TransactionProvider of type class org.jooq.impl.DefaultTransactionProvider
C: Functionality P: Medium R: Duplicate T: Defect
### Expected behavior and actual behavior: ### Steps to reproduce the problem: I built the DefaultDSLContext directly from `DataSource`, but when I use dslContext.TransactionalResult(...), it gives me this. ![image](https://user-images.githubusercontent.com/501740/33218492-1d28fb04-d178-11e7-93f3-84c1d770fff0.png) ### Versions: - jOOQ: 3.10.1 - Java: 1.8 - Database (include vendor): MySQL - JDBC Driver (include name if unofficial driver):
1.0
Cannot use ThreadLocalTransactionalCallable with TransactionProvider of type class org.jooq.impl.DefaultTransactionProvider - ### Expected behavior and actual behavior: ### Steps to reproduce the problem: I built the DefaultDSLContext directly from `DataSource`, but when I use dslContext.TransactionalResult(...), it gives me this. ![image](https://user-images.githubusercontent.com/501740/33218492-1d28fb04-d178-11e7-93f3-84c1d770fff0.png) ### Versions: - jOOQ: 3.10.1 - Java: 1.8 - Database (include vendor): MySQL - JDBC Driver (include name if unofficial driver):
non_process
cannot use threadlocaltransactionalcallable with transactionprovider of type class org jooq impl defaulttransactionprovider expected behavior and actual behavior steps to reproduce the problem i built the defaultdslcontext directly from datasource but when i use dslcontext transactionalresult it gives me this versions jooq java database include vendor mysql jdbc driver include name if unofficial driver
0
3,411
2,610,062,162
IssuesEvent
2015-02-26 18:18:17
chrsmith/jsjsj122
https://api.github.com/repos/chrsmith/jsjsj122
opened
路桥不孕不育检查项目及费用
auto-migrated Priority-Medium Type-Defect
``` 路桥不孕不育检查项目及费用【台州五洲生殖医院】24小时健 康咨询热线:0576-88066933-(扣扣800080609)-(微信号tzwzszyy)医院地址: 台州市椒江区枫南路229号(枫南大转盘旁)乘车线路:乘坐104� ��108、118、198及椒江一金清公交车直达枫南小区,乘坐107、105 、109、112、901、 902公交车到星星广场下车,步行即可到院。 诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,�� �精,无精。包皮包茎,精索静脉曲张,淋病等。 台州五洲生殖医院是台州最大的男科医院,权威专家在线免�� �咨询,拥有专业完善的男科检查治疗设备,严格按照国家标� ��收费。尖端医疗设备,与世界同步。权威专家,成就专业典 范。人性化服务,一切以患者为中心。 看男科就选台州五洲生殖医院,专业男科为男人。 ``` ----- Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 7:33
1.0
Luqiao infertility examination items and fees - ``` Luqiao infertility examination items and fees [Taizhou Wuzhou Reproductive Hospital] 24-hour health consultation hotline: 0576-88066933 (QQ 800080609) (WeChat tzwzszyy). Hospital address: 229 Fengnan Road, Jiaojiang District, Taizhou (next to the Fengnan roundabout). Bus routes: take bus 104, 108, 118 or 198, or the Jiaojiang to Jinqing bus, directly to the Fengnan residential area, or take bus 107, 105, 109, 112, 901 or 902 to Xingxing Square and walk to the hospital. Treatment items: impotence, premature ejaculation, prostatitis, prostatic hyperplasia, balanitis, seminal disorders, azoospermia, phimosis and redundant prepuce, varicocele, gonorrhea, etc. Taizhou Wuzhou Reproductive Hospital is the largest men's health hospital in Taizhou, offering free online consultation with authoritative experts, complete professional examination and treatment equipment for men's health, and fees charged strictly according to national standards. Cutting-edge medical equipment in step with the world. Authoritative experts setting a professional example. People-oriented service with the patient at the center. For men's health care, choose Taizhou Wuzhou Reproductive Hospital: professional men's care for men. ``` ----- Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 7:33
non_process
luqiao infertility examination items and fees luqiao infertility examination items and fees taizhou wuzhou reproductive hospital health consultation hotline wechat tzwzszyy hospital address next to the fengnan roundabout bus routes walk to the hospital treatment items impotence premature ejaculation prostatitis prostatic hyperplasia balanitis seminal disorders azoospermia phimosis and redundant prepuce varicocele gonorrhea taizhou wuzhou reproductive hospital is the largest men s health hospital in taizhou offering free online consultation with authoritative experts complete professional examination and treatment equipment for men s health fees charged strictly according to national standards cutting edge medical equipment in step with the world authoritative experts setting a professional example people oriented service with the patient at the center for men s health care choose taizhou wuzhou reproductive hospital professional men s care for men original issue reported on code google com by poweragr gmail com on may at
0
12,751
15,109,749,852
IssuesEvent
2021-02-08 18:15:58
MicrosoftDocs/azure-devops-docs
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
closed
multi-job output variable explanation misses that you need to depend on the source job
Pri2 devops-cicd-process/tech devops/prod doc-enhancement help wanted ready-to-doc
There is something that is not documented here. Multi-job output variable can only be used if the Job depends on the Source Job. So the example provided in the documentation works as B depends on A. But if you create a C depends on B and B depends on A you will see that the output variables cannot be retreived by dependencies. I would think the same occurs for stagedependencies. Bye Robert # Test Code (B will echo Variable from A and C will not echo Variable from A): jobs: # Set an output variable from job A - job: A pool: vmImage: 'vs2017-win2016' steps: - powershell: echo "##vso[task.setvariable variable=myOutputVar;isOutput=true]this is the value" name: setvarStep - script: echo $(setvarStep.myOutputVar) name: echovar # Map the variable into job B - job: B dependsOn: A pool: vmImage: 'ubuntu-18.04' variables: myVarFromJobA: $[ dependencies.A.outputs['setvarStep.myOutputVar'] ] # map in the variable # remember, expressions require single quotes steps: - script: echo $(myVarFromJobA) name: echovar # Map the variable into job C - job: C dependsOn: B pool: vmImage: 'ubuntu-18.04' variables: myVarFromJobA: $[ dependencies.A.outputs['setvarStep.myOutputVar'] ] # map in the variable # remember, expressions require single quotes steps: - script: echo $(myVarFromJobA) name: echovar --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a * Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a * Content: [Define variables - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#set-a-multi-job-output-variable) * Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/variables.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
1.0
multi-job output variable explaination misses that you need to depend on the source job - There is something that is not documented here. Multi-job output variable can only be used if the Job depends on the Source Job. So the example provided in the documentation works as B depends on A. But if you create a C depends on B and B depends on A you will see that the output variables cannot be retreived by dependencies. I would think the same occurs for stagedependencies. Bye Robert # Test Code (B will echo Variable from A and C will not echo Variable from A): jobs: # Set an output variable from job A - job: A pool: vmImage: 'vs2017-win2016' steps: - powershell: echo "##vso[task.setvariable variable=myOutputVar;isOutput=true]this is the value" name: setvarStep - script: echo $(setvarStep.myOutputVar) name: echovar # Map the variable into job B - job: B dependsOn: A pool: vmImage: 'ubuntu-18.04' variables: myVarFromJobA: $[ dependencies.A.outputs['setvarStep.myOutputVar'] ] # map in the variable # remember, expressions require single quotes steps: - script: echo $(myVarFromJobA) name: echovar # Map the variable into job C - job: C dependsOn: B pool: vmImage: 'ubuntu-18.04' variables: myVarFromJobA: $[ dependencies.A.outputs['setvarStep.myOutputVar'] ] # map in the variable # remember, expressions require single quotes steps: - script: echo $(myVarFromJobA) name: echovar --- #### Document Details ⚠ *Do not edit this section. 
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a * Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a * Content: [Define variables - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#set-a-multi-job-output-variable) * Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/variables.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
process
multi job output variable explaination misses that you need to depend on the source job there is something that is not documented here multi job output variable can only be used if the job depends on the source job so the example provided in the documentation works as b depends on a but if you create a c depends on b and b depends on a you will see that the output variables cannot be retreived by dependencies i would think the same occurs for stagedependencies bye robert test code b will echo variable from a and c will not echo variable from a jobs set an output variable from job a job a pool vmimage steps powershell echo vso this is the value name setvarstep script echo setvarstep myoutputvar name echovar map the variable into job b job b dependson a pool vmimage ubuntu variables myvarfromjoba map in the variable remember expressions require single quotes steps script echo myvarfromjoba name echovar map the variable into job c job c dependson b pool vmimage ubuntu variables myvarfromjoba map in the variable remember expressions require single quotes steps script echo myvarfromjoba name echovar document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id bcdb content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
1
54,500
7,889,960,650
IssuesEvent
2018-06-28 07:10:00
symfony/symfony-docs
https://api.github.com/repos/symfony/symfony-docs
closed
Document that console.command tag doesn’t support command aliases
Console Missing Documentation hasPR
I just noticed that using the `console.command` tag command aliases set via `setAliases` in `configure` are not taken into account. This should be documented on the `console/commands_as_services.rst` / http://symfony.com/doc/current/console/commands_as_services.html page. Or do you think that’s a bug of Symfony? If so [Symfony\Component\Console\DependencyInjection\AddConsoleCommandPass](https://github.com/symfony/symfony/blob/master/src/Symfony/Component/Console/DependencyInjection/AddConsoleCommandPass.php) needs to call `getAliases` of each command instance. cc @bcremer @chalasr @nicolas-grekas
1.0
Document that console.command tag doesn’t support command aliases - I just noticed that using the `console.command` tag command aliases set via `setAliases` in `configure` are not taken into account. This should be documented on the `console/commands_as_services.rst` / http://symfony.com/doc/current/console/commands_as_services.html page. Or do you think that’s a bug of Symfony? If so [Symfony\Component\Console\DependencyInjection\AddConsoleCommandPass](https://github.com/symfony/symfony/blob/master/src/Symfony/Component/Console/DependencyInjection/AddConsoleCommandPass.php) needs to call `getAliases` of each command instance. cc @bcremer @chalasr @nicolas-grekas
non_process
document that console command tag doesn’t support command aliases i just noticed that using the console command tag command aliases set via setaliases in configure are not taken into account this should be documented on the console commands as services rst page or do you think that’s a bug of symfony if so needs to call getaliases of each command instance cc bcremer chalasr nicolas grekas
0
5,606
8,468,548,158
IssuesEvent
2018-10-23 20:04:12
GoogleCloudPlatform/golang-samples
https://api.github.com/repos/GoogleCloudPlatform/golang-samples
opened
testing: don't set github status for child jobs
type: process
Kokoro's github status should only be set for the parent job (i.e., whether all child jobs succeeded). Currently, each child job sets a github status, and then after the last one completes, the parent job sets the status again. This shows the status as green in the following situation: Job 1: green Job 2: green Job 3: red Parent job: red Until Job 3 finishes, the status is set to green.
1.0
testing: don't set github status for child jobs - Kokoro's github status should only be set for the parent job (i.e., whether all child jobs succeeded). Currently, each child job sets a github status, and then after the last one completes, the parent job sets the status again. This shows the status as green in the following situation: Job 1: green Job 2: green Job 3: red Parent job: red Until Job 3 finishes, the status is set to green.
process
testing don t set github status for child jobs kokoro s github status should only be set for the parent job i e whether all child jobs succeeded currently each child job sets a github status and then after the last one completes the parent job sets the status again this shows the status as green in the following situation job green job green job red parent job red until job finishes the status is set to green
1
17,987
24,008,840,715
IssuesEvent
2022-09-14 16:55:03
OpenDataScotland/the_od_bods
https://api.github.com/repos/OpenDataScotland/the_od_bods
closed
Fix: CKAN PageURL to use /dataset instead of /package
bug data processing back end
Thanks to Will and Dom for picking this up on slack ([extracted 14 Sep](https://opendatascotland.slack.com/archives/C02HEHDL8AY/p1662916320398039)): > Will: This looks great. One observation though is that all the urls to our CKAN resource are incorrect and don't work currently. The urls should be https://data.spatialhub.scot/dataset/........ and not https://data.spatialhub.scotpackage/.......
1.0
Fix: CKAN PageURL to use /dataset instead of /package - Thanks to Will and Dom for picking this up on slack ([extracted 14 Sep](https://opendatascotland.slack.com/archives/C02HEHDL8AY/p1662916320398039)): > Will: This looks great. One observation though is that all the urls to our CKAN resource are incorrect and don't work currently. The urls should be https://data.spatialhub.scot/dataset/........ and not https://data.spatialhub.scotpackage/.......
process
fix ckan pageurl to use dataset instead of package thanks to will and dom for picking this up on slack will this looks great one observation though is that all the urls to our ckan resource are incorrect and don t work currently the urls should be and not
1
5,054
7,860,887,261
IssuesEvent
2018-06-21 21:34:08
StrikeNP/trac_test
https://api.github.com/repos/StrikeNP/trac_test
closed
Add a function to convert.m to changed a pressure profile into altitude (Trac #4)
Migrated from Trac enhancement fasching@uwm.edu post_processing
Add a function to convert.m to changed a pressure profile into altitude. This would be useful for cases that do not specify things in terms of altitude. Attachments: [plot_explicit_ta_configs.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_explicit_ta_configs.maff) [plot_new_pdf_config_1_plot_2.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_config_1_plot_2.maff) [plot_combo_pdf_run_3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_combo_pdf_run_3.maff) [plot_input_fields_rtp3_thlp3_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_input_fields_rtp3_thlp3_1.maff) [plot_new_pdf_20180522_test_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_20180522_test_1.maff) [plot_attempts_8_10.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempts_8_10.maff) [plot_attempt_8_only.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempt_8_only.maff) [plot_beta_1p3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3.maff) [plot_beta_1p3_all.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3_all.maff) Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/4 ```json { "status": "closed", "changetime": "2009-05-16T10:07:24", "description": "Add a function to convert.m to changed a pressure profile into altitude. This would be useful for cases that do not specify things in terms of altitude.", "reporter": "fasching@uwm.edu", "cc": "", "resolution": "Verified by V. 
Larson", "_ts": "1242468444000000", "component": "post_processing", "summary": "Add a function to convert.m to changed a pressure profile into altitude", "priority": "minor", "keywords": "conversion, MATLAB", "time": "2009-05-01T21:20:08", "milestone": "", "owner": "fasching@uwm.edu", "type": "enhancement" } ```
1.0
Add a function to convert.m to changed a pressure profile into altitude (Trac #4) - Add a function to convert.m to changed a pressure profile into altitude. This would be useful for cases that do not specify things in terms of altitude. Attachments: [plot_explicit_ta_configs.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_explicit_ta_configs.maff) [plot_new_pdf_config_1_plot_2.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_config_1_plot_2.maff) [plot_combo_pdf_run_3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_combo_pdf_run_3.maff) [plot_input_fields_rtp3_thlp3_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_input_fields_rtp3_thlp3_1.maff) [plot_new_pdf_20180522_test_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_20180522_test_1.maff) [plot_attempts_8_10.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempts_8_10.maff) [plot_attempt_8_only.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempt_8_only.maff) [plot_beta_1p3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3.maff) [plot_beta_1p3_all.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3_all.maff) Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/4 ```json { "status": "closed", "changetime": "2009-05-16T10:07:24", "description": "Add a function to convert.m to changed a pressure profile into altitude. This would be useful for cases that do not specify things in terms of altitude.", "reporter": "fasching@uwm.edu", "cc": "", "resolution": "Verified by V. 
Larson", "_ts": "1242468444000000", "component": "post_processing", "summary": "Add a function to convert.m to changed a pressure profile into altitude", "priority": "minor", "keywords": "conversion, MATLAB", "time": "2009-05-01T21:20:08", "milestone": "", "owner": "fasching@uwm.edu", "type": "enhancement" } ```
process
add a function to convert m to changed a pressure profile into altitude trac add a function to convert m to changed a pressure profile into altitude this would be useful for cases that do not specify things in terms of altitude attachments migrated from json status closed changetime description add a function to convert m to changed a pressure profile into altitude this would be useful for cases that do not specify things in terms of altitude reporter fasching uwm edu cc resolution verified by v larson ts component post processing summary add a function to convert m to changed a pressure profile into altitude priority minor keywords conversion matlab time milestone owner fasching uwm edu type enhancement
1
16,731
21,893,071,230
IssuesEvent
2022-05-20 05:20:25
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
Algorithm output node is placed half outside model canvas
Feedback Processing Regression Bug
### What is the bug or the crash? When adding a "final" Model Output to an algorithm, a new green node will be placed on the model canvas. This node gets placed in the top left corner, half outside the visible model canvas. I use "asd" as name here: ![image](https://user-images.githubusercontent.com/7661092/162395188-6024b8b6-44c0-4d1e-82bd-98bf660ef532.png) # master (dd6e83b2fa6) ![image](https://user-images.githubusercontent.com/7661092/162394808-4aec1aef-68e9-48ed-a892-e3bac4aec3b0.png) # In 3.16 and 3.24 this is fine ![image](https://user-images.githubusercontent.com/7661092/162394843-27d2160f-0059-48ec-a8ce-cbcf6d9751d2.png) ### Steps to reproduce the issue 1. Add a Model Output to a model algorithm 2. Look at where the resulting node is placed ### Versions <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0//EN" "http://www.w3.org/TR/REC-html40/strict.dtd"> <html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8" /><style type="text/css"> p, li { white-space: pre-wrap; } </style></head><body> QGIS version | 3.24.1-Tisler | QGIS code branch | Release 3.24 -- | -- | -- | -- Qt version | 5.15.3 Python version | 3.10.4 Compiled against GDAL/OGR | 3.4.0 | Running against GDAL/OGR | 3.5.0dev-673993b9f3-dirty PROJ version | 8.2.0 EPSG Registry database version | v10.038 (2021-10-21) GEOS version | 3.9.1-CAPI-1.14.2 SQLite version | 3.38.2 PDAL version | 2.4.0 PostgreSQL client version | unknown SpatiaLite version | 5.0.1 QWT version | 6.2.0 QScintilla2 version | 2.13.2 OS version | Arch Linux </body></html> ### Supported QGIS version - [X] I'm running a supported QGIS version according to the roadmap. ### New profile - [ ] I tried with a new QGIS profile ### Additional context I compiled master myself. 3.16 is from conda-forge. 3.24 is https://archlinux.org/packages/community/x86_64/qgis/. The same profile was used for all of them. 
Version information for the 3.24 one: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0//EN" "http://www.w3.org/TR/REC-html40/strict.dtd"> <html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8" /><style type="text/css"> p, li { white-space: pre-wrap; } </style></head><body> QGIS version | 3.24.1-Tisler | QGIS code branch | Release 3.24 -- | -- | -- | -- Qt version | 5.15.3 Python version | 3.10.4 Compiled against GDAL/OGR | 3.4.0 | Running against GDAL/OGR | 3.5.0dev-673993b9f3-dirty PROJ version | 8.2.0 EPSG Registry database version | v10.038 (2021-10-21) GEOS version | 3.9.1-CAPI-1.14.2 SQLite version | 3.38.2 PDAL version | 2.4.0 PostgreSQL client version | unknown SpatiaLite version | 5.0.1 QWT version | 6.2.0 QScintilla2 version | 2.13.2 OS version | Arch Linux </body></html>
1.0
Algorithm output node is placed half outsize model canvas - ### What is the bug or the crash? When adding a "final" Model Output to an algorithm, a new green node will be placed on the model canvas. This node gets placed in the top left corner, half outside the visible model canvas. I use "asd" as name here: ![image](https://user-images.githubusercontent.com/7661092/162395188-6024b8b6-44c0-4d1e-82bd-98bf660ef532.png) # master (dd6e83b2fa6) ![image](https://user-images.githubusercontent.com/7661092/162394808-4aec1aef-68e9-48ed-a892-e3bac4aec3b0.png) # In 3.16 and 3.24 this is fine ![image](https://user-images.githubusercontent.com/7661092/162394843-27d2160f-0059-48ec-a8ce-cbcf6d9751d2.png) ### Steps to reproduce the issue 1. Add a Model Output to a model algorithm 2. Look at where the resulting node is placed ### Versions <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0//EN" "http://www.w3.org/TR/REC-html40/strict.dtd"> <html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8" /><style type="text/css"> p, li { white-space: pre-wrap; } </style></head><body> QGIS version | 3.24.1-Tisler | QGIS code branch | Release 3.24 -- | -- | -- | -- Qt version | 5.15.3 Python version | 3.10.4 Compiled against GDAL/OGR | 3.4.0 | Running against GDAL/OGR | 3.5.0dev-673993b9f3-dirty PROJ version | 8.2.0 EPSG Registry database version | v10.038 (2021-10-21) GEOS version | 3.9.1-CAPI-1.14.2 SQLite version | 3.38.2 PDAL version | 2.4.0 PostgreSQL client version | unknown SpatiaLite version | 5.0.1 QWT version | 6.2.0 QScintilla2 version | 2.13.2 OS version | Arch Linux </body></html> ### Supported QGIS version - [X] I'm running a supported QGIS version according to the roadmap. ### New profile - [ ] I tried with a new QGIS profile ### Additional context I compiled master myself. 3.16 is from conda-forge. 3.24 is https://archlinux.org/packages/community/x86_64/qgis/. The same profile was used for all of them. 
Version information for the 3.24 one: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0//EN" "http://www.w3.org/TR/REC-html40/strict.dtd"> <html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8" /><style type="text/css"> p, li { white-space: pre-wrap; } </style></head><body> QGIS version | 3.24.1-Tisler | QGIS code branch | Release 3.24 -- | -- | -- | -- Qt version | 5.15.3 Python version | 3.10.4 Compiled against GDAL/OGR | 3.4.0 | Running against GDAL/OGR | 3.5.0dev-673993b9f3-dirty PROJ version | 8.2.0 EPSG Registry database version | v10.038 (2021-10-21) GEOS version | 3.9.1-CAPI-1.14.2 SQLite version | 3.38.2 PDAL version | 2.4.0 PostgreSQL client version | unknown SpatiaLite version | 5.0.1 QWT version | 6.2.0 QScintilla2 version | 2.13.2 OS version | Arch Linux </body></html>
process
algorithm output node is placed half outsize model canvas what is the bug or the crash when adding a final model output to an algorithm a new green node will be placed on the model canvas this node gets placed in the top left corner half outside the visible model canvas i use asd as name here master in and this is fine steps to reproduce the issue add a model output to a model algorithm look at where the resulting node is placed versions doctype html public dtd html en p li white space pre wrap qgis version tisler qgis code branch release qt version python version compiled against gdal ogr running against gdal ogr dirty proj version epsg registry database version geos version capi sqlite version pdal version postgresql client version unknown spatialite version qwt version version os version arch linux supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context i compiled master myself is from conda forge is the same profile was used for all of them version information for the one doctype html public dtd html en p li white space pre wrap qgis version tisler qgis code branch release qt version python version compiled against gdal ogr running against gdal ogr dirty proj version epsg registry database version geos version capi sqlite version pdal version postgresql client version unknown spatialite version qwt version version os version arch linux
1
310,110
9,485,876,402
IssuesEvent
2019-04-22 12:03:19
myin142/HealifeFit
https://api.github.com/repos/myin142/HealifeFit
opened
Search Exercises Page
medium priority
Everyone should be able to search for exercises. Filter by: - name - muscles - difficulty - equipment
1.0
Search Exercises Page - Everyone should be able to search for exercises. Filter by: - name - muscles - difficulty - equipment
non_process
search exercises page everyone should be able to search for exercises filter by name muscles difficulty equipment
0
153,769
13,526,350,657
IssuesEvent
2020-09-15 14:08:52
algoo/hapic
https://api.github.com/repos/algoo/hapic
opened
Begin user documentation
documentation
Require: https://github.com/algoo/hapic/issues/209 Hapic must have a basic documentation for users. The second step is to write it. Feed the current [basic](https://github.com/algoo/hapic/issues/209) documentation with new. The content of documentation must be: * Global presentation of Hapic (advantage/inconvenient) * How to use it * Which web framework are supported and how add one (keep simple about how to add one) Resources: * https://algoo.tracim.fr/ui/workspaces/30/contents/html-document/5771 * https://github.com/buxx/hapic_workshop
1.0
Begin user documentation - Require: https://github.com/algoo/hapic/issues/209 Hapic must have a basic documentation for users. The second step is to write it. Feed the current [basic](https://github.com/algoo/hapic/issues/209) documentation with new. The content of documentation must be: * Global presentation of Hapic (advantage/inconvenient) * How to use it * Which web framework are supported and how add one (keep simple about how to add one) Resources: * https://algoo.tracim.fr/ui/workspaces/30/contents/html-document/5771 * https://github.com/buxx/hapic_workshop
non_process
begin user documentation require hapic must have a basic documentation for users the second step is to write it feed the current documentation with new the content of documentation must be global presentation of hapic advantage inconvenient how to use it which web framework are supported and how add one keep simple about how to add one resources
0
12,631
15,016,279,165
IssuesEvent
2021-02-01 09:23:15
panther-labs/panther
https://api.github.com/repos/panther-labs/panther
closed
Failed to classify CrowdStrike `aid_master` event
p1 team:data processing
### Describe the bug The CrowdStrike `aid_master` can be sent through FDR and contains info to relate the `aid` (sensor id) field to `ComputerName`. The schema of the event doesn't match our `CrowdStrike.Unknown` schema, so we need a separate parser for it (or update the `CrowdStrike.Unknown` schema). ### Steps to reproduce I have created a CrowdStrike integration, contacted their support to enable the `aid_master` field and saw `failed to classify` in the logs. ### Expected behavior The `aid_master` event ends up in our data lake, either in the `crowdstrike_unknown` table or in a new `crowdstrike_aidmaster` table.
1.0
Failed to classify CrowdStrike `aid_master` event - ### Describe the bug The CrowdStrike `aid_master` can be sent through FDR and contains info to relate the `aid` (sensor id) field to `ComputerName`. The schema of the event doesn't match our `CrowdStrike.Unknown` schema, so we need a separate parser for it (or update the `CrowdStrike.Unknown` schema). ### Steps to reproduce I have created a CrowdStrike integration, contacted their support to enable the `aid_master` field and saw `failed to classify` in the logs. ### Expected behavior The `aid_master` event ends up in our data lake, either in the `crowdstrike_unknown` table or in a new `crowdstrike_aidmaster` table.
process
failed to classify crowdstrike aid master event describe the bug the crowdstrike aid master can be sent through fdr and contains info to relate the aid sensor id field to computername the schema of the event doesn t match our crowdstrike unknown schema so we need a separate parser for it or update the crowdstrike unknown schema steps to reproduce i have created a crowdstrike integration contacted their support to enable the aid master field and saw failed to classify in the logs expected behavior the aid master event ends up in our data lake either in the crowdstrike unknown table or in a new crowdstrike aidmaster table
1
535,712
15,697,012,070
IssuesEvent
2021-03-26 03:27:32
dietterc/SEO-ker
https://api.github.com/repos/dietterc/SEO-ker
closed
[1.4] As a user, I want the game to differentiate me from other players in the game, so I know who is who.
feature 1 high priority user story
Acceptance criteria: I am able to differentiate all the players within a game. Priority: High Estimated length: Short
1.0
[1.4] As a user, I want the game to differentiate me from other players in the game, so I know who is who. - Acceptance criteria: I am able to differentiate all the players within a game. Priority: High Estimated length: Short
non_process
as a user i want the game to differentiate me from other players in the game so i know who is who acceptance criteria i am able to differentiate all the players within a game priority high estimated length short
0
298,687
9,200,670,102
IssuesEvent
2019-03-07 17:37:02
qissue-bot/QGIS
https://api.github.com/repos/qissue-bot/QGIS
closed
Crash when closing attribute table on OS X
Category: Vectors Component: Affected QGIS version Component: Crashes QGIS or corrupts data Component: Easy fix? Component: Operating System Component: Pull Request or Patch supplied Component: Regression? Component: Resolution Priority: Low Project: QGIS Application Status: Closed Tracker: Bug report
--- Author Name: **Gary Sherman** (Gary Sherman) Original Redmine Issue: 250, https://issues.qgis.org/issues/250 Original Assignee: Gary Sherman --- Load a shapefile, open the attribute table, and then click the Close button in the attribute dialog. OS X crashes after getting the SBOD (spinning beachball of death) for a minute or so. Seems to work fine on Linux. ``` Partial crash log: Exception: EXC_BAD_ACCESS (0x0001) Codes: KERN_PROTECTION_FAILURE (0x0002) at 0x000000b8 Thread 0 Crashed: 0 [[QtGui]] 0x012618e4 QWidget::testAttribute_helper(Qt::WidgetAttribute) const + 32 1 [[QtGui]] 0x01268de0 QWidgetPrivate::close_helper(QWidgetPrivate::CloseMode) + 332 2 [[QtGui]] 0x0126d9d4 QWidget::qt_metacall(QMetaObject::Call, int, void**) + 476 3 [[QtGui]] 0x014b5098 QDialog::qt_metacall(QMetaObject::Call, int, void**) + 40 4 libqgis_gui.0.dylib 0x03459898 [[QgsAttributeTableDisplay]]::qt_metacall(QMetaObject::Call, int, void**) + 40 (qgsattributetabledisplay.moc.cpp:76) 5 [[QtCore]] 0x01089d58 QMetaObject::activate(QObject*, int, int, void**) + 1152 6 [[QtGui]] 0x014b0634 QAbstractButton::clicked(bool) + 76 7 [[QtGui]] 0x013b4228 QAbstractButtonPrivate::click() + 164 8 [[QtGui]] 0x013b4450 QAbstractButton::mouseReleaseEvent(QMouseEvent*) + 112 9 [[QtGui]] 0x0126af44 QWidget::event(QEvent*) + 752 10 [[QtGui]] 0x012375c0 QApplicationPrivate::notify_helper(QObject*, QEvent*) + 384 11 [[QtGui]] 0x0123dba0 QApplication::notify(QObject*, QEvent*) + 2064 12 [[QtGui]] 0x01275f9c QApplicationPrivate::globalEventProcessor(OpaqueEventHandlerCallRef*, [[OpaqueEventRef]]*, void*) + 6652 ```
1.0
Crash when closing attribute table on OS X - --- Author Name: **Gary Sherman** (Gary Sherman) Original Redmine Issue: 250, https://issues.qgis.org/issues/250 Original Assignee: Gary Sherman --- Load a shapefile, open the attribute table, and then click the Close button in the attribute dialog. OS X crashes after getting the SBOD (spinning beachball of death) for a minute or so. Seems to work fine on Linux. ``` Partial crash log: Exception: EXC_BAD_ACCESS (0x0001) Codes: KERN_PROTECTION_FAILURE (0x0002) at 0x000000b8 Thread 0 Crashed: 0 [[QtGui]] 0x012618e4 QWidget::testAttribute_helper(Qt::WidgetAttribute) const + 32 1 [[QtGui]] 0x01268de0 QWidgetPrivate::close_helper(QWidgetPrivate::CloseMode) + 332 2 [[QtGui]] 0x0126d9d4 QWidget::qt_metacall(QMetaObject::Call, int, void**) + 476 3 [[QtGui]] 0x014b5098 QDialog::qt_metacall(QMetaObject::Call, int, void**) + 40 4 libqgis_gui.0.dylib 0x03459898 [[QgsAttributeTableDisplay]]::qt_metacall(QMetaObject::Call, int, void**) + 40 (qgsattributetabledisplay.moc.cpp:76) 5 [[QtCore]] 0x01089d58 QMetaObject::activate(QObject*, int, int, void**) + 1152 6 [[QtGui]] 0x014b0634 QAbstractButton::clicked(bool) + 76 7 [[QtGui]] 0x013b4228 QAbstractButtonPrivate::click() + 164 8 [[QtGui]] 0x013b4450 QAbstractButton::mouseReleaseEvent(QMouseEvent*) + 112 9 [[QtGui]] 0x0126af44 QWidget::event(QEvent*) + 752 10 [[QtGui]] 0x012375c0 QApplicationPrivate::notify_helper(QObject*, QEvent*) + 384 11 [[QtGui]] 0x0123dba0 QApplication::notify(QObject*, QEvent*) + 2064 12 [[QtGui]] 0x01275f9c QApplicationPrivate::globalEventProcessor(OpaqueEventHandlerCallRef*, [[OpaqueEventRef]]*, void*) + 6652 ```
non_process
crash when closing attribute table on os x author name gary sherman gary sherman original redmine issue original assignee gary sherman load a shapefile open the attribute table and then click the close button in the attribute dialog os x crashes after getting the sbod spinning beachball of death for a minute or so seems to work fine on linux partial crash log exception exc bad access codes kern protection failure at thread crashed qwidget testattribute helper qt widgetattribute const qwidgetprivate close helper qwidgetprivate closemode qwidget qt metacall qmetaobject call int void qdialog qt metacall qmetaobject call int void libqgis gui dylib qt metacall qmetaobject call int void qgsattributetabledisplay moc cpp qmetaobject activate qobject int int void qabstractbutton clicked bool qabstractbuttonprivate click qabstractbutton mousereleaseevent qmouseevent qwidget event qevent qapplicationprivate notify helper qobject qevent qapplication notify qobject qevent qapplicationprivate globaleventprocessor opaqueeventhandlercallref void
0
13,547
16,089,670,627
IssuesEvent
2021-04-26 15:14:42
MicrosoftDocs/azure-devops-docs
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
closed
How can I setup stage condition in the devops release pipeline UI portal?
Pri2 cba devops-cicd-process/tech devops/prod support-request
How can I setup stage condition in the devops release pipeline UI portal? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: d322215c-8025-4f21-0700-7dfa7dc5c46e * Version Independent ID: 141fcdbb-8394-525b-bb29-eff9a693a9c4 * Content: [Stages in Azure Pipelines - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/stages?view=azure-devops&tabs=yaml) * Content Source: [docs/pipelines/process/stages.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/stages.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
1.0
How can I setup stage condition in the devops release pipeline UI portal? - How can I setup stage condition in the devops release pipeline UI portal? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: d322215c-8025-4f21-0700-7dfa7dc5c46e * Version Independent ID: 141fcdbb-8394-525b-bb29-eff9a693a9c4 * Content: [Stages in Azure Pipelines - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/stages?view=azure-devops&tabs=yaml) * Content Source: [docs/pipelines/process/stages.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/stages.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
process
how can i setup stage condition in the devops release pipeline ui portal how can i setup stage condition in the devops release pipeline ui portal document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
1
343,925
30,700,493,269
IssuesEvent
2023-07-26 22:42:04
rancher-sandbox/rancher-desktop
https://api.github.com/repos/rancher-sandbox/rancher-desktop
closed
`yarn test:e2e e2e/FILE ...` is broken on Windows
kind/bug priority/0 component/unit-test
### Actual Behavior When I run `yarn test:e2e`, it runs the tests, but when I specify one or more files it starts running, but then shuts down saying 'Error: No tests found` ### Steps to Reproduce ``` yarn test:e2e .\e2e\main.e2e.spec.ts ``` (or any number of command-line files > 0 ### Rancher Desktop Version 1.9.1dev ### What operating system are you using? Windows ### Operating System / Build Version Windows 10
1.0
`yarn test:e2e e2e/FILE ...` is broken on Windows - ### Actual Behavior When I run `yarn test:e2e`, it runs the tests, but when I specify one or more files it starts running, but then shuts down saying 'Error: No tests found` ### Steps to Reproduce ``` yarn test:e2e .\e2e\main.e2e.spec.ts ``` (or any number of command-line files > 0 ### Rancher Desktop Version 1.9.1dev ### What operating system are you using? Windows ### Operating System / Build Version Windows 10
non_process
yarn test file is broken on windows actual behavior when i run yarn test it runs the tests but when i specify one or more files it starts running but then shuts down saying error no tests found steps to reproduce yarn test main spec ts or any number of command line files rancher desktop version what operating system are you using windows operating system build version windows
0
240,097
26,254,320,464
IssuesEvent
2023-01-05 22:32:39
MValle21/conduktor
https://api.github.com/repos/MValle21/conduktor
opened
CVE-2021-23424 (High) detected in ansi-html-0.0.7.tgz
security vulnerability
## CVE-2021-23424 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansi-html-0.0.7.tgz</b></p></summary> <p>An elegant lib that converts the chalked (ANSI) text to HTML.</p> <p>Library home page: <a href="https://registry.npmjs.org/ansi-html/-/ansi-html-0.0.7.tgz">https://registry.npmjs.org/ansi-html/-/ansi-html-0.0.7.tgz</a></p> <p>Path to dependency file: /client/package.json</p> <p>Path to vulnerable library: /client/node_modules/ansi-html/package.json</p> <p> Dependency Hierarchy: - webpack-dev-server-3.1.11.tgz (Root Library) - :x: **ansi-html-0.0.7.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/MValle21/conduktor/commit/2afe18be24d4f77034e10f011e2885a5f1f53cb9">2afe18be24d4f77034e10f011e2885a5f1f53cb9</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> This affects all versions of package ansi-html. If an attacker provides a malicious string, it will get stuck processing the input for an extremely long time. <p>Publish Date: 2021-08-18 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23424>CVE-2021-23424</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-23424">https://nvd.nist.gov/vuln/detail/CVE-2021-23424</a></p> <p>Release Date: 2021-08-18</p> <p>Fix Resolution (ansi-html): 0.0.8</p> <p>Direct dependency fix Resolution (webpack-dev-server): 3.11.3</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue
True
CVE-2021-23424 (High) detected in ansi-html-0.0.7.tgz - ## CVE-2021-23424 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansi-html-0.0.7.tgz</b></p></summary> <p>An elegant lib that converts the chalked (ANSI) text to HTML.</p> <p>Library home page: <a href="https://registry.npmjs.org/ansi-html/-/ansi-html-0.0.7.tgz">https://registry.npmjs.org/ansi-html/-/ansi-html-0.0.7.tgz</a></p> <p>Path to dependency file: /client/package.json</p> <p>Path to vulnerable library: /client/node_modules/ansi-html/package.json</p> <p> Dependency Hierarchy: - webpack-dev-server-3.1.11.tgz (Root Library) - :x: **ansi-html-0.0.7.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/MValle21/conduktor/commit/2afe18be24d4f77034e10f011e2885a5f1f53cb9">2afe18be24d4f77034e10f011e2885a5f1f53cb9</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> This affects all versions of package ansi-html. If an attacker provides a malicious string, it will get stuck processing the input for an extremely long time. 
<p>Publish Date: 2021-08-18 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23424>CVE-2021-23424</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-23424">https://nvd.nist.gov/vuln/detail/CVE-2021-23424</a></p> <p>Release Date: 2021-08-18</p> <p>Fix Resolution (ansi-html): 0.0.8</p> <p>Direct dependency fix Resolution (webpack-dev-server): 3.11.3</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue
non_process
cve high detected in ansi html tgz cve high severity vulnerability vulnerable library ansi html tgz an elegant lib that converts the chalked ansi text to html library home page a href path to dependency file client package json path to vulnerable library client node modules ansi html package json dependency hierarchy webpack dev server tgz root library x ansi html tgz vulnerable library found in head commit a href found in base branch master vulnerability details this affects all versions of package ansi html if an attacker provides a malicious string it will get stuck processing the input for an extremely long time publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ansi html direct dependency fix resolution webpack dev server rescue worker helmet automatic remediation is available for this issue
0
10,174
13,044,162,759
IssuesEvent
2020-07-29 03:47:35
tikv/tikv
https://api.github.com/repos/tikv/tikv
closed
UCP: Migrate scalar function `RowSig` from TiDB
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
## Description Port the scalar function `RowSig` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @andylokandy ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
2.0
UCP: Migrate scalar function `RowSig` from TiDB - ## Description Port the scalar function `RowSig` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @andylokandy ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
process
ucp migrate scalar function rowsig from tidb description port the scalar function rowsig from tidb to coprocessor score mentor s andylokandy recommended skills rust programming learning materials already implemented expressions ported from tidb
1
129,394
27,456,780,939
IssuesEvent
2023-03-02 22:06:55
boredzo/impluse-hfs
https://api.github.com/repos/boredzo/impluse-hfs
opened
Warnings cleanup
code-hygiene
There are, as of [7f49687], seven warnings. These should be fixed up by whatever means so that the project builds with zero warnings.
1.0
Warnings cleanup - There are, as of [7f49687], seven warnings. These should be fixed up by whatever means so that the project builds with zero warnings.
non_process
warnings cleanup there are as of seven warnings these should be fixed up by whatever means so that the project builds with zero warnings
0
7,930
11,104,927,380
IssuesEvent
2019-12-17 08:47:37
zammad/zammad
https://api.github.com/repos/zammad/zammad
closed
Error while fetching email from inbox will block processing of other mails in inbox
bug mail processing prioritized by payment verified
* Used Zammad version: 3.1.x * Installation method (source, package, ..): any * Operating system: any * Database + version: any * Elasticsearch version: any * Browser + version: any *Ticket #: 1053593 ### Expected behavior: * Fetch mails via IMAP/POP3 channel backend * Error occurs while fetching email * Log expressive error to log / channel / maintenance endpoit * Process with other emails ### Actual behavior: * Fetch mails via IMAP/POP3 channel backend * Error occurs while fetching email * Log exact error to log * No further fetching/processing of other mails in inbox ### Steps to reproduce the behavior: * Have an unfetchable mail in your inbox * See a log message like: ``` I, [2019-09-16T19:07:44.077396 #22185-70321870965020] INFO -- : fetching imap (mail.example.com/info@example.com port=993,ssl=true,starttls=false,folder=INBOX,keep_on_server=true) I, [2019-09-16T19:07:44.351265 #22185-70321870965020] INFO -- : - message 1/80 E, [2019-09-16T19:07:44.446517 #22185-70321870965020] ERROR -- : Can't use Channel::Driver::Imap: #<Net::IMAP::ResponseParseError: unknown token - "\"Jetzt"> E, [2019-09-16T19:07:44.446559 #22185-70321870965020] ERROR -- : unknown token - "\"Jetzt" (Net::IMAP::ResponseParseError) /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:3492:in `parse_error' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:3437:in `next_token' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:3354:in `lookahead' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:3252:in `nstring' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2399:in `envelope' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2386:in `envelope_data' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2359:in `msg_att' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2339:in `numeric_response' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2281:in `response_untagged' 
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2252:in `response' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2178:in `parse' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:1242:in `get_response' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:1145:in `receive_responses' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:1120:in `block in initialize' /usr/local/rvm/gems/ruby-2.5.5/gems/logging-2.2.2/lib/logging/diagnostic_context.rb:474:in `block in create_with_logging_context' ``` Or: ``` I, [2019-09-24T10:38:48.837126 #29702-19506700] INFO -- : - message 1/2 E, [2019-09-24T10:38:48.889973 #29702-19506700] ERROR -- : Can't use Channel::Driver::Imap: #<Net::IMAP::ResponseParseError: unknown token - "\"RE:"> E, [2019-09-24T10:38:48.890016 #29702-19506700] ERROR -- : unknown token - "\"RE:" (Net::IMAP::ResponseParseError) /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:3492:in `parse_error' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:3437:in `next_token' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:3354:in `lookahead' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:3252:in `nstring' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2399:in `envelope' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2386:in `envelope_data' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2359:in `msg_att' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2339:in `numeric_response' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2281:in `response_untagged' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2252:in `response' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2178:in `parse' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:1242:in `get_response' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:1145:in `receive_responses' 
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:1120:in `block in initialize' /usr/local/rvm/gems/ruby-2.5.5/gems/logging-2.2.2/lib/logging/diagnostic_context.rb:474:in `block in create_with_logging_context' I, [2019-09-24T10:38:49.126744 #29702-19506700] INFO -- : fetching imap (example.com/info@example.com port=993,ssl=true,starttls=false,folder=INBOX,keep_on_server=true) I, [2019-09-24T10:38:49.397862 #29702-19506700] INFO -- : - no message ``` * See mails piling up in your mailbox There is an example mail in T#1053593. Yes I'm sure this is a bug and no feature request or a general question.
1.0
Error while fetching email from inbox will block processing of other mails in inbox - * Used Zammad version: 3.1.x * Installation method (source, package, ..): any * Operating system: any * Database + version: any * Elasticsearch version: any * Browser + version: any *Ticket #: 1053593 ### Expected behavior: * Fetch mails via IMAP/POP3 channel backend * Error occurs while fetching email * Log expressive error to log / channel / maintenance endpoit * Process with other emails ### Actual behavior: * Fetch mails via IMAP/POP3 channel backend * Error occurs while fetching email * Log exact error to log * No further fetching/processing of other mails in inbox ### Steps to reproduce the behavior: * Have an unfetchable mail in your inbox * See a log message like: ``` I, [2019-09-16T19:07:44.077396 #22185-70321870965020] INFO -- : fetching imap (mail.example.com/info@example.com port=993,ssl=true,starttls=false,folder=INBOX,keep_on_server=true) I, [2019-09-16T19:07:44.351265 #22185-70321870965020] INFO -- : - message 1/80 E, [2019-09-16T19:07:44.446517 #22185-70321870965020] ERROR -- : Can't use Channel::Driver::Imap: #<Net::IMAP::ResponseParseError: unknown token - "\"Jetzt"> E, [2019-09-16T19:07:44.446559 #22185-70321870965020] ERROR -- : unknown token - "\"Jetzt" (Net::IMAP::ResponseParseError) /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:3492:in `parse_error' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:3437:in `next_token' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:3354:in `lookahead' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:3252:in `nstring' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2399:in `envelope' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2386:in `envelope_data' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2359:in `msg_att' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2339:in `numeric_response' 
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2281:in `response_untagged' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2252:in `response' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2178:in `parse' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:1242:in `get_response' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:1145:in `receive_responses' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:1120:in `block in initialize' /usr/local/rvm/gems/ruby-2.5.5/gems/logging-2.2.2/lib/logging/diagnostic_context.rb:474:in `block in create_with_logging_context' ``` Or: ``` I, [2019-09-24T10:38:48.837126 #29702-19506700] INFO -- : - message 1/2 E, [2019-09-24T10:38:48.889973 #29702-19506700] ERROR -- : Can't use Channel::Driver::Imap: #<Net::IMAP::ResponseParseError: unknown token - "\"RE:"> E, [2019-09-24T10:38:48.890016 #29702-19506700] ERROR -- : unknown token - "\"RE:" (Net::IMAP::ResponseParseError) /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:3492:in `parse_error' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:3437:in `next_token' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:3354:in `lookahead' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:3252:in `nstring' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2399:in `envelope' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2386:in `envelope_data' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2359:in `msg_att' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2339:in `numeric_response' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2281:in `response_untagged' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2252:in `response' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:2178:in `parse' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:1242:in `get_response' 
/usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:1145:in `receive_responses' /usr/local/rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/net/imap.rb:1120:in `block in initialize' /usr/local/rvm/gems/ruby-2.5.5/gems/logging-2.2.2/lib/logging/diagnostic_context.rb:474:in `block in create_with_logging_context' I, [2019-09-24T10:38:49.126744 #29702-19506700] INFO -- : fetching imap (example.com/info@example.com port=993,ssl=true,starttls=false,folder=INBOX,keep_on_server=true) I, [2019-09-24T10:38:49.397862 #29702-19506700] INFO -- : - no message ``` * See mails piling up in your mailbox There is an example mail in T#1053593. Yes I'm sure this is a bug and no feature request or a general question.
process
error while fetching email from inbox will block processing of other mails in inbox used zammad version x installation method source package any operating system any database version any elasticsearch version any browser version any ticket expected behavior fetch mails via imap channel backend error occurs while fetching email log expressive error to log channel maintenance endpoit process with other emails actual behavior fetch mails via imap channel backend error occurs while fetching email log exact error to log no further fetching processing of other mails in inbox steps to reproduce the behavior have an unfetchable mail in your inbox see a log message like i info fetching imap mail example com info example com port ssl true starttls false folder inbox keep on server true i info message e error can t use channel driver imap e error unknown token jetzt net imap responseparseerror usr local rvm rubies ruby lib ruby net imap rb in parse error usr local rvm rubies ruby lib ruby net imap rb in next token usr local rvm rubies ruby lib ruby net imap rb in lookahead usr local rvm rubies ruby lib ruby net imap rb in nstring usr local rvm rubies ruby lib ruby net imap rb in envelope usr local rvm rubies ruby lib ruby net imap rb in envelope data usr local rvm rubies ruby lib ruby net imap rb in msg att usr local rvm rubies ruby lib ruby net imap rb in numeric response usr local rvm rubies ruby lib ruby net imap rb in response untagged usr local rvm rubies ruby lib ruby net imap rb in response usr local rvm rubies ruby lib ruby net imap rb in parse usr local rvm rubies ruby lib ruby net imap rb in get response usr local rvm rubies ruby lib ruby net imap rb in receive responses usr local rvm rubies ruby lib ruby net imap rb in block in initialize usr local rvm gems ruby gems logging lib logging diagnostic context rb in block in create with logging context or i info message e error can t use channel driver imap e error unknown token re net imap responseparseerror usr local 
rvm rubies ruby lib ruby net imap rb in parse error usr local rvm rubies ruby lib ruby net imap rb in next token usr local rvm rubies ruby lib ruby net imap rb in lookahead usr local rvm rubies ruby lib ruby net imap rb in nstring usr local rvm rubies ruby lib ruby net imap rb in envelope usr local rvm rubies ruby lib ruby net imap rb in envelope data usr local rvm rubies ruby lib ruby net imap rb in msg att usr local rvm rubies ruby lib ruby net imap rb in numeric response usr local rvm rubies ruby lib ruby net imap rb in response untagged usr local rvm rubies ruby lib ruby net imap rb in response usr local rvm rubies ruby lib ruby net imap rb in parse usr local rvm rubies ruby lib ruby net imap rb in get response usr local rvm rubies ruby lib ruby net imap rb in receive responses usr local rvm rubies ruby lib ruby net imap rb in block in initialize usr local rvm gems ruby gems logging lib logging diagnostic context rb in block in create with logging context i info fetching imap example com info example com port ssl true starttls false folder inbox keep on server true i info no message see mails piling up in your mailbox there is an example mail in t yes i m sure this is a bug and no feature request or a general question
1
8,412
11,578,331,934
IssuesEvent
2020-02-21 15:45:46
material-components/material-components-ios
https://api.github.com/repos/material-components/material-components-ios
closed
[TextControls] Internal issue: b/148450171
[TextControls] type:Process
This was filed as an internal issue. If you are a Googler, please visit [b/148450171](http://b/148450171) for more details. <!-- Auto-generated content below, do not modify --> --- #### Internal data - Associated internal bug: [b/148450171](http://b/148450171)
1.0
[TextControls] Internal issue: b/148450171 - This was filed as an internal issue. If you are a Googler, please visit [b/148450171](http://b/148450171) for more details. <!-- Auto-generated content below, do not modify --> --- #### Internal data - Associated internal bug: [b/148450171](http://b/148450171)
process
internal issue b this was filed as an internal issue if you are a googler please visit for more details internal data associated internal bug
1
2,312
5,132,979,713
IssuesEvent
2017-01-11 01:14:26
mapbox/mapbox-gl-js
https://api.github.com/repos/mapbox/mapbox-gl-js
opened
Remove plugin build artifacts from this repository
release process
Including the plugin build artifacts in this repository isn't consistent with our general policies around checking in build artifacts or very convenient for plugin authors. We should experiment with other workflows such as - requiring plugins be published to https://unpkg.com/ - requiring plugins handle their own publishing to s3 - creating a script that downloads, builds, and publishes plugins
1.0
Remove plugin build artifacts from this repository - Including the plugin build artifacts in this repository isn't consistent with our general policies around checking in build artifacts or very convenient for plugin authors. We should experiment with other workflows such as - requiring plugins be published to https://unpkg.com/ - requiring plugins handle their own publishing to s3 - creating a script that downloads, builds, and publishes plugins
process
remove plugin build artifacts from this repository including the plugin build artifacts in this repository isn t consistent with our general policies around checking in build artifacts or very convenient for plugin authors we should experiment with other workflows such as requiring plugins be published to requiring plugins handle their own publishing to creating a script that downloads builds and publishes plugins
1
180,490
14,768,292,822
IssuesEvent
2021-01-10 11:21:37
jgroth/kompendium
https://api.github.com/repos/jgroth/kompendium
closed
Add guide about how to use remark-admonitions syntax
documentation
for users of Kompendium, it's good if they know we're using https://github.com/elviswolcott/remark-admonitions and how they can improve the documentations using their syntax
1.0
Add guide about how to use remark-admonitions syntax - for users of Kompendium, it's good if they know we're using https://github.com/elviswolcott/remark-admonitions and how they can improve the documentations using their syntax
non_process
add guide about how to use remark admonitions syntax for users of kompendium it s good if they know we re using and how they can improve the documentations using their syntax
0
167,502
26,510,258,289
IssuesEvent
2023-01-18 16:34:42
microsoft/PowerToys
https://api.github.com/repos/microsoft/PowerToys
closed
Windows Task Bar
Product-Tweak UI Design Needs-Triage
### Description of the new feature / enhancement Can the team please add feature that centralizes the apps on the taskbar, but not the start menu. I'm not a fan of how Microsoft centralized the start menu with apps because it is much easier and faster to directly move your mouse to the bottom left. Plus the start menu plays an integral part of the OS that it should have it own "zone" of the taskbar separate from the apps. Also moving the weather icon to left side of the taskbar and sitting to the right of the start menu would balance out the taskbar better so it is not heavily icon loaded to the right side. ### Scenario when this would be used? It's not necessarily a useful feature that improves productivity, as it is more of a cosmetic one. But it would add a cleaner, modern, and balanced look to the taskbar. ### Supporting information _No response_
1.0
Windows Task Bar - ### Description of the new feature / enhancement Can the team please add feature that centralizes the apps on the taskbar, but not the start menu. I'm not a fan of how Microsoft centralized the start menu with apps because it is much easier and faster to directly move your mouse to the bottom left. Plus the start menu plays an integral part of the OS that it should have it own "zone" of the taskbar separate from the apps. Also moving the weather icon to left side of the taskbar and sitting to the right of the start menu would balance out the taskbar better so it is not heavily icon loaded to the right side. ### Scenario when this would be used? It's not necessarily a useful feature that improves productivity, as it is more of a cosmetic one. But it would add a cleaner, modern, and balanced look to the taskbar. ### Supporting information _No response_
non_process
windows task bar description of the new feature enhancement can the team please add feature that centralizes the apps on the taskbar but not the start menu i m not a fan of how microsoft centralized the start menu with apps because it is much easier and faster to directly move your mouse to the bottom left plus the start menu plays an integral part of the os that it should have it own zone of the taskbar separate from the apps also moving the weather icon to left side of the taskbar and sitting to the right of the start menu would balance out the taskbar better so it is not heavily icon loaded to the right side scenario when this would be used it s not necessarily a useful feature that improves productivity as it is more of a cosmetic one but it would add a cleaner modern and balanced look to the taskbar supporting information no response
0
1,985
4,816,785,087
IssuesEvent
2016-11-04 11:19:18
woesterduolf/Mission-reisbureau
https://api.github.com/repos/woesterduolf/Mission-reisbureau
opened
Hotel kiezen pagina
Boekingsprocess priority: highest Type:Feature
**See mockup file** On the left there are all the options a customer could want to choose from regarding hotel preferences. In this example I have added the amount of stars, the rating and the distance to the nearest landmark. There are, however, a lot more, and they are accessible by scrolling down using the vertical scroll bar. The user can select all the options he wants; they’re not mutually exclusive. All the hotels that fit the preferences that the customer has selected show up on the right side of the screen. Because you can’t see all the hotels at once, there is a vertical scroll bar on the right of the screen. On the background of the main screen there now is a faded image that characterizes the city. Opacity is set to 30% so the image doesn’t disturb too much. Now we see the hotels themselves. Concise information about every hotel is visible in a rectangular bar. On the left side of the bar is a photo of the hotel. Above that the consumer can see the hotel’s name. To the right of that there are 4 lines of text that give the general information about the hotel, including the amount of stars it has and the rating of the hotel. Next to that is a map indicating the position of the hotel. This map is clickable and will lead the consumer to Google Maps, where he can look at more precise information about the area. And last, there is a button that, if clicked, will take the user to the hotel and room selection page.
1.0
Hotel kiezen pagina - **See mockup file** On the left there are all the options a customer could want to choose from regarding hotel preferences. In this example I have added the amount of stars, the rating and the distance to the nearest landmark. There are, however, a lot more, and they are accessible by scrolling down using the vertical scroll bar. The user can select all the options he wants; they’re not mutually exclusive. All the hotels that fit the preferences that the customer has selected show up on the right side of the screen. Because you can’t see all the hotels at once, there is a vertical scroll bar on the right of the screen. On the background of the main screen there now is a faded image that characterizes the city. Opacity is set to 30% so the image doesn’t disturb too much. Now we see the hotels themselves. Concise information about every hotel is visible in a rectangular bar. On the left side of the bar is a photo of the hotel. Above that the consumer can see the hotel’s name. To the right of that there are 4 lines of text that give the general information about the hotel, including the amount of stars it has and the rating of the hotel. Next to that is a map indicating the position of the hotel. This map is clickable and will lead the consumer to Google Maps, where he can look at more precise information about the area. And last, there is a button that, if clicked, will take the user to the hotel and room selection page.
process
hotel kiezen pagina see mockup file on the left there are all the options a customer could want to choose from regarding hotel preferences in this example i have added the amount of stars the rating and the distance to the nearest land mark there are however a lot more and they are accessible by scrolling down by using the vertical scroll bar the user can select all the options he wants they’re not mutually exclusive all the hotels that fit the preferences that the customer has selected show up on the right side of the screen because you can’t see all the hotels at once there is a vertical scroll bar on the right of the screen on the background of the main screen there now is a faded image that characterizes the city opacity is set to so the image doesn’t disturb too much now we see the hotels themselves concise information about every hotel is visible in a rectangular shaped bar on the left side of the bar is a photo of the hotel above that the consumer can see the hotel’s name to the right of that there are lines of text that give the general information about the hotel including the amount of stars it has and the rating of the hotel next to that is a map indicating the position of the hotel this map is clickable and will lead the consumer to google maps where he can look at more precise information about the area and last there is a button that if clicked will take the user to the hotel and room selection page
1
9,264
12,294,715,172
IssuesEvent
2020-05-11 01:15:59
allinurl/goaccess
https://api.github.com/repos/allinurl/goaccess
closed
malloc(): smallbin double linked list corrupted
bug duplicate log-processing on-disk
I'm getting this strange error. ``` *** Error in `goaccess': malloc(): smallbin double linked list corrupted: 0x0000000034cea660 *** ======= Backtrace: ========= /lib/x86_64-linux-gnu/libc.so.6(+0x777e5)[0x7fac750997e5] /lib/x86_64-linux-gnu/libc.so.6(+0x82651)[0x7fac750a4651] /lib/x86_64-linux-gnu/libc.so.6(__libc_malloc+0x54)[0x7fac750a6184] /usr/lib/x86_64-linux-gnu/libtokyocabinet.so.9(tcbdbget+0x124)[0x7fac74da0d74] goaccess[0x4375db] goaccess(ht_insert_genstats+0x3e)[0x4385c0] goaccess[0x4227b5] goaccess[0x424149] goaccess[0x4242d6] goaccess[0x4245c6] goaccess[0x424859] goaccess(parse_log+0x1bb)[0x424a4a] goaccess(main+0xc5)[0x415893] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7fac75042830] goaccess(_start+0x29)[0x408639] ======= Memory map: ======== 00400000-004a0000 r-xp 00000000 08:02 20714599 /usr/local/bin/goaccess 0069f000-006a0000 r--p 0009f000 08:02 20714599 /usr/local/bin/goaccess 006a0000-006aa000 rw-p 000a0000 08:02 20714599 /usr/local/bin/goaccess 01d55000-39b03000 rw-p 00000000 00:00 0 [heap] 7fac60000000-7fac60021000 rw-p 00000000 00:00 0 7fac60021000-7fac64000000 ---p 00000000 00:00 0 7fac68000000-7fac68021000 rw-p 00000000 00:00 0 7fac68021000-7fac6c000000 ---p 00000000 00:00 0 7fac6caa4000-7fac6caba000 r-xp 00000000 08:02 10097085 /lib/x86_64-linux-gnu/libgcc_s.so.1 7fac6caba000-7fac6ccb9000 ---p 00016000 08:02 10097085 /lib/x86_64-linux-gnu/libgcc_s.so.1 7fac6ccb9000-7fac6ccba000 rw-p 00015000 08:02 10097085 /lib/x86_64-linux-gnu/libgcc_s.so.1 7fac6ccd8000-7fac6cd19000 rw-s 00000000 08:02 14162485 /home/sithu/goaccesscf/14m7AhArjrdb_metadata.tcb 7fac6cd19000-7fac6cd5a000 rw-s 00000000 08:02 14162484 /home/sithu/goaccesscf/14mhcmGJ7odb_agents.tcb 7fac6cd5a000-7fac6cd9b000 rw-s 00000000 08:02 14162483 /home/sithu/goaccesscf/14mb9LuPhDdb_protocols.tcb 7fac6cd9b000-7fac6cddc000 rw-s 00000000 08:02 14162482 /home/sithu/goaccesscf/14mrDr4Xyxdb_methods.tcb 7fac6cddc000-7fac6ce1d000 rw-s 00000000 08:02 14162481 
/home/sithu/goaccesscf/14mtT4RpE5db_maxts.tcb 7fac6ce1d000-7fac6ce5e000 rw-s 00000000 08:02 14162480 /home/sithu/goaccesscf/14mqxMIqvVdb_cumts.tcb 7fac6ce5e000-7fac6ce9f000 rw-s 00000000 08:02 14162479 /home/sithu/goaccesscf/14mlGy8sWLdb_bw.tcb 7fac6ce9f000-7fac6cee0000 rw-s 00000000 08:02 14162478 /home/sithu/goaccesscf/14mCk7vdCedb_visitors.tcb 7fac6cee0000-7fac6cf21000 rw-s 00000000 08:02 14162477 /home/sithu/goaccesscf/14mqyAd41odb_hits.tcb 7fac6cf21000-7fac6cf62000 rw-s 00000000 08:02 14162476 /home/sithu/goaccesscf/14mhVLf3n8db_root.tcb 7fac6cf62000-7fac6cfa3000 rw-s 00000000 08:02 14162475 /home/sithu/goaccesscf/14mNOoFYC1db_uniqmap.tcb 7fac6cfa3000-7fac6cfe4000 rw-s 00000000 08:02 14162474 /home/sithu/goaccesscf/14mUTvJ85mdb_datamap.tcb 7fac6cfe4000-7fac6d025000 rw-s 00000000 08:02 14162473 /home/sithu/goaccesscf/14mPyraylLdb_rootmap.tcb 7fac6d025000-7fac6d066000 rw-s 00000000 08:02 14162472 /home/sithu/goaccesscf/14mYaEVnatdb_keymap.tcb 7fac6d066000-7fac6d0a7000 rw-s 00000000 08:02 14162471 /home/sithu/goaccesscf/12mSbeojiKdb_metadata.tcb 7fac6d0a7000-7fac6d0e8000 rw-s 00000000 08:02 14162470 /home/sithu/goaccesscf/12mQhADFJ3db_agents.tcb 7fac6d0e8000-7fac6d129000 rw-s 00000000 08:02 14162469 /home/sithu/goaccesscf/12mnZUBmzLdb_protocols.tcb 7fac6d129000-7fac6d16a000 rw-s 00000000 08:02 14162468 /home/sithu/goaccesscf/12mZdwytqZdb_methods.tcb 7fac6d16a000-7fac6d1ab000 rw-s 00000000 08:02 14162467 /home/sithu/goaccesscf/12mya4oETWdb_maxts.tcb 7fac6d1ab000-7fac6d1ec000 rw-s 00000000 08:02 14162466 /home/sithu/goaccesscf/12mPqasdsbdb_cumts.tcb 7fac6d1ec000-7fac6d22d000 rw-s 00000000 08:02 14162465 /home/sithu/goaccesscf/12mfnHauHPdb_bw.tcb 7fac6d22d000-7fac6d26e000 rw-s 00000000 08:02 14162464 /home/sithu/goaccesscf/12mzgWtUBVdb_visitors.tcb 7fac6d26e000-7fac6d2af000 rw-s 00000000 08:02 14162463 /home/sithu/goaccesscf/12mSwPAkmzdb_hits.tcb 7fac6d2af000-7fac6d2f0000 rw-s 00000000 08:02 14162462 /home/sithu/goaccesscf/12mFNkLlHTdb_root.tcb 
7fac6d2f0000-7fac6d331000 rw-s 00000000 08:02 14162461 /home/sithu/goaccesscf/12mX72GlIldb_uniqmap.tcb 7fac6d331000-7fac6d372000 rw-s 00000000 08:02 14162460 /home/sithu/goaccesscf/12mTGFslNMdb_datamap.tcb 7fac6d372000-7fac6d3b3000 rw-s 00000000 08:02 14162459 /home/sithu/goaccesscf/12mKOy1hWmdb_rootmap.tcb 7fac6d3b3000-7fac6d3f4000 rw-s 00000000 08:02 14162458 /home/sithu/goaccesscf/12m8AnaxRWdb_keymap.tcb 7fac6d3f4000-7fac6d435000 rw-s 00000000 08:02 14162457 /home/sithu/goaccesscf/10mJeIHk40db_metadata.tcb 7fac6d435000-7fac6d476000 rw-s 00000000 08:02 14162456 /home/sithu/goaccesscf/10myH8kBlidb_agents.tcb 7fac6d476000-7fac6d4b7000 rw-s 00000000 08:02 14162455 /home/sithu/goaccesscf/10mitnnEKldb_protocols.tcb 7fac6d4b7000-7fac6d4f8000 rw-s 00000000 08:02 14162454 /home/sithu/goaccesscf/10m2ZtMdZEdb_methods.tcb 7fac6d4f8000-7fac6d539000 rw-s 00000000 08:02 14162453 /home/sithu/goaccesscf/10mX8olOvpdb_maxts.tcb 7fac6d539000-7fac6d57a000 rw-s 00000000 08:02 14162452 /home/sithu/goaccesscf/10mLWdYaUrdb_cumts.tcb 7fac6d57a000-7fac6d5bb000 rw-s 00000000 08:02 14162451 /home/sithu/goaccesscf/10m7doXWXPdb_bw.tcb 7fac6d5bb000-7fac6d5fc000 rw-s 00000000 08:02 14162450 /home/sithu/goaccesscf/10mKgxR5uLdb_visitors.tcb 7fac6d5fc000-7fac6d63d000 rw-s 00000000 08:02 14162449 /home/sithu/goaccesscf/10mqI73gt6db_hits.tcb 7fac6d63d000-7fac6d67e000 rw-s 00000000 08:02 14162448 /home/sithu/goaccesscf/10mdeHtPfZdb_root.tcb 7fac6d67e000-7fac6d6bf000 rw-s 00000000 08:02 14162447 /home/sithu/goaccesscf/10mSIBRRYodb_uniqmap.tcb 7fac6d6bf000-7fac6d700000 rw-s 00000000 08:02 14162446 /home/sithu/goaccesscf/10m7OYW2JDdb_datamap.tcb 7fac6d700000-7fac6d741000 rw-s 00000000 08:02 14162445 /home/sithu/goaccesscf/10mFAm3wNtdb_rootmap.tcb 7fac6d741000-7fac6d782000 rw-s 00000000 08:02 14162444 /home/sithu/goaccesscf/10miBywAsadb_keymap.tcb 7fac6d782000-7fac6d7c3000 rw-s 00000000 08:02 14162443 /home/sithu/goaccesscf/7m19pzlhidb_metadata.tcb 7fac6d7c3000-7fac6d804000 rw-s 00000000 08:02 14162442 
/home/sithu/goaccesscf/7mPfNJQZ0db_agents.tcb 7fac6d804000-7fac6d845000 rw-s 00000000 08:02 14162441 /home/sithu/goaccesscf/7mnYqsC1vdb_protocols.tcb 7fac6d845000-7fac6d886000 rw-s 00000000 08:02 14162440 /home/sithu/goaccesscf/7me1WG57edb_methods.tcb 7fac6d886000-7fac6d8c7000 rw-s 00000000 08:02 14162439 /home/sithu/goaccesscf/7mycULWmidb_maxts.tcb 7fac6d8c7000-7fac6d908000 rw-s 00000000 08:02 14162438 /home/sithu/goaccesscf/7mUDchacpdb_cumts.tcb 7fac6d908000-7fac6d949000 rw-s 00000000 08:02 14162437 /home/sithu/goaccesscf/7m4EB5eejdb_bw.tcb 7fac6d949000-7fac6d98a000 rw-s 00000000 08:02 14162436 /home/sithu/goaccesscf/7ms6dAiRedb_visitors.tcb 7fac6d98a000-7fac6d9cb000 rw-s 00000000 08:02 14162435 /home/sithu/goaccesscf/7mdMswhHGdb_hits.tcb 7fac6d9cb000-7fac6da0c000 rw-s 00000000 08:02 14162434 /home/sithu/goaccesscf/7mpx0ap2udb_root.tcb 7fac6da0c000-7fac6da4d000 rw-s 00000000 08:02 14162433 /home/sithu/goaccesscf/7m1ADeFrUdb_uniqmap.tcb 7fac6da4d000-7fac6da8e000 rw-s 00000000 08:02 14162432 /home/sithu/goaccesscf/7m8cFFOPpdb_datamap.tcb 7fac6da8e000-7fac6dacf000 rw-s 00000000 08:02 14162431 /home/sithu/goaccesscf/7mlxf9apzdb_rootmap.tcb 7fac6dacf000-7fac6db10000 rw-s 00000000 08:02 14162430 /home/sithu/goaccesscf/7mLS4mQJZdb_keymap.tcb 7fac6db10000-7fac6db51000 rw-s 00000000 08:02 14162429 /home/sithu/goaccesscf/6mf7OIJ85db_metadata.tcb 7fac6db51000-7fac6db92000 rw-s 00000000 08:02 14162428 /home/sithu/goaccesscf/6mxB9JdNedb_agents.tcb 7fac6db92000-7fac6dbd3000 rw-s 00000000 08:02 14162427 /home/sithu/goaccesscf/6mQDcq0N8db_protocols.tcb 7fac6dbd3000-7fac6dc14000 rw-s 00000000 08:02 14162426 /home/sithu/goaccesscf/6mAYFFxQhdb_methods.tcb 7fac6dc14000-7fac6dc55000 rw-s 00000000 08:02 14162425 /home/sithu/goaccesscf/6mtEkp4J1db_maxts.tcb 7fac6dc55000-7fac6dc96000 rw-s 00000000 08:02 14162424 /home/sithu/goaccesscf/6mcBEw2jCdb_cumts.tcb 7fac6dc96000-7fac6dcd7000 rw-s 00000000 08:02 14162423 /home/sithu/goaccesscf/6mBnDIzQ3db_bw.tcb 7fac6dcd7000-7fac6dd18000 rw-s 
00000000 08:02 14162422 /home/sithu/goaccesscf/6m5zbEjPUdb_visitors.tcb 7fac6dd18000-7fac6dd59000 rw-s 00000000 08:02 14162421 /home/sithu/goaccesscf/6mVppCyeGdb_hits.tcb 7fac6dd59000-7fac6dd9a000 rw-s 00000000 08:02 14162420 /home/sithu/goaccesscf/6mkqH5tlKdb_root.tcb 7fac6dd9a000-7fac6dddb000 rw-s 00000000 08:02 14162419 /home/sithu/goaccesscf/6m8eclFNCdb_uniqmap.tcb 7fac6dddb000-7fac6de1c000 rw-s 00000000 08:02 14162418 /home/sithu/goaccesscf/6mzMGiiAJdb_datamap.tcb 7fac6de1c000-7fac6de5d000 rw-s 00000000 08:02 14162417 /home/sithu/goaccesscf/6mI9R6ykvdb_rootmap.tcb 7fac6de5d000-7fac6de9e000 rw-s 00000000 08:02 14162416 /home/sithu/goaccesscf/6mN5FFs4Gdb_keymap.tcb 7fac6de9e000-7fac6dedf000 rw-s 00000000 08:02 14162415 /home/sithu/goaccesscf/5mfrDhGD5db_metadata.tcb 7fac6dedf000-7fac6df20000 rw-s 00000000 08:02 14162414 /home/sithu/goaccesscf/5mLxWqQGtdb_agents.tcb 7fac6df20000-7fac6df61000 rw-s 00000000 08:02 14162413 /home/sithu/goaccesscf/5mnrvr0sbdb_protocols.tcb 7fac6df61000-7fac6dfa2000 rw-s 00000000 08:02 14162412 /home/sithu/goaccesscf/5mTpr1g4ddb_methods.tcb 7fac6dfa2000-7fac6dfe3000 rw-s 00000000 08:02 14162411 /home/sithu/goaccesscf/5m2f2Q9uAdb_maxts.tcb 7fac6dfe3000-7fac6e024000 rw-s 00000000 08:02 14162410 /home/sithu/goaccesscf/5mHlM5qN9db_cumts.tcb 7fac6e024000-7fac6e065000 rw-s 00000000 08:02 14162409 /home/sithu/goaccesscf/5m3zZMN5Ldb_bw.tcb 7fac6e065000-7fac6e0a6000 rw-s 00000000 08:02 14162408 /home/sithu/goaccesscf/5m8TFb7pkdb_visitors.tcb 7fac6e0a6000-7fac6e0e7000 rw-s 00000000 08:02 14162407 /home/sithu/goaccesscf/5mOUsMMXQdb_hits.tcb 7fac6e0e7000-7fac6e128000 rw-s 00000000 08:02 14162406 /home/sithu/goaccesscf/5mo736DrSdb_root.tcb 7fac6e128000-7fac6e169000 rw-s 00000000 08:02 14162405 /home/sithu/goaccesscf/5mIe81Uf1db_uniqmap.tcb 7fac6e169000-7fac6e1aa000 rw-s 00000000 08:02 14162404 /home/sithu/goaccesscf/5m5fL8WcFdb_datamap.tcb 7fac6e1aa000-7fac6e1eb000 rw-s 00000000 08:02 14162403 /home/sithu/goaccesscf/5mYSv6nYPdb_rootmap.tcb 
7fac6e1eb000-7fac6e22c000 rw-s 00000000 08:02 14162402 /home/sithu/goaccesscf/5mIyqOcsBdb_keymap.tcb 7fac6e22c000-7fac6e26d000 rw-s 00000000 08:02 14162401 /home/sithu/goaccesscf/4mkI92wS4db_metadata.tcb 7fac6e26d000-7fac6e2ae000 rw-s 00000000 08:02 14162400 /home/sithu/goaccesscf/4m5JsyM4tdb_agents.tcb 7fac6e2ae000-7fac6e2ef000 rw-s 00000000 08:02 14162399 /home/sithu/goaccesscf/4m7vvLKjWdb_protocols.tcb 7fac6e2ef000-7fac6e330000 rw-s 00000000 08:02 14162398 /home/sithu/goaccesscf/4m5E4POsTdb_methods.tcb 7fac6e330000-7fac6e371000 rw-s 00000000 08:02 14162397 /home/sithu/goaccesscf/4mHOL4eGndb_maxts.tcb 7fac6e371000-7fac6e3b2000 rw-s 00000000 08:02 14162396 /home/sithu/goaccesscf/4mv3BVyGHdb_cumts.tcb 7fac6e3b2000-7fac6e3f3000 rw-s 00000000 08:02 14162395 /home/sithu/goaccesscf/4mDfNdlswdb_bw.tcb 7fac6e3f3000-7fac6e434000 rw-s 00000000 08:02 14162394 /home/sithu/goaccesscf/4mK9n6l5Cdb_visitors.tcb 7fac6e434000-7fac6e475000 rw-s 00000000 08:02 14162393 /home/sithu/goaccesscf/4mnqXjT0Jdb_hits.tcb 7fac6e475000-7fac6e4b6000 rw-s 00000000 08:02 14162392 /home/sithu/goaccesscf/4mrv4Mba5db_root.tcb 7fac6e4b6000-7fac6e4f7000 rw-s 00000000 08:02 14162391 /home/sithu/goaccesscf/4mA6EiaD7db_uniqmap.tcb 7fac6e4f7000-7fac6e538000 rw-s 00000000 08:02 14162390 /home/sithu/goaccesscf/4mocHwkcddb_datamap.tcb 7fac6e538000-7fac6e579000 rw-s 00000000 08:02 14162389 /home/sithu/goaccesscf/4mYs3qRbGdb_rootmap.tcb 7fac6e579000-7fac6e5ba000 rw-s 00000000 08:02 14162388 /home/sithu/goaccesscf/4mnF5jeiTdb_keymap.tcb 7fac6e5ba000-7fac6e5fb000 rw-s 00000000 08:02 14162387 /home/sithu/goaccesscf/3mK40PhUXdb_metadata.tcb 7fac6e5fb000-7fac6e63c000 rw-s 00000000 08:02 14162386 /home/sithu/goaccesscf/3ma8wHgWrdb_agents.tcb 7fac6e63c000-7fac6e67d000 rw-s 00000000 08:02 14162385 /home/sithu/goaccesscf/3msq7gx33db_protocols.tcb 7fac6e67d000-7fac6e6be000 rw-s 00000000 08:02 14162384 /home/sithu/goaccesscf/3mYydAMccdb_methods.tcb 7fac6e6be000-7fac6e6ff000 rw-s 00000000 08:02 14162383 
/home/sithu/goaccesscf/3m6cU86Nadb_maxts.tcb 7fac6e6ff000-7fac6e740000 rw-s 00000000 08:02 14162382 /home/sithu/goaccesscf/3mzapJuazdb_cumts.tcb 7fac6e740000-7fac6e781000 rw-s 00000000 08:02 14162381 /home/sithu/goaccesscf/3mQ8VMF7tdb_bw.tcb 7fac6e781000-7fac6e7c2000 rw-s 00000000 08:02 14162380 /home/sithu/goaccesscf/3mCdZDIeVdb_visitors.tcb 7fac6e7c2000-7fac6e803000 rw-s 00000000 08:02 14162379 /home/sithu/goaccesscf/3m3TT2UN4db_hits.tcb 7fac6e803000-7fac6e844000 rw-s 00000000 08:02 14162378 /home/sithu/goaccesscf/3mcmLQC4mdb_root.tcb 7fac6e844000-7fac6e885000 rw-s 00000000 08:02 14162377 /home/sithu/goaccesscf/3mYwbJW3Ydb_uniqmap.tcb 7fac6e885000-7fac6e8c6000 rw-s 00000000 08:02 14162376 /home/sithu/goaccesscf/3m1ufk8U2db_datamap.tcb 7fac6e8c6000-7fac6e907000 rw-s 00000000 08:02 14162375 /home/sithu/goaccesscf/3mZ0W2KqWdb_rootmap.tcb 7fac6e907000-7fac6e948000 rw-s 00000000 08:02 14162374 /home/sithu/goaccesscf/3mGhjytRwdb_keymap.tcb 7fac6e948000-7fac6e989000 rw-s 00000000 08:02 14162373 /home/sithu/goaccesscf/2mLqTffjPdb_metadata.tcb 7fac6e989000-7fac6e9ca000 rw-s 00000000 08:02 14162372 /home/sithu/goaccesscf/2mlGEIPEbdb_agents.tcb 7fac6e9ca000-7fac6ea0b000 rw-s 00000000 08:02 14162362 /home/sithu/goaccesscf/2m5MvWg4jdb_protocols.tcb 7fac6ea0b000-7fac6ea4c000 rw-s 00000000 08:02 14162347 /home/sithu/goaccesscf/2mUeA0x9qdb_methods.tcb 7fac6ea4c000-7fac6ea8d000 rw-s 00000000 08:02 14162333 /home/sithu/goaccesscf/2mwRsAqYudb_maxts.tcb 7fac6ea8d000-7fac6eace000 rw-s 00000000 08:02 14162329 /home/sithu/goaccesscf/2mpb0tXNKdb_cumts.tcb 7fac6eace000-7fac6eb0f000 rw-s 00000000 08:02 14162326 /home/sithu/goaccesscf/2mTuBo4Bndb_bw.tcb 7fac6eb0f000-7fac6eb50000 rw-s 00000000 08:02 14161337 /home/sithu/goaccesscf/2m7lAsxD5db_visitors.tcb 7fac6eb50000-7fac6eb91000 rw-s 00000000 08:02 14161335 /home/sithu/goaccesscf/2m6ByUug6db_hits.tcb 7fac6eb91000-7fac6ebd2000 rw-s 00000000 08:02 14161334 /home/sithu/goaccesscf/2m6OPgy4Idb_root.tcb 7fac6ebd2000-7fac6ec13000 rw-s 00000000 
08:02 14161333 /home/sithu/goaccesscf/2mxA2ZkAEdb_uniqmap.tcb 7fac6ec13000-7fac6ec54000 rw-s 00000000 08:02 14161332 /home/sithu/goaccesscf/2mke2FmTvdb_datamap.tcb 7fac6ec54000-7fac6ec95000 rw-s 00000000 08:02 14161330 /home/sithu/goaccesscf/2mQVKcD5wdb_rootmap.tcb 7fac6ec95000-7fac6ecd6000 rw-s 00000000 08:02 14161329 /home/sithu/goaccesscf/2maMgtxxQdb_keymap.tcb 7fac6ecd6000-7fac6ed17000 rw-s 00000000 08:02 14161328 /home/sithu/goaccesscf/1msKzFOeddb_metadata.tcb 7fac6ed17000-7fac6ed58000 rw-s 00000000 08:02 14161327 /home/sithu/goaccesscf/1ml8RQdIzdb_agents.tcb 7fac6ed58000-7fac6ed99000 rw-s 00000000 08:02 14161326 /home/sithu/goaccesscf/1moInjJ8ydb_protocols.tcb 7fac6ed99000-7fac6edda000 rw-s 00000000 08:02 14161325 /home/sithu/goaccesscf/1mlNimiqUdb_methods.tcb 7fac6edda000-7fac6ee1b000 rw-s 00000000 08:02 14161323 /home/sithu/goaccesscf/1mf5GxoA5db_maxts.tcb 7fac6ee1b000-7fac6ee5c000 rw-s 00000000 08:02 14161322 /home/sithu/goaccesscf/1mx5TJh22db_cumts.tcb 7fac6ee5c000-7fac6ee9d000 rw-s 00000000 08:02 14161321 /home/sithu/goaccesscf/1mX0NfE1idb_bw.tcb 7fac6ee9d000-7fac6eede000 rw-s 00000000 08:02 14161320 /home/sithu/goaccesscf/1m3v7z8ktdb_visitors.tcb 7fac6eede000-7fac6ef1f000 rw-s 00000000 08:02 14161319 /home/sithu/goaccesscf/1mkiWyXefdb_hits.tcb 7fac6ef1f000-7fac6ef60000 rw-s 00000000 08:02 14161318 /home/sithu/goaccesscf/1mecalY3Edb_root.tcb 7fac6ef60000-7fac6efa1000 rw-s 00000000 08:02 14161317 /home/sithu/goaccesscf/1mkDe3T5Bdb_uniqmap.tcb 7fac6efa1000-7fac6efe2000 rw-s 00000000 08:02 14161316 /home/sithu/goaccesscf/1mwCfWRGjdb_datamap.tcb 7fac6efe2000-7fac6f023000 rw-s 00000000 08:02 14161315 /home/sithu/goaccesscf/1mgHiH5jUdb_rootmap.tcb 7fac6f023000-7fac6f064000 rw-s 00000000 08:02 14161314 /home/sithu/goaccesscf/1m7MTklhidb_keymap.tcb 7fac6f064000-7fac6f0a5000 rw-s 00000000 08:02 14161313 /home/sithu/goaccesscf/0mLfRxl8Bdb_metadata.tcb 7fac6f0a5000-7fac6f0e6000 rw-s 00000000 08:02 14161312 /home/sithu/goaccesscf/0mretpvZXdb_agents.tcb 
7fac6f0e6000-7fac6f127000 rw-s 00000000 08:02 14161311 /home/sithu/goaccesscf/0mrq0drjkdb_protocols.tcb 7fac6f127000-7fac6f168000 rw-s 00000000 08:02 14161310 /home/sithu/goaccesscf/0mfzqXWq2db_methods.tcb 7fac6f168000-7fac6f1a9000 rw-s 00000000 08:02 14161309 /home/sithu/goaccesscf/0mO7j4OGhdb_maxts.tcb 7fac6f1a9000-7fac6f1ea000 rw-s 00000000 08:02 14161308 /home/sithu/goaccesscf/0m0gwxiiWdb_cumts.tcb 7fac6f1ea000-7fac6f22b000 rw-s 00000000 08:02 14161307 /home/sithu/goaccesscf/0mO0j90Xadb_bw.tcb 7fac6f22b000-7fac6f26c000 rw-s 00000000 08:02 14161306 /home/sithu/goaccesscf/0mKp07u20db_visitors.tcb 7fac6f26c000-7fac6f2ad000 rw-s 00000000 08:02 14161305 /home/sithu/goaccesscf/0miHmfsV8db_hits.tcb 7fac6f2ad000-7fac6f2ee000 rw-s 00000000 08:02 14161304 /home/sithu/goaccesscf/0mz2NoxPpdb_root.tcb 7fac6f2ee000-7fac6f32f000 rw-s 00000000 08:02 14161303 /home/sithu/goaccesscf/0mk0E1Zlodb_uniqmap.tcb 7fac6f32f000-7fac6f370000 rw-s 00000000 08:02 14161302 /home/sithu/goaccesscf/0mo72Tk0ldb_datamap.tcb 7fac6f370000-7fac6f371000 ---p 00000000 00:00 0 7fac6f371000-7fac6fb71000 rw-p 00000000 00:00 0 7fac6fb71000-7fac7384a000 r--s 00000000 08:02 20857540 /usr/share/GeoIP/GeoLite2-City.mmdb 7fac7384a000-7fac73b69000 r--p 00000000 08:02 20332563 /usr/lib/locale/locale-archive 7fac73b69000-7fac73b6c000 r-xp 00000000 08:02 10099021 /lib/x86_64-linux-gnu/libdl-2.23.so 7fac73b6c000-7fac73d6b000 ---p 00003000 08:02 10099021 /lib/x86_64-linux-gnu/libdl-2.23.so 7fac73d6b000-7fac73d6c000 r--p 00002000 08:02 10099021 /lib/x86_64-linux-gnu/libdl-2.23.so 7fac73d6c000-7fac73d6d000 rw-p 00003000 08:02 10099021 /lib/x86_64-linux-gnu/libdl-2.23.so 7fac73d6d000-7fac73e75000 r-xp 00000000 08:02 10099009 /lib/x86_64-linux-gnu/libm-2.23.so 7fac73e75000-7fac74074000 ---p 00108000 08:02 10099009 /lib/x86_64-linux-gnu/libm-2.23.so 7fac74074000-7fac74075000 r--p 00107000 08:02 10099009 /lib/x86_64-linux-gnu/libm-2.23.so 7fac74075000-7fac74076000 rw-p 00108000 08:02 10099009 
/lib/x86_64-linux-gnu/libm-2.23.so 7fac74076000-7fac7408f000 r-xp 00000000 08:02 10097020 /lib/x86_64-linux-gnu/libz.so.1.2.8 7fac7408f000-7fac7428e000 ---p 00019000 08:02 10097020 /lib/x86_64-linux-gnu/libz.so.1.2.8 7fac7428e000-7fac7428f000 r--p 00018000 08:02 10097020 /lib/x86_64-linux-gnu/libz.so.1.2.8 7fac7428f000-7fac74290000 rw-p 00019000 08:02 10097020 /lib/x86_64-linux-gnu/libz.so.1.2.8 7fac74290000-7fac7429f000 r-xp 00000000 08:02 10097046 /lib/x86_64-linux-gnu/libbz2.so.1.0.4 7fac7429f000-7fac7449e000 ---p 0000f000 08:02 10097046 /lib/x86_64-linux-gnu/libbz2.so.1.0.4 7fac7449e000-7fac7449f000 r--p 0000e000 08:02 10097046 /lib/x86_64-linux-gnu/libbz2.so.1.0.4 7fac7449f000-7fac744a0000 rw-p 0000f000 08:02 10097046 /lib/x86_64-linux-gnu/libbz2.so.1.0.4 7fac744a0000-7fac744b8000 r-xp 00000000 08:02 10099015 /lib/x86_64-linux-gnu/libpthread-2.23.so 7fac744b8000-7fac746b7000 ---p 00018000 08:02 10099015 /lib/x86_64-linux-gnu/libpthread-2.23.so 7fac746b7000-7fac746b8000 r--p 00017000 08:02 10099015 /lib/x86_64-linux-gnu/libpthread-2.23.so 7fac746b8000-7fac746b9000 rw-p 00018000 08:02 10099015 /lib/x86_64-linux-gnu/libpthread-2.23.so 7fac746b9000-7fac746bd000 rw-p 00000000 00:00 0 7fac746bd000-7fac746c2000 r-xp 00000000 08:02 20332556 /usr/lib/x86_64-linux-gnu/libmaxminddb.so.0.0.7 7fac746c2000-7fac748c1000 ---p 00005000 08:02 20332556 /usr/lib/x86_64-linux-gnu/libmaxminddb.so.0.0.7 7fac748c1000-7fac748c2000 r--p 00004000 08:02 20332556 /usr/lib/x86_64-linux-gnu/libmaxminddb.so.0.0.7 7fac748c2000-7fac748c3000 rw-p 00005000 08:02 20332556 /usr/lib/x86_64-linux-gnu/libmaxminddb.so.0.0.7 7fac748c3000-7fac748e8000 r-xp 00000000 08:02 10097219 /lib/x86_64-linux-gnu/libtinfo.so.5.9 7fac748e8000-7fac74ae7000 ---p 00025000 08:02 10097219 /lib/x86_64-linux-gnu/libtinfo.so.5.9 7fac74ae7000-7fac74aeb000 r--p 00024000 08:02 10097219 /lib/x86_64-linux-gnu/libtinfo.so.5.9 7fac74aeb000-7fac74aec000 rw-p 00028000 08:02 10097219 /lib/x86_64-linux-gnu/libtinfo.so.5.9 
7fac74aec000-7fac74b19000 r-xp 00000000 08:02 10097129 /lib/x86_64-linux-gnu/libncursesw.so.5.9Aborted (core dumped) ```
1.0
malloc(): smallbin double linked list corrupted - I'm getting this strange error. ``` *** Error in `goaccess': malloc(): smallbin double linked list corrupted: 0x0000000034cea660 *** ======= Backtrace: ========= /lib/x86_64-linux-gnu/libc.so.6(+0x777e5)[0x7fac750997e5] /lib/x86_64-linux-gnu/libc.so.6(+0x82651)[0x7fac750a4651] /lib/x86_64-linux-gnu/libc.so.6(__libc_malloc+0x54)[0x7fac750a6184] /usr/lib/x86_64-linux-gnu/libtokyocabinet.so.9(tcbdbget+0x124)[0x7fac74da0d74] goaccess[0x4375db] goaccess(ht_insert_genstats+0x3e)[0x4385c0] goaccess[0x4227b5] goaccess[0x424149] goaccess[0x4242d6] goaccess[0x4245c6] goaccess[0x424859] goaccess(parse_log+0x1bb)[0x424a4a] goaccess(main+0xc5)[0x415893] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7fac75042830] goaccess(_start+0x29)[0x408639] ======= Memory map: ======== 00400000-004a0000 r-xp 00000000 08:02 20714599 /usr/local/bin/goaccess 0069f000-006a0000 r--p 0009f000 08:02 20714599 /usr/local/bin/goaccess 006a0000-006aa000 rw-p 000a0000 08:02 20714599 /usr/local/bin/goaccess 01d55000-39b03000 rw-p 00000000 00:00 0 [heap] 7fac60000000-7fac60021000 rw-p 00000000 00:00 0 7fac60021000-7fac64000000 ---p 00000000 00:00 0 7fac68000000-7fac68021000 rw-p 00000000 00:00 0 7fac68021000-7fac6c000000 ---p 00000000 00:00 0 7fac6caa4000-7fac6caba000 r-xp 00000000 08:02 10097085 /lib/x86_64-linux-gnu/libgcc_s.so.1 7fac6caba000-7fac6ccb9000 ---p 00016000 08:02 10097085 /lib/x86_64-linux-gnu/libgcc_s.so.1 7fac6ccb9000-7fac6ccba000 rw-p 00015000 08:02 10097085 /lib/x86_64-linux-gnu/libgcc_s.so.1 7fac6ccd8000-7fac6cd19000 rw-s 00000000 08:02 14162485 /home/sithu/goaccesscf/14m7AhArjrdb_metadata.tcb 7fac6cd19000-7fac6cd5a000 rw-s 00000000 08:02 14162484 /home/sithu/goaccesscf/14mhcmGJ7odb_agents.tcb 7fac6cd5a000-7fac6cd9b000 rw-s 00000000 08:02 14162483 /home/sithu/goaccesscf/14mb9LuPhDdb_protocols.tcb 7fac6cd9b000-7fac6cddc000 rw-s 00000000 08:02 14162482 /home/sithu/goaccesscf/14mrDr4Xyxdb_methods.tcb 7fac6cddc000-7fac6ce1d000 
rw-s 00000000 08:02 14162481 /home/sithu/goaccesscf/14mtT4RpE5db_maxts.tcb 7fac6ce1d000-7fac6ce5e000 rw-s 00000000 08:02 14162480 /home/sithu/goaccesscf/14mqxMIqvVdb_cumts.tcb 7fac6ce5e000-7fac6ce9f000 rw-s 00000000 08:02 14162479 /home/sithu/goaccesscf/14mlGy8sWLdb_bw.tcb 7fac6ce9f000-7fac6cee0000 rw-s 00000000 08:02 14162478 /home/sithu/goaccesscf/14mCk7vdCedb_visitors.tcb 7fac6cee0000-7fac6cf21000 rw-s 00000000 08:02 14162477 /home/sithu/goaccesscf/14mqyAd41odb_hits.tcb 7fac6cf21000-7fac6cf62000 rw-s 00000000 08:02 14162476 /home/sithu/goaccesscf/14mhVLf3n8db_root.tcb 7fac6cf62000-7fac6cfa3000 rw-s 00000000 08:02 14162475 /home/sithu/goaccesscf/14mNOoFYC1db_uniqmap.tcb 7fac6cfa3000-7fac6cfe4000 rw-s 00000000 08:02 14162474 /home/sithu/goaccesscf/14mUTvJ85mdb_datamap.tcb 7fac6cfe4000-7fac6d025000 rw-s 00000000 08:02 14162473 /home/sithu/goaccesscf/14mPyraylLdb_rootmap.tcb 7fac6d025000-7fac6d066000 rw-s 00000000 08:02 14162472 /home/sithu/goaccesscf/14mYaEVnatdb_keymap.tcb 7fac6d066000-7fac6d0a7000 rw-s 00000000 08:02 14162471 /home/sithu/goaccesscf/12mSbeojiKdb_metadata.tcb 7fac6d0a7000-7fac6d0e8000 rw-s 00000000 08:02 14162470 /home/sithu/goaccesscf/12mQhADFJ3db_agents.tcb 7fac6d0e8000-7fac6d129000 rw-s 00000000 08:02 14162469 /home/sithu/goaccesscf/12mnZUBmzLdb_protocols.tcb 7fac6d129000-7fac6d16a000 rw-s 00000000 08:02 14162468 /home/sithu/goaccesscf/12mZdwytqZdb_methods.tcb 7fac6d16a000-7fac6d1ab000 rw-s 00000000 08:02 14162467 /home/sithu/goaccesscf/12mya4oETWdb_maxts.tcb 7fac6d1ab000-7fac6d1ec000 rw-s 00000000 08:02 14162466 /home/sithu/goaccesscf/12mPqasdsbdb_cumts.tcb 7fac6d1ec000-7fac6d22d000 rw-s 00000000 08:02 14162465 /home/sithu/goaccesscf/12mfnHauHPdb_bw.tcb 7fac6d22d000-7fac6d26e000 rw-s 00000000 08:02 14162464 /home/sithu/goaccesscf/12mzgWtUBVdb_visitors.tcb 7fac6d26e000-7fac6d2af000 rw-s 00000000 08:02 14162463 /home/sithu/goaccesscf/12mSwPAkmzdb_hits.tcb 7fac6d2af000-7fac6d2f0000 rw-s 00000000 08:02 14162462 
/home/sithu/goaccesscf/12mFNkLlHTdb_root.tcb 7fac6d2f0000-7fac6d331000 rw-s 00000000 08:02 14162461 /home/sithu/goaccesscf/12mX72GlIldb_uniqmap.tcb 7fac6d331000-7fac6d372000 rw-s 00000000 08:02 14162460 /home/sithu/goaccesscf/12mTGFslNMdb_datamap.tcb 7fac6d372000-7fac6d3b3000 rw-s 00000000 08:02 14162459 /home/sithu/goaccesscf/12mKOy1hWmdb_rootmap.tcb 7fac6d3b3000-7fac6d3f4000 rw-s 00000000 08:02 14162458 /home/sithu/goaccesscf/12m8AnaxRWdb_keymap.tcb 7fac6d3f4000-7fac6d435000 rw-s 00000000 08:02 14162457 /home/sithu/goaccesscf/10mJeIHk40db_metadata.tcb 7fac6d435000-7fac6d476000 rw-s 00000000 08:02 14162456 /home/sithu/goaccesscf/10myH8kBlidb_agents.tcb 7fac6d476000-7fac6d4b7000 rw-s 00000000 08:02 14162455 /home/sithu/goaccesscf/10mitnnEKldb_protocols.tcb 7fac6d4b7000-7fac6d4f8000 rw-s 00000000 08:02 14162454 /home/sithu/goaccesscf/10m2ZtMdZEdb_methods.tcb 7fac6d4f8000-7fac6d539000 rw-s 00000000 08:02 14162453 /home/sithu/goaccesscf/10mX8olOvpdb_maxts.tcb 7fac6d539000-7fac6d57a000 rw-s 00000000 08:02 14162452 /home/sithu/goaccesscf/10mLWdYaUrdb_cumts.tcb 7fac6d57a000-7fac6d5bb000 rw-s 00000000 08:02 14162451 /home/sithu/goaccesscf/10m7doXWXPdb_bw.tcb 7fac6d5bb000-7fac6d5fc000 rw-s 00000000 08:02 14162450 /home/sithu/goaccesscf/10mKgxR5uLdb_visitors.tcb 7fac6d5fc000-7fac6d63d000 rw-s 00000000 08:02 14162449 /home/sithu/goaccesscf/10mqI73gt6db_hits.tcb 7fac6d63d000-7fac6d67e000 rw-s 00000000 08:02 14162448 /home/sithu/goaccesscf/10mdeHtPfZdb_root.tcb 7fac6d67e000-7fac6d6bf000 rw-s 00000000 08:02 14162447 /home/sithu/goaccesscf/10mSIBRRYodb_uniqmap.tcb 7fac6d6bf000-7fac6d700000 rw-s 00000000 08:02 14162446 /home/sithu/goaccesscf/10m7OYW2JDdb_datamap.tcb 7fac6d700000-7fac6d741000 rw-s 00000000 08:02 14162445 /home/sithu/goaccesscf/10mFAm3wNtdb_rootmap.tcb 7fac6d741000-7fac6d782000 rw-s 00000000 08:02 14162444 /home/sithu/goaccesscf/10miBywAsadb_keymap.tcb 7fac6d782000-7fac6d7c3000 rw-s 00000000 08:02 14162443 /home/sithu/goaccesscf/7m19pzlhidb_metadata.tcb 
7fac6d7c3000-7fac6d804000 rw-s 00000000 08:02 14162442 /home/sithu/goaccesscf/7mPfNJQZ0db_agents.tcb 7fac6d804000-7fac6d845000 rw-s 00000000 08:02 14162441 /home/sithu/goaccesscf/7mnYqsC1vdb_protocols.tcb 7fac6d845000-7fac6d886000 rw-s 00000000 08:02 14162440 /home/sithu/goaccesscf/7me1WG57edb_methods.tcb 7fac6d886000-7fac6d8c7000 rw-s 00000000 08:02 14162439 /home/sithu/goaccesscf/7mycULWmidb_maxts.tcb 7fac6d8c7000-7fac6d908000 rw-s 00000000 08:02 14162438 /home/sithu/goaccesscf/7mUDchacpdb_cumts.tcb 7fac6d908000-7fac6d949000 rw-s 00000000 08:02 14162437 /home/sithu/goaccesscf/7m4EB5eejdb_bw.tcb 7fac6d949000-7fac6d98a000 rw-s 00000000 08:02 14162436 /home/sithu/goaccesscf/7ms6dAiRedb_visitors.tcb 7fac6d98a000-7fac6d9cb000 rw-s 00000000 08:02 14162435 /home/sithu/goaccesscf/7mdMswhHGdb_hits.tcb 7fac6d9cb000-7fac6da0c000 rw-s 00000000 08:02 14162434 /home/sithu/goaccesscf/7mpx0ap2udb_root.tcb 7fac6da0c000-7fac6da4d000 rw-s 00000000 08:02 14162433 /home/sithu/goaccesscf/7m1ADeFrUdb_uniqmap.tcb 7fac6da4d000-7fac6da8e000 rw-s 00000000 08:02 14162432 /home/sithu/goaccesscf/7m8cFFOPpdb_datamap.tcb 7fac6da8e000-7fac6dacf000 rw-s 00000000 08:02 14162431 /home/sithu/goaccesscf/7mlxf9apzdb_rootmap.tcb 7fac6dacf000-7fac6db10000 rw-s 00000000 08:02 14162430 /home/sithu/goaccesscf/7mLS4mQJZdb_keymap.tcb 7fac6db10000-7fac6db51000 rw-s 00000000 08:02 14162429 /home/sithu/goaccesscf/6mf7OIJ85db_metadata.tcb 7fac6db51000-7fac6db92000 rw-s 00000000 08:02 14162428 /home/sithu/goaccesscf/6mxB9JdNedb_agents.tcb 7fac6db92000-7fac6dbd3000 rw-s 00000000 08:02 14162427 /home/sithu/goaccesscf/6mQDcq0N8db_protocols.tcb 7fac6dbd3000-7fac6dc14000 rw-s 00000000 08:02 14162426 /home/sithu/goaccesscf/6mAYFFxQhdb_methods.tcb 7fac6dc14000-7fac6dc55000 rw-s 00000000 08:02 14162425 /home/sithu/goaccesscf/6mtEkp4J1db_maxts.tcb 7fac6dc55000-7fac6dc96000 rw-s 00000000 08:02 14162424 /home/sithu/goaccesscf/6mcBEw2jCdb_cumts.tcb 7fac6dc96000-7fac6dcd7000 rw-s 00000000 08:02 14162423 
/home/sithu/goaccesscf/6mBnDIzQ3db_bw.tcb 7fac6dcd7000-7fac6dd18000 rw-s 00000000 08:02 14162422 /home/sithu/goaccesscf/6m5zbEjPUdb_visitors.tcb 7fac6dd18000-7fac6dd59000 rw-s 00000000 08:02 14162421 /home/sithu/goaccesscf/6mVppCyeGdb_hits.tcb 7fac6dd59000-7fac6dd9a000 rw-s 00000000 08:02 14162420 /home/sithu/goaccesscf/6mkqH5tlKdb_root.tcb 7fac6dd9a000-7fac6dddb000 rw-s 00000000 08:02 14162419 /home/sithu/goaccesscf/6m8eclFNCdb_uniqmap.tcb 7fac6dddb000-7fac6de1c000 rw-s 00000000 08:02 14162418 /home/sithu/goaccesscf/6mzMGiiAJdb_datamap.tcb 7fac6de1c000-7fac6de5d000 rw-s 00000000 08:02 14162417 /home/sithu/goaccesscf/6mI9R6ykvdb_rootmap.tcb 7fac6de5d000-7fac6de9e000 rw-s 00000000 08:02 14162416 /home/sithu/goaccesscf/6mN5FFs4Gdb_keymap.tcb 7fac6de9e000-7fac6dedf000 rw-s 00000000 08:02 14162415 /home/sithu/goaccesscf/5mfrDhGD5db_metadata.tcb 7fac6dedf000-7fac6df20000 rw-s 00000000 08:02 14162414 /home/sithu/goaccesscf/5mLxWqQGtdb_agents.tcb 7fac6df20000-7fac6df61000 rw-s 00000000 08:02 14162413 /home/sithu/goaccesscf/5mnrvr0sbdb_protocols.tcb 7fac6df61000-7fac6dfa2000 rw-s 00000000 08:02 14162412 /home/sithu/goaccesscf/5mTpr1g4ddb_methods.tcb 7fac6dfa2000-7fac6dfe3000 rw-s 00000000 08:02 14162411 /home/sithu/goaccesscf/5m2f2Q9uAdb_maxts.tcb 7fac6dfe3000-7fac6e024000 rw-s 00000000 08:02 14162410 /home/sithu/goaccesscf/5mHlM5qN9db_cumts.tcb 7fac6e024000-7fac6e065000 rw-s 00000000 08:02 14162409 /home/sithu/goaccesscf/5m3zZMN5Ldb_bw.tcb 7fac6e065000-7fac6e0a6000 rw-s 00000000 08:02 14162408 /home/sithu/goaccesscf/5m8TFb7pkdb_visitors.tcb 7fac6e0a6000-7fac6e0e7000 rw-s 00000000 08:02 14162407 /home/sithu/goaccesscf/5mOUsMMXQdb_hits.tcb 7fac6e0e7000-7fac6e128000 rw-s 00000000 08:02 14162406 /home/sithu/goaccesscf/5mo736DrSdb_root.tcb 7fac6e128000-7fac6e169000 rw-s 00000000 08:02 14162405 /home/sithu/goaccesscf/5mIe81Uf1db_uniqmap.tcb 7fac6e169000-7fac6e1aa000 rw-s 00000000 08:02 14162404 /home/sithu/goaccesscf/5m5fL8WcFdb_datamap.tcb 7fac6e1aa000-7fac6e1eb000 rw-s 
00000000 08:02 14162403 /home/sithu/goaccesscf/5mYSv6nYPdb_rootmap.tcb 7fac6e1eb000-7fac6e22c000 rw-s 00000000 08:02 14162402 /home/sithu/goaccesscf/5mIyqOcsBdb_keymap.tcb 7fac6e22c000-7fac6e26d000 rw-s 00000000 08:02 14162401 /home/sithu/goaccesscf/4mkI92wS4db_metadata.tcb 7fac6e26d000-7fac6e2ae000 rw-s 00000000 08:02 14162400 /home/sithu/goaccesscf/4m5JsyM4tdb_agents.tcb 7fac6e2ae000-7fac6e2ef000 rw-s 00000000 08:02 14162399 /home/sithu/goaccesscf/4m7vvLKjWdb_protocols.tcb 7fac6e2ef000-7fac6e330000 rw-s 00000000 08:02 14162398 /home/sithu/goaccesscf/4m5E4POsTdb_methods.tcb 7fac6e330000-7fac6e371000 rw-s 00000000 08:02 14162397 /home/sithu/goaccesscf/4mHOL4eGndb_maxts.tcb 7fac6e371000-7fac6e3b2000 rw-s 00000000 08:02 14162396 /home/sithu/goaccesscf/4mv3BVyGHdb_cumts.tcb 7fac6e3b2000-7fac6e3f3000 rw-s 00000000 08:02 14162395 /home/sithu/goaccesscf/4mDfNdlswdb_bw.tcb 7fac6e3f3000-7fac6e434000 rw-s 00000000 08:02 14162394 /home/sithu/goaccesscf/4mK9n6l5Cdb_visitors.tcb 7fac6e434000-7fac6e475000 rw-s 00000000 08:02 14162393 /home/sithu/goaccesscf/4mnqXjT0Jdb_hits.tcb 7fac6e475000-7fac6e4b6000 rw-s 00000000 08:02 14162392 /home/sithu/goaccesscf/4mrv4Mba5db_root.tcb 7fac6e4b6000-7fac6e4f7000 rw-s 00000000 08:02 14162391 /home/sithu/goaccesscf/4mA6EiaD7db_uniqmap.tcb 7fac6e4f7000-7fac6e538000 rw-s 00000000 08:02 14162390 /home/sithu/goaccesscf/4mocHwkcddb_datamap.tcb 7fac6e538000-7fac6e579000 rw-s 00000000 08:02 14162389 /home/sithu/goaccesscf/4mYs3qRbGdb_rootmap.tcb 7fac6e579000-7fac6e5ba000 rw-s 00000000 08:02 14162388 /home/sithu/goaccesscf/4mnF5jeiTdb_keymap.tcb 7fac6e5ba000-7fac6e5fb000 rw-s 00000000 08:02 14162387 /home/sithu/goaccesscf/3mK40PhUXdb_metadata.tcb 7fac6e5fb000-7fac6e63c000 rw-s 00000000 08:02 14162386 /home/sithu/goaccesscf/3ma8wHgWrdb_agents.tcb 7fac6e63c000-7fac6e67d000 rw-s 00000000 08:02 14162385 /home/sithu/goaccesscf/3msq7gx33db_protocols.tcb 7fac6e67d000-7fac6e6be000 rw-s 00000000 08:02 14162384 /home/sithu/goaccesscf/3mYydAMccdb_methods.tcb 
7fac6e6be000-7fac6e6ff000 rw-s 00000000 08:02 14162383 /home/sithu/goaccesscf/3m6cU86Nadb_maxts.tcb 7fac6e6ff000-7fac6e740000 rw-s 00000000 08:02 14162382 /home/sithu/goaccesscf/3mzapJuazdb_cumts.tcb 7fac6e740000-7fac6e781000 rw-s 00000000 08:02 14162381 /home/sithu/goaccesscf/3mQ8VMF7tdb_bw.tcb 7fac6e781000-7fac6e7c2000 rw-s 00000000 08:02 14162380 /home/sithu/goaccesscf/3mCdZDIeVdb_visitors.tcb 7fac6e7c2000-7fac6e803000 rw-s 00000000 08:02 14162379 /home/sithu/goaccesscf/3m3TT2UN4db_hits.tcb 7fac6e803000-7fac6e844000 rw-s 00000000 08:02 14162378 /home/sithu/goaccesscf/3mcmLQC4mdb_root.tcb 7fac6e844000-7fac6e885000 rw-s 00000000 08:02 14162377 /home/sithu/goaccesscf/3mYwbJW3Ydb_uniqmap.tcb 7fac6e885000-7fac6e8c6000 rw-s 00000000 08:02 14162376 /home/sithu/goaccesscf/3m1ufk8U2db_datamap.tcb 7fac6e8c6000-7fac6e907000 rw-s 00000000 08:02 14162375 /home/sithu/goaccesscf/3mZ0W2KqWdb_rootmap.tcb 7fac6e907000-7fac6e948000 rw-s 00000000 08:02 14162374 /home/sithu/goaccesscf/3mGhjytRwdb_keymap.tcb 7fac6e948000-7fac6e989000 rw-s 00000000 08:02 14162373 /home/sithu/goaccesscf/2mLqTffjPdb_metadata.tcb 7fac6e989000-7fac6e9ca000 rw-s 00000000 08:02 14162372 /home/sithu/goaccesscf/2mlGEIPEbdb_agents.tcb 7fac6e9ca000-7fac6ea0b000 rw-s 00000000 08:02 14162362 /home/sithu/goaccesscf/2m5MvWg4jdb_protocols.tcb 7fac6ea0b000-7fac6ea4c000 rw-s 00000000 08:02 14162347 /home/sithu/goaccesscf/2mUeA0x9qdb_methods.tcb 7fac6ea4c000-7fac6ea8d000 rw-s 00000000 08:02 14162333 /home/sithu/goaccesscf/2mwRsAqYudb_maxts.tcb 7fac6ea8d000-7fac6eace000 rw-s 00000000 08:02 14162329 /home/sithu/goaccesscf/2mpb0tXNKdb_cumts.tcb 7fac6eace000-7fac6eb0f000 rw-s 00000000 08:02 14162326 /home/sithu/goaccesscf/2mTuBo4Bndb_bw.tcb 7fac6eb0f000-7fac6eb50000 rw-s 00000000 08:02 14161337 /home/sithu/goaccesscf/2m7lAsxD5db_visitors.tcb 7fac6eb50000-7fac6eb91000 rw-s 00000000 08:02 14161335 /home/sithu/goaccesscf/2m6ByUug6db_hits.tcb 7fac6eb91000-7fac6ebd2000 rw-s 00000000 08:02 14161334 
/home/sithu/goaccesscf/2m6OPgy4Idb_root.tcb 7fac6ebd2000-7fac6ec13000 rw-s 00000000 08:02 14161333 /home/sithu/goaccesscf/2mxA2ZkAEdb_uniqmap.tcb 7fac6ec13000-7fac6ec54000 rw-s 00000000 08:02 14161332 /home/sithu/goaccesscf/2mke2FmTvdb_datamap.tcb 7fac6ec54000-7fac6ec95000 rw-s 00000000 08:02 14161330 /home/sithu/goaccesscf/2mQVKcD5wdb_rootmap.tcb 7fac6ec95000-7fac6ecd6000 rw-s 00000000 08:02 14161329 /home/sithu/goaccesscf/2maMgtxxQdb_keymap.tcb 7fac6ecd6000-7fac6ed17000 rw-s 00000000 08:02 14161328 /home/sithu/goaccesscf/1msKzFOeddb_metadata.tcb 7fac6ed17000-7fac6ed58000 rw-s 00000000 08:02 14161327 /home/sithu/goaccesscf/1ml8RQdIzdb_agents.tcb 7fac6ed58000-7fac6ed99000 rw-s 00000000 08:02 14161326 /home/sithu/goaccesscf/1moInjJ8ydb_protocols.tcb 7fac6ed99000-7fac6edda000 rw-s 00000000 08:02 14161325 /home/sithu/goaccesscf/1mlNimiqUdb_methods.tcb 7fac6edda000-7fac6ee1b000 rw-s 00000000 08:02 14161323 /home/sithu/goaccesscf/1mf5GxoA5db_maxts.tcb 7fac6ee1b000-7fac6ee5c000 rw-s 00000000 08:02 14161322 /home/sithu/goaccesscf/1mx5TJh22db_cumts.tcb 7fac6ee5c000-7fac6ee9d000 rw-s 00000000 08:02 14161321 /home/sithu/goaccesscf/1mX0NfE1idb_bw.tcb 7fac6ee9d000-7fac6eede000 rw-s 00000000 08:02 14161320 /home/sithu/goaccesscf/1m3v7z8ktdb_visitors.tcb 7fac6eede000-7fac6ef1f000 rw-s 00000000 08:02 14161319 /home/sithu/goaccesscf/1mkiWyXefdb_hits.tcb 7fac6ef1f000-7fac6ef60000 rw-s 00000000 08:02 14161318 /home/sithu/goaccesscf/1mecalY3Edb_root.tcb 7fac6ef60000-7fac6efa1000 rw-s 00000000 08:02 14161317 /home/sithu/goaccesscf/1mkDe3T5Bdb_uniqmap.tcb 7fac6efa1000-7fac6efe2000 rw-s 00000000 08:02 14161316 /home/sithu/goaccesscf/1mwCfWRGjdb_datamap.tcb 7fac6efe2000-7fac6f023000 rw-s 00000000 08:02 14161315 /home/sithu/goaccesscf/1mgHiH5jUdb_rootmap.tcb 7fac6f023000-7fac6f064000 rw-s 00000000 08:02 14161314 /home/sithu/goaccesscf/1m7MTklhidb_keymap.tcb 7fac6f064000-7fac6f0a5000 rw-s 00000000 08:02 14161313 /home/sithu/goaccesscf/0mLfRxl8Bdb_metadata.tcb 7fac6f0a5000-7fac6f0e6000 rw-s 
00000000 08:02 14161312 /home/sithu/goaccesscf/0mretpvZXdb_agents.tcb 7fac6f0e6000-7fac6f127000 rw-s 00000000 08:02 14161311 /home/sithu/goaccesscf/0mrq0drjkdb_protocols.tcb 7fac6f127000-7fac6f168000 rw-s 00000000 08:02 14161310 /home/sithu/goaccesscf/0mfzqXWq2db_methods.tcb 7fac6f168000-7fac6f1a9000 rw-s 00000000 08:02 14161309 /home/sithu/goaccesscf/0mO7j4OGhdb_maxts.tcb 7fac6f1a9000-7fac6f1ea000 rw-s 00000000 08:02 14161308 /home/sithu/goaccesscf/0m0gwxiiWdb_cumts.tcb 7fac6f1ea000-7fac6f22b000 rw-s 00000000 08:02 14161307 /home/sithu/goaccesscf/0mO0j90Xadb_bw.tcb 7fac6f22b000-7fac6f26c000 rw-s 00000000 08:02 14161306 /home/sithu/goaccesscf/0mKp07u20db_visitors.tcb 7fac6f26c000-7fac6f2ad000 rw-s 00000000 08:02 14161305 /home/sithu/goaccesscf/0miHmfsV8db_hits.tcb 7fac6f2ad000-7fac6f2ee000 rw-s 00000000 08:02 14161304 /home/sithu/goaccesscf/0mz2NoxPpdb_root.tcb 7fac6f2ee000-7fac6f32f000 rw-s 00000000 08:02 14161303 /home/sithu/goaccesscf/0mk0E1Zlodb_uniqmap.tcb 7fac6f32f000-7fac6f370000 rw-s 00000000 08:02 14161302 /home/sithu/goaccesscf/0mo72Tk0ldb_datamap.tcb 7fac6f370000-7fac6f371000 ---p 00000000 00:00 0 7fac6f371000-7fac6fb71000 rw-p 00000000 00:00 0 7fac6fb71000-7fac7384a000 r--s 00000000 08:02 20857540 /usr/share/GeoIP/GeoLite2-City.mmdb 7fac7384a000-7fac73b69000 r--p 00000000 08:02 20332563 /usr/lib/locale/locale-archive 7fac73b69000-7fac73b6c000 r-xp 00000000 08:02 10099021 /lib/x86_64-linux-gnu/libdl-2.23.so 7fac73b6c000-7fac73d6b000 ---p 00003000 08:02 10099021 /lib/x86_64-linux-gnu/libdl-2.23.so 7fac73d6b000-7fac73d6c000 r--p 00002000 08:02 10099021 /lib/x86_64-linux-gnu/libdl-2.23.so 7fac73d6c000-7fac73d6d000 rw-p 00003000 08:02 10099021 /lib/x86_64-linux-gnu/libdl-2.23.so 7fac73d6d000-7fac73e75000 r-xp 00000000 08:02 10099009 /lib/x86_64-linux-gnu/libm-2.23.so 7fac73e75000-7fac74074000 ---p 00108000 08:02 10099009 /lib/x86_64-linux-gnu/libm-2.23.so 7fac74074000-7fac74075000 r--p 00107000 08:02 10099009 /lib/x86_64-linux-gnu/libm-2.23.so 
7fac74075000-7fac74076000 rw-p 00108000 08:02 10099009 /lib/x86_64-linux-gnu/libm-2.23.so 7fac74076000-7fac7408f000 r-xp 00000000 08:02 10097020 /lib/x86_64-linux-gnu/libz.so.1.2.8 7fac7408f000-7fac7428e000 ---p 00019000 08:02 10097020 /lib/x86_64-linux-gnu/libz.so.1.2.8 7fac7428e000-7fac7428f000 r--p 00018000 08:02 10097020 /lib/x86_64-linux-gnu/libz.so.1.2.8 7fac7428f000-7fac74290000 rw-p 00019000 08:02 10097020 /lib/x86_64-linux-gnu/libz.so.1.2.8 7fac74290000-7fac7429f000 r-xp 00000000 08:02 10097046 /lib/x86_64-linux-gnu/libbz2.so.1.0.4 7fac7429f000-7fac7449e000 ---p 0000f000 08:02 10097046 /lib/x86_64-linux-gnu/libbz2.so.1.0.4 7fac7449e000-7fac7449f000 r--p 0000e000 08:02 10097046 /lib/x86_64-linux-gnu/libbz2.so.1.0.4 7fac7449f000-7fac744a0000 rw-p 0000f000 08:02 10097046 /lib/x86_64-linux-gnu/libbz2.so.1.0.4 7fac744a0000-7fac744b8000 r-xp 00000000 08:02 10099015 /lib/x86_64-linux-gnu/libpthread-2.23.so 7fac744b8000-7fac746b7000 ---p 00018000 08:02 10099015 /lib/x86_64-linux-gnu/libpthread-2.23.so 7fac746b7000-7fac746b8000 r--p 00017000 08:02 10099015 /lib/x86_64-linux-gnu/libpthread-2.23.so 7fac746b8000-7fac746b9000 rw-p 00018000 08:02 10099015 /lib/x86_64-linux-gnu/libpthread-2.23.so 7fac746b9000-7fac746bd000 rw-p 00000000 00:00 0 7fac746bd000-7fac746c2000 r-xp 00000000 08:02 20332556 /usr/lib/x86_64-linux-gnu/libmaxminddb.so.0.0.7 7fac746c2000-7fac748c1000 ---p 00005000 08:02 20332556 /usr/lib/x86_64-linux-gnu/libmaxminddb.so.0.0.7 7fac748c1000-7fac748c2000 r--p 00004000 08:02 20332556 /usr/lib/x86_64-linux-gnu/libmaxminddb.so.0.0.7 7fac748c2000-7fac748c3000 rw-p 00005000 08:02 20332556 /usr/lib/x86_64-linux-gnu/libmaxminddb.so.0.0.7 7fac748c3000-7fac748e8000 r-xp 00000000 08:02 10097219 /lib/x86_64-linux-gnu/libtinfo.so.5.9 7fac748e8000-7fac74ae7000 ---p 00025000 08:02 10097219 /lib/x86_64-linux-gnu/libtinfo.so.5.9 7fac74ae7000-7fac74aeb000 r--p 00024000 08:02 10097219 /lib/x86_64-linux-gnu/libtinfo.so.5.9 7fac74aeb000-7fac74aec000 rw-p 00028000 08:02 
10097219 /lib/x86_64-linux-gnu/libtinfo.so.5.9 7fac74aec000-7fac74b19000 r-xp 00000000 08:02 10097129 /lib/x86_64-linux-gnu/libncursesw.so.5.9
Aborted (core dumped)
```
process
1
15,407
19,597,959,210
IssuesEvent
2022-01-05 20:22:38
googleapis/google-cloud-dotnet
https://api.github.com/repos/googleapis/google-cloud-dotnet
opened
Your .repo-metadata.json files have a problem 🤒
type: process repo-metadata: lint
You have a problem with your .repo-metadata.json files:

Result of scan 📈:

```
release_level must be equal to one of the allowed values in apis/Google.Analytics.Admin.V1Alpha/.repo-metadata.json
api_shortname field missing from apis/Google.Analytics.Admin.V1Alpha/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Analytics.Data.V1Beta/.repo-metadata.json
api_shortname field missing from apis/Google.Analytics.Data.V1Beta/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Apps.Script.Type/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Area120.Tables.V1Alpha1/.repo-metadata.json
api_shortname field missing from apis/Google.Area120.Tables.V1Alpha1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.AIPlatform.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.AIPlatform.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.AccessApproval.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.AccessApproval.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.ApiGateway.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.ApiGateway.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.ApigeeConnect.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.ApigeeConnect.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.AppEngine.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.AppEngine.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.ArtifactRegistry.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.ArtifactRegistry.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.ArtifactRegistry.V1Beta2/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.ArtifactRegistry.V1Beta2/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Asset.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Asset.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.AssuredWorkloads.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.AssuredWorkloads.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.AssuredWorkloads.V1Beta1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.AssuredWorkloads.V1Beta1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Audit/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.AutoML.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.AutoML.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.BigQuery.Connection.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.BigQuery.Connection.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.BigQuery.DataTransfer.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.BigQuery.DataTransfer.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.BigQuery.Reservation.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.BigQuery.Reservation.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.BigQuery.Storage.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.BigQuery.Storage.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.BigQuery.V2/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.BigQuery.V2/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Bigtable.Admin.V2/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Bigtable.Admin.V2/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Bigtable.Common.V2/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Bigtable.V2/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Bigtable.V2/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Billing.Budgets.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Billing.Budgets.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Billing.Budgets.V1Beta1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Billing.Budgets.V1Beta1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Billing.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Billing.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.BinaryAuthorization.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.BinaryAuthorization.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.BinaryAuthorization.V1Beta1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.BinaryAuthorization.V1Beta1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Channel.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Channel.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.CloudBuild.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.CloudBuild.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.CloudDms.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.CloudDms.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Common/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Compute.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Compute.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.ContactCenterInsights.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.ContactCenterInsights.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Container.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Container.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.DataCatalog.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.DataCatalog.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.DataFusion.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.DataFusion.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.DataLabeling.V1Beta1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.DataLabeling.V1Beta1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.DataQnA.V1Alpha/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.DataQnA.V1Alpha/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Dataflow.V1Beta3/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Dataflow.V1Beta3/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Dataproc.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Dataproc.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Datastore.Admin.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Datastore.Admin.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Datastore.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Datastore.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Datastream.V1Alpha1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Datastream.V1Alpha1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Debugger.V2/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Debugger.V2/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Deploy.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Deploy.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.DevTools.Common/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.DevTools.ContainerAnalysis.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.DevTools.ContainerAnalysis.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Diagnostics.AspNetCore/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Diagnostics.AspNetCore3/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Diagnostics.Common/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Dialogflow.Cx.V3/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Dialogflow.Cx.V3/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Dialogflow.V2/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Dialogflow.V2/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Dlp.V2/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Dlp.V2/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.DocumentAI.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.DocumentAI.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.DocumentAI.V1Beta3/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.DocumentAI.V1Beta3/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Domains.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Domains.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Domains.V1Beta1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Domains.V1Beta1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.ErrorReporting.V1Beta1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.ErrorReporting.V1Beta1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.EssentialContacts.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.EssentialContacts.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Eventarc.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Eventarc.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Filestore.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Filestore.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Firestore.Admin.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Firestore.Admin.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Firestore.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Firestore.V1/.repo-metadata.json
library_type must be equal to one of the allowed values in apis/Google.Cloud.Firestore/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Firestore/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Functions.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Functions.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.GSuiteAddOns.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.GSuiteAddOns.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Gaming.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Gaming.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Gaming.V1Beta/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Gaming.V1Beta/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.GkeConnect.Gateway.V1Beta1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.GkeConnect.Gateway.V1Beta1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.GkeHub.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.GkeHub.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.GkeHub.V1Beta1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.GkeHub.V1Beta1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Iam.Admin.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Iam.Admin.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Iam.Credentials.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Iam.Credentials.V1/.repo-metadata.json
library_type must be equal to one of the allowed values in apis/Google.Cloud.Iam.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Iam.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Iap.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Iap.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Ids.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Ids.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Iot.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Iot.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Kms.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Kms.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.Language.V1/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.Language.V1/.repo-metadata.json
release_level must be equal to one of the allowed values in apis/Google.Cloud.LifeSciences.V2Beta/.repo-metadata.json
api_shortname field missing from apis/Google.Cloud.LifeSciences.V2Beta/.repo-metadata.json
library_type must
be equal to one of the allowed values in apis/Google.Cloud.Location/.repo-metadata.json release_level must be equal to one of the allowed values in apis/Google.Cloud.Location/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Logging.Log4Net/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Logging.NLog/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Logging.Type/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Logging.V2/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Logging.V2/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.ManagedIdentities.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.ManagedIdentities.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.MediaTranslation.V1Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.MediaTranslation.V1Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Memcache.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Memcache.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Memcache.V1Beta2/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Memcache.V1Beta2/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Metastore.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Metastore.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Metastore.V1Alpha/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Metastore.V1Alpha/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in 
apis/Google.Cloud.Metastore.V1Beta/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Metastore.V1Beta/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Monitoring.V3/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Monitoring.V3/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.NetworkConnectivity.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.NetworkConnectivity.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.NetworkConnectivity.V1Alpha1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.NetworkConnectivity.V1Alpha1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.NetworkManagement.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.NetworkManagement.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.NetworkSecurity.V1Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.NetworkSecurity.V1Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Notebooks.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Notebooks.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Notebooks.V1Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Notebooks.V1Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Orchestration.Airflow.Service.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Orchestration.Airflow.Service.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.OrgPolicy.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed 
values in apis/Google.Cloud.OrgPolicy.V2/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.OrgPolicy.V2/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.OsConfig.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.OsConfig.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.OsConfig.V1Alpha/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.OsConfig.V1Alpha/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.OsLogin.Common/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.OsLogin.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.OsLogin.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.OsLogin.V1Beta/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.OsLogin.V1Beta/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.PhishingProtection.V1Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.PhishingProtection.V1Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.PolicyTroubleshooter.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.PolicyTroubleshooter.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.PrivateCatalog.V1Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.PrivateCatalog.V1Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Profiler.V2/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Profiler.V2/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.PubSub.V1/.repo-metadata.json 
api_shortname field missing from apis/Google.Cloud.PubSub.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.RecaptchaEnterprise.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.RecaptchaEnterprise.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.RecaptchaEnterprise.V1Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.RecaptchaEnterprise.V1Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.RecommendationEngine.V1Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.RecommendationEngine.V1Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Recommender.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Recommender.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Redis.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Redis.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Redis.V1Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Redis.V1Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.ResourceManager.V3/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.ResourceManager.V3/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.ResourceSettings.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.ResourceSettings.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Retail.V2/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Retail.V2/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in 
apis/Google.Cloud.Scheduler.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Scheduler.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.SecretManager.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.SecretManager.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.SecretManager.V1Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.SecretManager.V1Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Security.PrivateCA.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Security.PrivateCA.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Security.PrivateCA.V1Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Security.PrivateCA.V1Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.SecurityCenter.Settings.V1Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.SecurityCenter.Settings.V1Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.SecurityCenter.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.SecurityCenter.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.SecurityCenter.V1P1Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.SecurityCenter.V1P1Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.ServiceControl.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.ServiceControl.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.ServiceDirectory.V1/.repo-metadata.json api_shortname field missing from 
apis/Google.Cloud.ServiceDirectory.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.ServiceDirectory.V1Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.ServiceDirectory.V1Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.ServiceManagement.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.ServiceManagement.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.ServiceUsage.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.ServiceUsage.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Shell.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Shell.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Spanner.Admin.Database.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Spanner.Admin.Database.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Spanner.Admin.Instance.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Spanner.Admin.Instance.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Spanner.Common.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Spanner.Data/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Spanner.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Spanner.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Speech.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Speech.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in 
apis/Google.Cloud.Speech.V1P1Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Speech.V1P1Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Storage.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Storage.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.StorageTransfer.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.StorageTransfer.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Talent.V4/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Talent.V4/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Talent.V4Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Talent.V4Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Tasks.V2/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Tasks.V2/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Tasks.V2Beta3/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Tasks.V2Beta3/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.TextToSpeech.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.TextToSpeech.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.TextToSpeech.V1Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.TextToSpeech.V1Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Tpu.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Tpu.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in 
apis/Google.Cloud.Trace.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Trace.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Trace.V2/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Trace.V2/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Translate.V3/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Translate.V3/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Translation.V2/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Translation.V2/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.VMMigration.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.VMMigration.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Video.Transcoder.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Video.Transcoder.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Video.Transcoder.V1Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Video.Transcoder.V1Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.VideoIntelligence.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.VideoIntelligence.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Vision.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Vision.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.VpcAccess.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.VpcAccess.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in 
apis/Google.Cloud.WebRisk.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.WebRisk.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.WebRisk.V1Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.WebRisk.V1Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.WebSecurityScanner.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.WebSecurityScanner.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Workflows.Common.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Workflows.Common.V1Beta/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Workflows.Executions.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Workflows.Executions.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Workflows.Executions.V1Beta/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Workflows.Executions.V1Beta/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Workflows.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Workflows.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Workflows.V1Beta/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Workflows.V1Beta/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Identity.AccessContextManager.Type/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Identity.AccessContextManager.V1/.repo-metadata.jsonlibrary_type must be equal to one of the allowed values in apis/Google.LongRunning/.repo-metadata.json release_level must be equal to 
one of the allowed values in apis/Google.LongRunning/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Grafeas.V1/.repo-metadata.json api_shortname field missing from apis/Grafeas.V1/.repo-metadata.json ``` ☝️ Once you correct these problems, you can close this issue. Reach out to **go/github-automation** if you have any questions.
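The scanner's two most frequent complaints can be reproduced with a small sketch. The allowed-value set and field names below are illustrative assumptions based only on the messages in the scan output, not the authoritative schema used by the automation:

```python
# Minimal sketch of the two per-file checks seen in the scan output.
# ALLOWED_RELEASE_LEVELS is an assumption for illustration; the real
# allowed set is defined by the repo-automation schema.
ALLOWED_RELEASE_LEVELS = {"stable", "preview"}


def scan_metadata(metadata: dict, path: str) -> list:
    """Return the scan-style problem messages for one .repo-metadata.json."""
    problems = []
    if metadata.get("release_level") not in ALLOWED_RELEASE_LEVELS:
        problems.append(
            "release_level must be equal to one of the allowed values in " + path
        )
    if "api_shortname" not in metadata:
        problems.append("api_shortname field missing from " + path)
    return problems


# A file with an out-of-schema release_level and no api_shortname
# triggers both messages, matching the pairs seen throughout the log.
example = {"release_level": "ga"}
for problem in scan_metadata(example, "apis/Example.V1/.repo-metadata.json"):
    print(problem)
```

Under these assumptions, fixing a flagged file means setting `release_level` to a value the schema accepts and adding an `api_shortname` entry; a file that does both yields an empty problem list.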
be equal to one of the allowed values in apis/Google.Cloud.Location/.repo-metadata.json release_level must be equal to one of the allowed values in apis/Google.Cloud.Location/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Logging.Log4Net/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Logging.NLog/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Logging.Type/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Logging.V2/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Logging.V2/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.ManagedIdentities.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.ManagedIdentities.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.MediaTranslation.V1Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.MediaTranslation.V1Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Memcache.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Memcache.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Memcache.V1Beta2/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Memcache.V1Beta2/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Metastore.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Metastore.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Metastore.V1Alpha/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Metastore.V1Alpha/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in 
apis/Google.Cloud.Metastore.V1Beta/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Metastore.V1Beta/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Monitoring.V3/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Monitoring.V3/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.NetworkConnectivity.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.NetworkConnectivity.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.NetworkConnectivity.V1Alpha1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.NetworkConnectivity.V1Alpha1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.NetworkManagement.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.NetworkManagement.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.NetworkSecurity.V1Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.NetworkSecurity.V1Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Notebooks.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Notebooks.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Notebooks.V1Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Notebooks.V1Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Orchestration.Airflow.Service.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Orchestration.Airflow.Service.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.OrgPolicy.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed 
values in apis/Google.Cloud.OrgPolicy.V2/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.OrgPolicy.V2/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.OsConfig.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.OsConfig.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.OsConfig.V1Alpha/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.OsConfig.V1Alpha/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.OsLogin.Common/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.OsLogin.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.OsLogin.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.OsLogin.V1Beta/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.OsLogin.V1Beta/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.PhishingProtection.V1Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.PhishingProtection.V1Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.PolicyTroubleshooter.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.PolicyTroubleshooter.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.PrivateCatalog.V1Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.PrivateCatalog.V1Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Profiler.V2/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Profiler.V2/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.PubSub.V1/.repo-metadata.json 
api_shortname field missing from apis/Google.Cloud.PubSub.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.RecaptchaEnterprise.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.RecaptchaEnterprise.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.RecaptchaEnterprise.V1Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.RecaptchaEnterprise.V1Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.RecommendationEngine.V1Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.RecommendationEngine.V1Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Recommender.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Recommender.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Redis.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Redis.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Redis.V1Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Redis.V1Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.ResourceManager.V3/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.ResourceManager.V3/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.ResourceSettings.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.ResourceSettings.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Retail.V2/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Retail.V2/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in 
apis/Google.Cloud.Scheduler.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Scheduler.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.SecretManager.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.SecretManager.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.SecretManager.V1Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.SecretManager.V1Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Security.PrivateCA.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Security.PrivateCA.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Security.PrivateCA.V1Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Security.PrivateCA.V1Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.SecurityCenter.Settings.V1Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.SecurityCenter.Settings.V1Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.SecurityCenter.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.SecurityCenter.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.SecurityCenter.V1P1Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.SecurityCenter.V1P1Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.ServiceControl.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.ServiceControl.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.ServiceDirectory.V1/.repo-metadata.json api_shortname field missing from 
apis/Google.Cloud.ServiceDirectory.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.ServiceDirectory.V1Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.ServiceDirectory.V1Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.ServiceManagement.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.ServiceManagement.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.ServiceUsage.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.ServiceUsage.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Shell.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Shell.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Spanner.Admin.Database.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Spanner.Admin.Database.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Spanner.Admin.Instance.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Spanner.Admin.Instance.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Spanner.Common.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Spanner.Data/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Spanner.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Spanner.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Speech.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Speech.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in 
apis/Google.Cloud.Speech.V1P1Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Speech.V1P1Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Storage.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Storage.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.StorageTransfer.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.StorageTransfer.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Talent.V4/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Talent.V4/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Talent.V4Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Talent.V4Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Tasks.V2/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Tasks.V2/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Tasks.V2Beta3/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Tasks.V2Beta3/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.TextToSpeech.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.TextToSpeech.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.TextToSpeech.V1Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.TextToSpeech.V1Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Tpu.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Tpu.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in 
apis/Google.Cloud.Trace.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Trace.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Trace.V2/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Trace.V2/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Translate.V3/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Translate.V3/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Translation.V2/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Translation.V2/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.VMMigration.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.VMMigration.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Video.Transcoder.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Video.Transcoder.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Video.Transcoder.V1Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Video.Transcoder.V1Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.VideoIntelligence.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.VideoIntelligence.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Vision.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Vision.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.VpcAccess.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.VpcAccess.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in 
apis/Google.Cloud.WebRisk.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.WebRisk.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.WebRisk.V1Beta1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.WebRisk.V1Beta1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.WebSecurityScanner.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.WebSecurityScanner.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Workflows.Common.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Workflows.Common.V1Beta/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Workflows.Executions.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Workflows.Executions.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Workflows.Executions.V1Beta/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Workflows.Executions.V1Beta/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Workflows.V1/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Workflows.V1/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Cloud.Workflows.V1Beta/.repo-metadata.json api_shortname field missing from apis/Google.Cloud.Workflows.V1Beta/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Identity.AccessContextManager.Type/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Google.Identity.AccessContextManager.V1/.repo-metadata.jsonlibrary_type must be equal to one of the allowed values in apis/Google.LongRunning/.repo-metadata.json release_level must be equal to 
one of the allowed values in apis/Google.LongRunning/.repo-metadata.jsonrelease_level must be equal to one of the allowed values in apis/Grafeas.V1/.repo-metadata.json api_shortname field missing from apis/Grafeas.V1/.repo-metadata.json
```

☝️ Once you correct these problems, you can close this issue. Reach out to **go/github-automation** if you have any questions.
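The report above flags three recurring problems across the `.repo-metadata.json` files: a missing `api_shortname` field, a `release_level` outside the allowed set, and a `library_type` outside the allowed set. A minimal sketch of such a check is below; the field names come from the report itself, but the allowed-value lists are assumptions, not the automation's real lists.

```python
# Sketch of a repo-metadata.json check for the three problems the report
# flags: missing api_shortname, invalid release_level, invalid library_type.
# The allowed-value sets below are assumptions for illustration only.
import json

ALLOWED_RELEASE_LEVELS = {"stable", "preview"}                   # assumed
ALLOWED_LIBRARY_TYPES = {"GAPIC_AUTO", "GAPIC_MANUAL", "CORE"}   # assumed


def check_repo_metadata(raw: str) -> list:
    """Return a list of problem descriptions for one .repo-metadata.json blob."""
    meta = json.loads(raw)
    problems = []
    if "api_shortname" not in meta:
        problems.append("api_shortname field missing")
    if meta.get("release_level") not in ALLOWED_RELEASE_LEVELS:
        problems.append("release_level must be equal to one of the allowed values")
    if "library_type" in meta and meta["library_type"] not in ALLOWED_LIBRARY_TYPES:
        problems.append("library_type must be equal to one of the allowed values")
    return problems
```

Running this over each `apis/*/.repo-metadata.json` file would reproduce a per-file problem list in the same wording the scan uses.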
jsonrelease level must be equal to one of the allowed values in apis google cloud iap repo metadata json api shortname field missing from apis google cloud iap repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud ids repo metadata json api shortname field missing from apis google cloud ids repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud iot repo metadata json api shortname field missing from apis google cloud iot repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud kms repo metadata json api shortname field missing from apis google cloud kms repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud language repo metadata json api shortname field missing from apis google cloud language repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud lifesciences repo metadata json api shortname field missing from apis google cloud lifesciences repo metadata jsonlibrary type must be equal to one of the allowed values in apis google cloud location repo metadata json release level must be equal to one of the allowed values in apis google cloud location repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud logging repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud logging nlog repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud logging type repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud logging repo metadata json api shortname field missing from apis google cloud logging repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud managedidentities repo metadata json api shortname field missing from apis google cloud managedidentities repo metadata jsonrelease 
level must be equal to one of the allowed values in apis google cloud mediatranslation repo metadata json api shortname field missing from apis google cloud mediatranslation repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud memcache repo metadata json api shortname field missing from apis google cloud memcache repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud memcache repo metadata json api shortname field missing from apis google cloud memcache repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud metastore repo metadata json api shortname field missing from apis google cloud metastore repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud metastore repo metadata json api shortname field missing from apis google cloud metastore repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud metastore repo metadata json api shortname field missing from apis google cloud metastore repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud monitoring repo metadata json api shortname field missing from apis google cloud monitoring repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud networkconnectivity repo metadata json api shortname field missing from apis google cloud networkconnectivity repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud networkconnectivity repo metadata json api shortname field missing from apis google cloud networkconnectivity repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud networkmanagement repo metadata json api shortname field missing from apis google cloud networkmanagement repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud 
networksecurity repo metadata json api shortname field missing from apis google cloud networksecurity repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud notebooks repo metadata json api shortname field missing from apis google cloud notebooks repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud notebooks repo metadata json api shortname field missing from apis google cloud notebooks repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud orchestration airflow service repo metadata json api shortname field missing from apis google cloud orchestration airflow service repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud orgpolicy repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud orgpolicy repo metadata json api shortname field missing from apis google cloud orgpolicy repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud osconfig repo metadata json api shortname field missing from apis google cloud osconfig repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud osconfig repo metadata json api shortname field missing from apis google cloud osconfig repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud oslogin common repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud oslogin repo metadata json api shortname field missing from apis google cloud oslogin repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud oslogin repo metadata json api shortname field missing from apis google cloud oslogin repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud phishingprotection repo metadata json api shortname field missing 
from apis google cloud phishingprotection repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud policytroubleshooter repo metadata json api shortname field missing from apis google cloud policytroubleshooter repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud privatecatalog repo metadata json api shortname field missing from apis google cloud privatecatalog repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud profiler repo metadata json api shortname field missing from apis google cloud profiler repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud pubsub repo metadata json api shortname field missing from apis google cloud pubsub repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud recaptchaenterprise repo metadata json api shortname field missing from apis google cloud recaptchaenterprise repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud recaptchaenterprise repo metadata json api shortname field missing from apis google cloud recaptchaenterprise repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud recommendationengine repo metadata json api shortname field missing from apis google cloud recommendationengine repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud recommender repo metadata json api shortname field missing from apis google cloud recommender repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud redis repo metadata json api shortname field missing from apis google cloud redis repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud redis repo metadata json api shortname field missing from apis google cloud redis repo metadata 
jsonrelease level must be equal to one of the allowed values in apis google cloud resourcemanager repo metadata json api shortname field missing from apis google cloud resourcemanager repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud resourcesettings repo metadata json api shortname field missing from apis google cloud resourcesettings repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud retail repo metadata json api shortname field missing from apis google cloud retail repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud scheduler repo metadata json api shortname field missing from apis google cloud scheduler repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud secretmanager repo metadata json api shortname field missing from apis google cloud secretmanager repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud secretmanager repo metadata json api shortname field missing from apis google cloud secretmanager repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud security privateca repo metadata json api shortname field missing from apis google cloud security privateca repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud security privateca repo metadata json api shortname field missing from apis google cloud security privateca repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud securitycenter settings repo metadata json api shortname field missing from apis google cloud securitycenter settings repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud securitycenter repo metadata json api shortname field missing from apis google cloud securitycenter repo metadata jsonrelease level must be 
equal to one of the allowed values in apis google cloud securitycenter repo metadata json api shortname field missing from apis google cloud securitycenter repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud servicecontrol repo metadata json api shortname field missing from apis google cloud servicecontrol repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud servicedirectory repo metadata json api shortname field missing from apis google cloud servicedirectory repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud servicedirectory repo metadata json api shortname field missing from apis google cloud servicedirectory repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud servicemanagement repo metadata json api shortname field missing from apis google cloud servicemanagement repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud serviceusage repo metadata json api shortname field missing from apis google cloud serviceusage repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud shell repo metadata json api shortname field missing from apis google cloud shell repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud spanner admin database repo metadata json api shortname field missing from apis google cloud spanner admin database repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud spanner admin instance repo metadata json api shortname field missing from apis google cloud spanner admin instance repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud spanner common repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud spanner data repo metadata jsonrelease level 
must be equal to one of the allowed values in apis google cloud spanner repo metadata json api shortname field missing from apis google cloud spanner repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud speech repo metadata json api shortname field missing from apis google cloud speech repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud speech repo metadata json api shortname field missing from apis google cloud speech repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud storage repo metadata json api shortname field missing from apis google cloud storage repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud storagetransfer repo metadata json api shortname field missing from apis google cloud storagetransfer repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud talent repo metadata json api shortname field missing from apis google cloud talent repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud talent repo metadata json api shortname field missing from apis google cloud talent repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud tasks repo metadata json api shortname field missing from apis google cloud tasks repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud tasks repo metadata json api shortname field missing from apis google cloud tasks repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud texttospeech repo metadata json api shortname field missing from apis google cloud texttospeech repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud texttospeech repo metadata json api shortname field missing from apis google cloud texttospeech repo 
metadata jsonrelease level must be equal to one of the allowed values in apis google cloud tpu repo metadata json api shortname field missing from apis google cloud tpu repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud trace repo metadata json api shortname field missing from apis google cloud trace repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud trace repo metadata json api shortname field missing from apis google cloud trace repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud translate repo metadata json api shortname field missing from apis google cloud translate repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud translation repo metadata json api shortname field missing from apis google cloud translation repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud vmmigration repo metadata json api shortname field missing from apis google cloud vmmigration repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud video transcoder repo metadata json api shortname field missing from apis google cloud video transcoder repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud video transcoder repo metadata json api shortname field missing from apis google cloud video transcoder repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud videointelligence repo metadata json api shortname field missing from apis google cloud videointelligence repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud vision repo metadata json api shortname field missing from apis google cloud vision repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud vpcaccess repo metadata json 
api shortname field missing from apis google cloud vpcaccess repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud webrisk repo metadata json api shortname field missing from apis google cloud webrisk repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud webrisk repo metadata json api shortname field missing from apis google cloud webrisk repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud websecurityscanner repo metadata json api shortname field missing from apis google cloud websecurityscanner repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud workflows common repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud workflows common repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud workflows executions repo metadata json api shortname field missing from apis google cloud workflows executions repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud workflows executions repo metadata json api shortname field missing from apis google cloud workflows executions repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud workflows repo metadata json api shortname field missing from apis google cloud workflows repo metadata jsonrelease level must be equal to one of the allowed values in apis google cloud workflows repo metadata json api shortname field missing from apis google cloud workflows repo metadata jsonrelease level must be equal to one of the allowed values in apis google identity accesscontextmanager type repo metadata jsonrelease level must be equal to one of the allowed values in apis google identity accesscontextmanager repo metadata jsonlibrary type must be equal to one of the allowed values in apis google longrunning repo 
metadata json release level must be equal to one of the allowed values in apis google longrunning repo metadata jsonrelease level must be equal to one of the allowed values in apis grafeas repo metadata json api shortname field missing from apis grafeas repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
1
18,361
24,492,213,005
IssuesEvent
2022-10-10 04:09:47
phamtanduongtk29/html-css-training
https://api.github.com/repos/phamtanduongtk29/html-css-training
opened
Create and Responsive template section
not yet processing
- Estimates: 2 hours - Create heading - Create image - Create description
1.0
Create and Responsive template section - - Estimates: 2 hours - Create heading - Create image - Create description
process
create and responsive template section estimates hours create heading create image create description
1
245,677
7,889,424,515
IssuesEvent
2018-06-28 04:03:43
ubclaunchpad/inertia
https://api.github.com/repos/ubclaunchpad/inertia
opened
inertia [remote] up cookie not found
todo: priority :fire: type: bug :bug:
wtf? ``` bobbook:inertia robertlin$ inertia --config inertia.bumper.toml provision-test-5 up [WARNING] Configuration version 'latest' does not match your Inertia CLI version 'test' Cookie not found ```
1.0
inertia [remote] up cookie not found - wtf? ``` bobbook:inertia robertlin$ inertia --config inertia.bumper.toml provision-test-5 up [WARNING] Configuration version 'latest' does not match your Inertia CLI version 'test' Cookie not found ```
non_process
inertia up cookie not found wtf bobbook inertia robertlin inertia config inertia bumper toml provision test up configuration version latest does not match your inertia cli version test cookie not found
0
133,448
10,822,163,469
IssuesEvent
2019-11-08 20:31:02
rancher/rke
https://api.github.com/repos/rancher/rke
closed
Error message when pulling new images should be removed
[zube]: To Test kind/enhancement team/ca
**RKE version:** ``` $ rke -v rke version v0.3.0 ``` **Docker version: (`docker version`,`docker info` preferred)** ``` $ docker version Client: Docker Engine - Community Version: 18.09.8 API version: 1.39 Go version: go1.10.8 Git commit: 0dd43dd87f Built: Wed Jul 17 17:38:58 2019 OS/Arch: linux/amd64 Experimental: false Server: Docker Engine - Community Engine: Version: 18.09.8 API version: 1.39 (minimum version 1.12) Go version: go1.10.8 Git commit: 0dd43dd87f Built: Wed Jul 17 17:48:49 2019 OS/Arch: linux/amd64 Experimental: false ``` **Operating system and kernel: (`cat /etc/os-release`, `uname -r` preferred)** MacOS 10.14.6 (rke cli) RancherOS 1.5.4 (nodes) **Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO)** AWS **cluster.yml file:** ```yaml authentication: sans: - "52.26.2.20" - "52.12.153.84" - "54.202.231.16" nodes: - address: 52.26.2.20 internal_address: 10.0.1.10 user: rancher role: - worker - controlplane - etcd - address: 52.12.153.84 internal_address: 10.0.1.11 user: rancher role: - worker - controlplane - etcd - address: 54.202.231.16 internal_address: 10.0.1.12 user: rancher role: - worker - controlplane - etcd ssh_agent_auth: true services: etcd: snapshot: true creation: 1h retention: 240h ``` **Steps to Reproduce:** run rke up **Results:** Appears in 0.3, first time images are pulled, it shows an "Error": ``` INFO[0000] Checking if image [rancher/rke-tools:v0.1.50] exists on host [52.26.2.20], try #1 INFO[0000] Image [rancher/rke-tools:v0.1.50] does not exist on host [52.26.2.20]: Error: No such image: rancher/rke-tools:v0.1.50 INFO[0000] Pulling image [rancher/rke-tools:v0.1.50] on host [52.26.2.20], try rancher/rke#1 ``` Specifically, `Error: No such image: rancher/rke-tools:v0.1.50` should be removed. It's misleading to state there is an error the first time you run rke and it needs to pull images. 
In RKE 0.2, the message was: ``` INFO[0030] [state] Pulling image [rancher/rke-tools:v0.1.42] on host [34.212.178.64] INFO[0037] [state] Successfully pulled image [rancher/rke-tools:v0.1.42] on host [34.212.178.64] ```
1.0
Error message when pulling new images should be removed - **RKE version:** ``` $ rke -v rke version v0.3.0 ``` **Docker version: (`docker version`,`docker info` preferred)** ``` $ docker version Client: Docker Engine - Community Version: 18.09.8 API version: 1.39 Go version: go1.10.8 Git commit: 0dd43dd87f Built: Wed Jul 17 17:38:58 2019 OS/Arch: linux/amd64 Experimental: false Server: Docker Engine - Community Engine: Version: 18.09.8 API version: 1.39 (minimum version 1.12) Go version: go1.10.8 Git commit: 0dd43dd87f Built: Wed Jul 17 17:48:49 2019 OS/Arch: linux/amd64 Experimental: false ``` **Operating system and kernel: (`cat /etc/os-release`, `uname -r` preferred)** MacOS 10.14.6 (rke cli) RancherOS 1.5.4 (nodes) **Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO)** AWS **cluster.yml file:** ```yaml authentication: sans: - "52.26.2.20" - "52.12.153.84" - "54.202.231.16" nodes: - address: 52.26.2.20 internal_address: 10.0.1.10 user: rancher role: - worker - controlplane - etcd - address: 52.12.153.84 internal_address: 10.0.1.11 user: rancher role: - worker - controlplane - etcd - address: 54.202.231.16 internal_address: 10.0.1.12 user: rancher role: - worker - controlplane - etcd ssh_agent_auth: true services: etcd: snapshot: true creation: 1h retention: 240h ``` **Steps to Reproduce:** run rke up **Results:** Appears in 0.3, first time images are pulled, it shows an "Error": ``` INFO[0000] Checking if image [rancher/rke-tools:v0.1.50] exists on host [52.26.2.20], try #1 INFO[0000] Image [rancher/rke-tools:v0.1.50] does not exist on host [52.26.2.20]: Error: No such image: rancher/rke-tools:v0.1.50 INFO[0000] Pulling image [rancher/rke-tools:v0.1.50] on host [52.26.2.20], try rancher/rke#1 ``` Specifically, `Error: No such image: rancher/rke-tools:v0.1.50` should be removed. It's misleading to state there is an error the first time you run rke and it needs to pull images. 
In RKE 0.2, the message was: ``` INFO[0030] [state] Pulling image [rancher/rke-tools:v0.1.42] on host [34.212.178.64] INFO[0037] [state] Successfully pulled image [rancher/rke-tools:v0.1.42] on host [34.212.178.64] ```
non_process
error message when pulling new images should be removed rke version rke v rke version docker version docker version docker info preferred docker version client docker engine community version api version go version git commit built wed jul os arch linux experimental false server docker engine community engine version api version minimum version go version git commit built wed jul os arch linux experimental false operating system and kernel cat etc os release uname r preferred macos rke cli rancheros nodes type provider of hosts virtualbox bare metal aws gce do aws cluster yml file yaml authentication sans nodes address internal address user rancher role worker controlplane etcd address internal address user rancher role worker controlplane etcd address internal address user rancher role worker controlplane etcd ssh agent auth true services etcd snapshot true creation retention steps to reproduce run rke up results appears in first time images are pulled it shows an error info checking if image exists on host try info image does not exist on host error no such image rancher rke tools info pulling image on host try rancher rke specifically error no such image rancher rke tools should be removed it s misleading to state there is an error the first time you run rke and it needs to pull images in rke the message was info pulling image on host info successfully pulled image on host
0
19,694
26,047,573,451
IssuesEvent
2022-12-22 15:37:48
MicrosoftDocs/azure-devops-docs
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
closed
Stages, jobs and pipeline agents
doc-enhancement devops/prod Pri2 devops-cicd-process/tech
When stages execute in parallel, are they free to run on any free pipeline agent in the pool? When stages execute in series, are they free to change to some other free agent in the pool? Can jobs run in parallel and on different free agents? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: d322215c-8025-4f21-0700-7dfa7dc5c46e * Version Independent ID: 141fcdbb-8394-525b-bb29-eff9a693a9c4 * Content: [Stages in Azure Pipelines - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/stages?view=azure-devops&tabs=yaml) * Content Source: [docs/pipelines/process/stages.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/stages.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
1.0
Stages, jobs and pipeline agents - When stages execute in parallel, are they free to run on any free pipeline agent in the pool? When stages execute in series, are they free to change to some other free agent in the pool? Can jobs run in parallel and on different free agents? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: d322215c-8025-4f21-0700-7dfa7dc5c46e * Version Independent ID: 141fcdbb-8394-525b-bb29-eff9a693a9c4 * Content: [Stages in Azure Pipelines - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/stages?view=azure-devops&tabs=yaml) * Content Source: [docs/pipelines/process/stages.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/stages.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
process
stages jobs and pipeline agents when stages execute in parallel are they free to run on any free pipeline agent in the pool when stages execute in series are they free to change to some other free agent in the pool can jobs run in parallel and on different free agents document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
1
136,121
11,038,098,344
IssuesEvent
2019-12-08 11:39:26
ably/ably-cocoa
https://api.github.com/repos/ably/ably-cocoa
opened
Flaky test: RTN5 (basic operations should work
test suite
Pretty often, in Travis, we get a timeout: ``` [10:42:01]: ▸ ✗ Connection__basic_operations_should_work_simultaneously, Waited more than 10.0 seconds ``` This doesn't happen on my machine, and given this is a timeout on stressful test, we're probably just asking for too much. But we should double-check this isn't a deadlock or something.
1.0
Flaky test: RTN5 (basic operations should work - Pretty often, in Travis, we get a timeout: ``` [10:42:01]: ▸ ✗ Connection__basic_operations_should_work_simultaneously, Waited more than 10.0 seconds ``` This doesn't happen on my machine, and given this is a timeout on stressful test, we're probably just asking for too much. But we should double-check this isn't a deadlock or something.
non_process
flaky test basic operations should work pretty often in travis we get a timeout ▸ ✗ connection basic operations should work simultaneously waited more than seconds this doesn t happen on my machine and given this is a timeout on stressful test we re probably just asking for too much but we should double check this isn t a deadlock or something
0
27,953
11,579,337,315
IssuesEvent
2020-02-21 17:42:03
mwilliams7197/node
https://api.github.com/repos/mwilliams7197/node
opened
CVE-2011-4969 (Medium) detected in jquery-1.6.2.js, jquery-1.4.4.min.js
security vulnerability
## CVE-2011-4969 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-1.6.2.js</b>, <b>jquery-1.4.4.min.js</b></p></summary> <p> <details><summary><b>jquery-1.6.2.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.6.2/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.6.2/jquery.js</a></p> <p>Path to dependency file: /tmp/ws-scm/node/deps/npm/node_modules/http-proxy/examples/node_modules/connect/node_modules/qs/test/browser/index.html</p> <p>Path to vulnerable library: /node/deps/npm/node_modules/http-proxy/examples/node_modules/connect/node_modules/qs/test/browser/jquery.js</p> <p> Dependency Hierarchy: - :x: **jquery-1.6.2.js** (Vulnerable Library) </details> <details><summary><b>jquery-1.4.4.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.4.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.4.4/jquery.min.js</a></p> <p>Path to dependency file: /tmp/ws-scm/node/deps/npm/node_modules/couchapp/boiler/attachments/index.html</p> <p>Path to vulnerable library: /node/deps/npm/node_modules/couchapp/boiler/attachments/jquery-1.4.4.min.js,/node/deps/npm/node_modules/npm-registry-couchapp/www/attachments/jquery-1.4.4.min.js</p> <p> Dependency Hierarchy: - :x: **jquery-1.4.4.min.js** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://api.github.com/repos/mwilliams7197/node/commits/5bf1758609f054f0bb5ab5405959dd93758098c5">5bf1758609f054f0bb5ab5405959dd93758098c5</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Cross-site scripting (XSS) vulnerability in jQuery before 1.6.3, when using location.hash to select elements, allows remote attackers to inject arbitrary web script or HTML via a crafted tag. <p>Publish Date: 2013-03-08 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2011-4969>CVE-2011-4969</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>4.3</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2011-4969">https://nvd.nist.gov/vuln/detail/CVE-2011-4969</a></p> <p>Release Date: 2013-03-08</p> <p>Fix Resolution: 1.6.3</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"jquery","packageVersion":"1.6.2","isTransitiveDependency":false,"dependencyTree":"jquery:1.6.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.6.3"},{"packageType":"JavaScript","packageName":"jquery","packageVersion":"1.4.4","isTransitiveDependency":false,"dependencyTree":"jquery:1.4.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.6.3"}],"vulnerabilityIdentifier":"CVE-2011-4969","vulnerabilityDetails":"Cross-site scripting (XSS) vulnerability in jQuery before 1.6.3, when using location.hash to select elements, allows remote attackers to inject arbitrary web script or HTML via a crafted tag.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2011-4969","cvss2Severity":"medium","cvss2Score":"4.3","extraData":{}}</REMEDIATE> -->
True
CVE-2011-4969 (Medium) detected in jquery-1.6.2.js, jquery-1.4.4.min.js - ## CVE-2011-4969 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-1.6.2.js</b>, <b>jquery-1.4.4.min.js</b></p></summary> <p> <details><summary><b>jquery-1.6.2.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.6.2/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.6.2/jquery.js</a></p> <p>Path to dependency file: /tmp/ws-scm/node/deps/npm/node_modules/http-proxy/examples/node_modules/connect/node_modules/qs/test/browser/index.html</p> <p>Path to vulnerable library: /node/deps/npm/node_modules/http-proxy/examples/node_modules/connect/node_modules/qs/test/browser/jquery.js</p> <p> Dependency Hierarchy: - :x: **jquery-1.6.2.js** (Vulnerable Library) </details> <details><summary><b>jquery-1.4.4.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.4.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.4.4/jquery.min.js</a></p> <p>Path to dependency file: /tmp/ws-scm/node/deps/npm/node_modules/couchapp/boiler/attachments/index.html</p> <p>Path to vulnerable library: /node/deps/npm/node_modules/couchapp/boiler/attachments/jquery-1.4.4.min.js,/node/deps/npm/node_modules/npm-registry-couchapp/www/attachments/jquery-1.4.4.min.js</p> <p> Dependency Hierarchy: - :x: **jquery-1.4.4.min.js** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://api.github.com/repos/mwilliams7197/node/commits/5bf1758609f054f0bb5ab5405959dd93758098c5">5bf1758609f054f0bb5ab5405959dd93758098c5</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Cross-site scripting (XSS) vulnerability in jQuery before 1.6.3, when using location.hash to select elements, allows remote attackers to inject arbitrary web script or HTML via a crafted tag. <p>Publish Date: 2013-03-08 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2011-4969>CVE-2011-4969</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>4.3</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2011-4969">https://nvd.nist.gov/vuln/detail/CVE-2011-4969</a></p> <p>Release Date: 2013-03-08</p> <p>Fix Resolution: 1.6.3</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"jquery","packageVersion":"1.6.2","isTransitiveDependency":false,"dependencyTree":"jquery:1.6.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.6.3"},{"packageType":"JavaScript","packageName":"jquery","packageVersion":"1.4.4","isTransitiveDependency":false,"dependencyTree":"jquery:1.4.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.6.3"}],"vulnerabilityIdentifier":"CVE-2011-4969","vulnerabilityDetails":"Cross-site scripting (XSS) vulnerability in jQuery before 1.6.3, when using location.hash to select elements, allows remote attackers to inject arbitrary web script or HTML via a crafted tag.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2011-4969","cvss2Severity":"medium","cvss2Score":"4.3","extraData":{}}</REMEDIATE> -->
non_process
cve medium detected in jquery js jquery min js cve medium severity vulnerability vulnerable libraries jquery js jquery min js jquery js javascript library for dom operations library home page a href path to dependency file tmp ws scm node deps npm node modules http proxy examples node modules connect node modules qs test browser index html path to vulnerable library node deps npm node modules http proxy examples node modules connect node modules qs test browser jquery js dependency hierarchy x jquery js vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file tmp ws scm node deps npm node modules couchapp boiler attachments index html path to vulnerable library node deps npm node modules couchapp boiler attachments jquery min js node deps npm node modules npm registry couchapp www attachments jquery min js dependency hierarchy x jquery min js vulnerable library found in head commit a href vulnerability details cross site scripting xss vulnerability in jquery before when using location hash to select elements allows remote attackers to inject arbitrary web script or html via a crafted tag publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails cross site scripting xss vulnerability in jquery before when using location hash to select elements allows remote attackers to inject arbitrary web script or html via a crafted tag vulnerabilityurl
0
122,113
26,088,552,630
IssuesEvent
2022-12-26 07:51:00
sebastianbergmann/phpunit
https://api.github.com/repos/sebastianbergmann/phpunit
closed
Fatal error instead of PHPUnit warning when no code coverage driver is available
feature/test-runner feature/code-coverage
``` An error occurred inside PHPUnit. Message: No code coverage driver available Location: /usr/local/src/php-code-coverage/src/Driver/Selector.php:41 #0 /usr/local/src/php-code-coverage/vendor/phpunit/phpunit/src/Runner/CodeCoverage.php(51): SebastianBergmann\CodeCoverage\Driver\Selector->forLineCoverage() #1 /usr/local/src/php-code-coverage/vendor/phpunit/phpunit/src/TextUI/TestRunner.php(319): PHPUnit\Runner\CodeCoverage::activate() #2 /usr/local/src/php-code-coverage/vendor/phpunit/phpunit/src/TextUI/Application.php(107): PHPUnit\TextUI\TestRunner->run() #3 /usr/local/src/php-code-coverage/vendor/phpunit/phpunit/src/TextUI/Application.php(78): PHPUnit\TextUI\Application->run() #4 /usr/local/src/php-code-coverage/vendor/phpunit/phpunit/phpunit(90): PHPUnit\TextUI\Application::main() #5 /usr/local/src/php-code-coverage/vendor/bin/phpunit(123): include('...') #6 {main} ```
1.0
Fatal error instead of PHPUnit warning when no code coverage driver is available - ``` An error occurred inside PHPUnit. Message: No code coverage driver available Location: /usr/local/src/php-code-coverage/src/Driver/Selector.php:41 #0 /usr/local/src/php-code-coverage/vendor/phpunit/phpunit/src/Runner/CodeCoverage.php(51): SebastianBergmann\CodeCoverage\Driver\Selector->forLineCoverage() #1 /usr/local/src/php-code-coverage/vendor/phpunit/phpunit/src/TextUI/TestRunner.php(319): PHPUnit\Runner\CodeCoverage::activate() #2 /usr/local/src/php-code-coverage/vendor/phpunit/phpunit/src/TextUI/Application.php(107): PHPUnit\TextUI\TestRunner->run() #3 /usr/local/src/php-code-coverage/vendor/phpunit/phpunit/src/TextUI/Application.php(78): PHPUnit\TextUI\Application->run() #4 /usr/local/src/php-code-coverage/vendor/phpunit/phpunit/phpunit(90): PHPUnit\TextUI\Application::main() #5 /usr/local/src/php-code-coverage/vendor/bin/phpunit(123): include('...') #6 {main} ```
non_process
fatal error instead of phpunit warning when no code coverage driver is available an error occurred inside phpunit message no code coverage driver available location usr local src php code coverage src driver selector php usr local src php code coverage vendor phpunit phpunit src runner codecoverage php sebastianbergmann codecoverage driver selector forlinecoverage usr local src php code coverage vendor phpunit phpunit src textui testrunner php phpunit runner codecoverage activate usr local src php code coverage vendor phpunit phpunit src textui application php phpunit textui testrunner run usr local src php code coverage vendor phpunit phpunit src textui application php phpunit textui application run usr local src php code coverage vendor phpunit phpunit phpunit phpunit textui application main usr local src php code coverage vendor bin phpunit include main
0
89,562
25,836,899,316
IssuesEvent
2022-12-12 20:24:42
xamarin/xamarin-android
https://api.github.com/repos/xamarin/xamarin-android
closed
Xamarin.Android project build fails after upgrading to Visual Studio 17.3.5 (from 17.2.5) due to missing .dll in AOT folder and aapt_rules.txt
Area: App+Library Build
### Android application type Classic Xamarin.Android (MonoAndroid12.0, etc.) ### Affected platform version VS 17.3.5 ### Description Not a single change in our project or CI script, just VS got update to the currently latest one and we experience 2 strange and blocker side-effect during AOT apk build, that can be strangely workaround if we get rid of those packages, but still nothing change on our side at all aside the VS update: First: It looks like some .dll is not getting copied/generated into the target folder anymore like Humanizer or PublicHoliday give us this error at the very end of the AOT process when it tries to zip the aot files into a zip: `"X.Android.csproj" (SignAndroidPackage target) (1) -> (_BuildApkEmbed target) -> C:\Program Files\Microsoft Visual Studio\2022\Professional\MSBuild\Xamarin\Android\Xamarin.Android.Common.targets(2049,3): error XABLD7028: Sy [MissingDllDuringAOT.zip](https://github.com/xamarin/xamarin-android/files/9745881/MissingDllDuringAOT.zip) stem.IO.FileNotFoundException: Could not find file 'X.Android\obj\DEV-Release\110\aot\arm64-v8a\libaot-PublicHoliday.dll.so'. [X.Android.csproj] C:\Program Files\Microsoft Visual Studio\2022\Professional\MSBuild\Xamarin\Android\Xamarin.Android.Common.targets(2049,3): error XABLD7028: File name: 'X.Android\obj\DEV-Release\110\aot\arm64-v8a\libaot-PublicHoliday.dll.so' ` Second error that even if we get those things fixed, we get now these errors during the "startup" phase of the build that we have no idea so far why and can't even workaround it: `MSBUILD : java.exe error JAVA0000: Error in obj\Release\110\aapt_rules.txt at line 289, column 1: [C:\Jenkins\workspace\Demo\Demo.Android\Demo.Android.csproj] MSBUILD : java.exe error JAVA0000: Expected char '-' at obj\Release\110\aapt_rules.txt:289:1 [C:\Jenkins\workspace\Demo\Demo.Android\Demo.Android.csproj] MSBUILD : java.exe error JAVA0000: nit>(...); } [C:\Jenkins\workspace\Demo\Demo.Android\Demo.Android.csproj] MSBUILD : java.exe error JAVA0000: Compilation failed` I hope someone can help why these now happen and some possible workaround to fix these issues ### Steps to Reproduce I can reproduce the error by trying to build the attached [MissingDllDuringAOT.zip](https://github.com/xamarin/xamarin-android/files/9745893/MissingDllDuringAOT.zip) in Release mode. ### Did you find any workaround? _No response_ ### Relevant log output `Severity Code Description Project File Line Suppression State Error XABBA7028: System.IO.FileNotFoundException: Could not find file 'C:\Users\Charlie\Downloads\MissingDllDuringAOT\MissingDllDuringAOT\MissingDllDuringAOT\obj\Release\110\aot\armeabi-v7a\libaot-Humanizer.dll.so'. File name: 'C:\Users\Charlie\Downloads\MissingDllDuringAOT\MissingDllDuringAOT\MissingDllDuringAOT\obj\Release\110\aot\armeabi-v7a\libaot-Humanizer.dll.so' at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath) at System.IO.File.GetAttributes(String path) at Xamarin.Tools.Zip.WindowsPlatformServices.IsDirectory(ZipArchive archive, String path, Boolean& result) in /Users/runner/work/1/s/LibZipSharp/Xamarin.Tools.Zip/WindowsPlatformServices.cs:line 47 at Xamarin.Tools.Zip.PlatformServices.CallServices(Func`2 code) in /Users/runner/work/1/s/LibZipSharp/Xamarin.Tools.Zip/PlatformServices.cs:line 166 at Xamarin.Tools.Zip.PlatformServices.IsDirectory(ZipArchive archive, String path) in /Users/runner/work/1/s/LibZipSharp/Xamarin.Tools.Zip/PlatformServices.cs:line 74 at Xamarin.Tools.Zip.ZipArchive.AddFile(String sourcePath, String archivePath, EntryPermissions permissions, CompressionMethod compressionMethod, Boolean overwriteExisting) in /Users/runner/work/1/s/LibZipSharp/Xamarin.Tools.Zip/ZipArchive.cs:line 406 at Xamarin.Android.Tasks.BuildApk.ExecuteWithAbi(String[] supportedAbis, String apkInputPath, String apkOutputPath, Boolean debug, Boolean compress, IDictionary`2 compressedAssembliesInfo, String assemblyStoreApkName) at Xamarin.Android.Tasks.BuildApk.RunTask() at Microsoft.Android.Build.Tasks.AndroidTask.Execute() in /Users/runner/work/1/s/xamarin-android/external/xamarin-android-tools/src/Microsoft.Android.Build.BaseTasks/AndroidTask.cs:line 17 0 _No response_`
1.0
Xamarin.Android project build fails after upgrading to Visual Studio 17.3.5 (from 17.2.5) due to missing .dll in AOT folder and aapt_rules.txt - ### Android application type Classic Xamarin.Android (MonoAndroid12.0, etc.) ### Affected platform version VS 17.3.5 ### Description Not a single change in our project or CI script, just VS got update to the currently latest one and we experience 2 strange and blocker side-effect during AOT apk build, that can be strangely workaround if we get rid of those packages, but still nothing change on our side at all aside the VS update: First: It looks like some .dll is not getting copied/generated into the target folder anymore like Humanizer or PublicHoliday give us this error at the very end of the AOT process when it tries to zip the aot files into a zip: `"X.Android.csproj" (SignAndroidPackage target) (1) -> (_BuildApkEmbed target) -> C:\Program Files\Microsoft Visual Studio\2022\Professional\MSBuild\Xamarin\Android\Xamarin.Android.Common.targets(2049,3): error XABLD7028: Sy [MissingDllDuringAOT.zip](https://github.com/xamarin/xamarin-android/files/9745881/MissingDllDuringAOT.zip) stem.IO.FileNotFoundException: Could not find file 'X.Android\obj\DEV-Release\110\aot\arm64-v8a\libaot-PublicHoliday.dll.so'. [X.Android.csproj] C:\Program Files\Microsoft Visual Studio\2022\Professional\MSBuild\Xamarin\Android\Xamarin.Android.Common.targets(2049,3): error XABLD7028: File name: 'X.Android\obj\DEV-Release\110\aot\arm64-v8a\libaot-PublicHoliday.dll.so' ` Second error that even if we get those things fixed, we get now these errors during the "startup" phase of the build that we have no idea so far why and can't even workaround it: `MSBUILD : java.exe error JAVA0000: Error in obj\Release\110\aapt_rules.txt at line 289, column 1: [C:\Jenkins\workspace\Demo\Demo.Android\Demo.Android.csproj] MSBUILD : java.exe error JAVA0000: Expected char '-' at obj\Release\110\aapt_rules.txt:289:1 [C:\Jenkins\workspace\Demo\Demo.Android\Demo.Android.csproj] MSBUILD : java.exe error JAVA0000: nit>(...); } [C:\Jenkins\workspace\Demo\Demo.Android\Demo.Android.csproj] MSBUILD : java.exe error JAVA0000: Compilation failed` I hope someone can help why these now happen and some possible workaround to fix these issues ### Steps to Reproduce I can reproduce the error by trying to build the attached [MissingDllDuringAOT.zip](https://github.com/xamarin/xamarin-android/files/9745893/MissingDllDuringAOT.zip) in Release mode. ### Did you find any workaround? _No response_ ### Relevant log output `Severity Code Description Project File Line Suppression State Error XABBA7028: System.IO.FileNotFoundException: Could not find file 'C:\Users\Charlie\Downloads\MissingDllDuringAOT\MissingDllDuringAOT\MissingDllDuringAOT\obj\Release\110\aot\armeabi-v7a\libaot-Humanizer.dll.so'. File name: 'C:\Users\Charlie\Downloads\MissingDllDuringAOT\MissingDllDuringAOT\MissingDllDuringAOT\obj\Release\110\aot\armeabi-v7a\libaot-Humanizer.dll.so' at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath) at System.IO.File.GetAttributes(String path) at Xamarin.Tools.Zip.WindowsPlatformServices.IsDirectory(ZipArchive archive, String path, Boolean& result) in /Users/runner/work/1/s/LibZipSharp/Xamarin.Tools.Zip/WindowsPlatformServices.cs:line 47 at Xamarin.Tools.Zip.PlatformServices.CallServices(Func`2 code) in /Users/runner/work/1/s/LibZipSharp/Xamarin.Tools.Zip/PlatformServices.cs:line 166 at Xamarin.Tools.Zip.PlatformServices.IsDirectory(ZipArchive archive, String path) in /Users/runner/work/1/s/LibZipSharp/Xamarin.Tools.Zip/PlatformServices.cs:line 74 at Xamarin.Tools.Zip.ZipArchive.AddFile(String sourcePath, String archivePath, EntryPermissions permissions, CompressionMethod compressionMethod, Boolean overwriteExisting) in /Users/runner/work/1/s/LibZipSharp/Xamarin.Tools.Zip/ZipArchive.cs:line 406 at Xamarin.Android.Tasks.BuildApk.ExecuteWithAbi(String[] supportedAbis, String apkInputPath, String apkOutputPath, Boolean debug, Boolean compress, IDictionary`2 compressedAssembliesInfo, String assemblyStoreApkName) at Xamarin.Android.Tasks.BuildApk.RunTask() at Microsoft.Android.Build.Tasks.AndroidTask.Execute() in /Users/runner/work/1/s/xamarin-android/external/xamarin-android-tools/src/Microsoft.Android.Build.BaseTasks/AndroidTask.cs:line 17 0 _No response_`
non_process
xamarin android project build fails after upgrading to visual studio from due to missing dll in aot folder and aapt rules txt android application type classic xamarin android etc affected platform version vs description not a single change in our project or ci script just vs got update to the currently latest one and we experience strange and blocker side effect during aot apk build that can be strangely workaround if we get rid of those packages but still nothing change on our side at all aside the vs update first it looks like some dll is not getting copied generated into the target folder anymore like humanizer or publicholiday give us this error at the very end of the aot process when it tries to zip the aot files into a zip x android csproj signandroidpackage target buildapkembed target c program files microsoft visual studio professional msbuild xamarin android xamarin android common targets error sy stem io filenotfoundexception could not find file x android obj dev release aot libaot publicholiday dll so c program files microsoft visual studio professional msbuild xamarin android xamarin android common targets error file name x android obj dev release aot libaot publicholiday dll so second error that even if we get those things fixed we get now these errors during the startup phase of the build that we have no idea so far why and can t even workaround it msbuild java exe error error in obj release aapt rules txt at line column msbuild java exe error expected char at obj release aapt rules txt msbuild java exe error nit msbuild java exe error compilation failed i hope someone can help why these now happen and some possible workaround to fix these issues steps to reproduce i can reproduce the error by trying to build the attached in release mode did you find any workaround no response relevant log output severity code description project file line suppression state error system io filenotfoundexception could not find file c users charlie downloads missingdllduringaot missingdllduringaot missingdllduringaot obj release aot armeabi libaot humanizer dll so file name c users charlie downloads missingdllduringaot missingdllduringaot missingdllduringaot obj release aot armeabi libaot humanizer dll so at system io error winioerror errorcode string maybefullpath at system io file getattributes string path at xamarin tools zip windowsplatformservices isdirectory ziparchive archive string path boolean result in users runner work s libzipsharp xamarin tools zip windowsplatformservices cs line at xamarin tools zip platformservices callservices func code in users runner work s libzipsharp xamarin tools zip platformservices cs line at xamarin tools zip platformservices isdirectory ziparchive archive string path in users runner work s libzipsharp xamarin tools zip platformservices cs line at xamarin tools zip ziparchive addfile string sourcepath string archivepath entrypermissions permissions compressionmethod compressionmethod boolean overwriteexisting in users runner work s libzipsharp xamarin tools zip ziparchive cs line at xamarin android tasks buildapk executewithabi string supportedabis string apkinputpath string apkoutputpath boolean debug boolean compress idictionary compressedassembliesinfo string assemblystoreapkname at xamarin android tasks buildapk runtask at microsoft android build tasks androidtask execute in users runner work s xamarin android external xamarin android tools src microsoft android build basetasks androidtask cs line no response
0
5,132
2,775,069,613
IssuesEvent
2015-05-04 14:03:42
azavea/nyc-trees
https://api.github.com/repos/azavea/nyc-trees
opened
Shorten width of species autocomplete list
design
From Parks: It is also hard to click away from the list, especially on the mobile screen - there is just not enough inactive white space to click on, to make the list go away. Perhaps this could be addressed by making the field slightly less wide than the screen, so that when the user scrolls down the screen to the next variable they do not accidentally trigger the species list, which is making them frustrated. Thoughts @designmatty ? This is likely also connected to the other species list issues described in #885
1.0
Shorten width of species autocomplete list - From Parks: It is also hard to click away from the list, especially on the mobile screen - there is just not enough inactive white space to click on, to make the list go away. Perhaps this could be addressed by making the field slightly less wide than the screen, so that when the user scrolls down the screen to the next variable they do not accidentally trigger the species list, which is making them frustrated. Thoughts @designmatty ? This is likely also connected to the other species list issues described in #885
non_process
shorten width of species autocomplete list from parks it is also hard to click away from the list especially on the mobile screen there is just not enough inactive white space to click on to make the list go away perhaps this could be addressed by making the field slightly less wide than the screen so that when the user scrolls down the screen to the next variable they do not accidentally trigger the species list which is making them frustrated thoughts designmatty this is likely also connected to the other species list issues described in
0
16,871
22,151,690,069
IssuesEvent
2022-06-03 17:33:35
hashgraph/hedera-mirror-node
https://api.github.com/repos/hashgraph/hedera-mirror-node
opened
Release checklist 0.58
enhancement P1 process
### Problem We need a checklist to verify the release is rolled out successfully. ### Solution ## Preparation - [ ] Milestone field populated on [relevant issues](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aclosed+no%3Amilestone+sort%3Aupdated-desc) - [x] Nothing [open](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aopen+sort%3Aupdated-desc+milestone%3A0.58.0) for milestone - [x] GitHub checks for branch are passing - [x] Automated Kubernetes deployment successful - [x] Tag release - [x] Upload release artifacts - [x] Publish release ## Integration - [ ] Deploy to VM ## Performance - [ ] Deploy to Kubernetes - [ ] Deploy to VM - [ ] gRPC API performance tests - [ ] Importer performance tests - [ ] REST API performance tests - [ ] Migrations tested against mainnet clone ## Previewnet - [ ] Deploy to VM ## Testnet - [ ] Deploy to VM ## Mainnet - [ ] Deploy to Kubernetes EU - [ ] Deploy to Kubernetes NA - [ ] Deploy to VM ### Alternatives _No response_
1.0
Release checklist 0.58 - ### Problem We need a checklist to verify the release is rolled out successfully. ### Solution ## Preparation - [ ] Milestone field populated on [relevant issues](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aclosed+no%3Amilestone+sort%3Aupdated-desc) - [x] Nothing [open](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aopen+sort%3Aupdated-desc+milestone%3A0.58.0) for milestone - [x] GitHub checks for branch are passing - [x] Automated Kubernetes deployment successful - [x] Tag release - [x] Upload release artifacts - [x] Publish release ## Integration - [ ] Deploy to VM ## Performance - [ ] Deploy to Kubernetes - [ ] Deploy to VM - [ ] gRPC API performance tests - [ ] Importer performance tests - [ ] REST API performance tests - [ ] Migrations tested against mainnet clone ## Previewnet - [ ] Deploy to VM ## Testnet - [ ] Deploy to VM ## Mainnet - [ ] Deploy to Kubernetes EU - [ ] Deploy to Kubernetes NA - [ ] Deploy to VM ### Alternatives _No response_
process
release checklist problem we need a checklist to verify the release is rolled out successfully solution preparation milestone field populated on nothing for milestone github checks for branch are passing automated kubernetes deployment successful tag release upload release artifacts publish release integration deploy to vm performance deploy to kubernetes deploy to vm grpc api performance tests importer performance tests rest api performance tests migrations tested against mainnet clone previewnet deploy to vm testnet deploy to vm mainnet deploy to kubernetes eu deploy to kubernetes na deploy to vm alternatives no response
1
94,918
11,940,832,225
IssuesEvent
2020-04-02 17:22:46
US-CBP/cbp-theme
https://api.github.com/repos/US-CBP/cbp-theme
closed
Label text color overridden when in an list group
design
Labels Text Color is overridden when in an anchor tag by: .list-group .list-group-item a { color: #333; } Where it should have the color from: .label { display: inline; padding: .2em .6em .3em; font-size: 75%; font-weight: 700; line-height: 1; **color: #fff;** text-align: center; white-space: nowrap; vertical-align: baseline; border-radius: .25em; } ![image](https://user-images.githubusercontent.com/1034574/39137490-b478ee68-46eb-11e8-8661-0f2e5f705998.png) cc @HMcFabulous
1.0
Label text color overridden when in an list group - Labels Text Color is overridden when in an anchor tag by: .list-group .list-group-item a { color: #333; } Where it should have the color from: .label { display: inline; padding: .2em .6em .3em; font-size: 75%; font-weight: 700; line-height: 1; **color: #fff;** text-align: center; white-space: nowrap; vertical-align: baseline; border-radius: .25em; } ![image](https://user-images.githubusercontent.com/1034574/39137490-b478ee68-46eb-11e8-8661-0f2e5f705998.png) cc @HMcFabulous
non_process
label text color overridden when in an list group labels text color is overridden when in an anchor tag by list group list group item a color where it should have the color from label display inline padding font size font weight line height color fff text align center white space nowrap vertical align baseline border radius cc hmcfabulous
0
1,638
4,259,131,322
IssuesEvent
2016-07-11 09:52:38
e-government-ua/iBP
https://api.github.com/repos/e-government-ua/iBP
closed
Коростень: исправление атрибутов
In process of testing in work test
В реквизитах ЦНАП города Коростеня: 1) необходимо сменить номер телефона (там сейчас ошибочно указан номер районного ЦНАП). Правильный номер: 0 (4142) 50146; 2) добавить номер кабинета: каб. №11
1.0
Коростень: исправление атрибутов - В реквизитах ЦНАП города Коростеня: 1) необходимо сменить номер телефона (там сейчас ошибочно указан номер районного ЦНАП). Правильный номер: 0 (4142) 50146; 2) добавить номер кабинета: каб. №11
process
коростень исправление атрибутов в реквизитах цнап города коростеня необходимо сменить номер телефона там сейчас ошибочно указан номер районного цнап правильный номер добавить номер кабинета каб №
1
22,021
30,533,899,894
IssuesEvent
2023-07-19 15:55:29
geneontology/go-ontology
https://api.github.com/repos/geneontology/go-ontology
closed
Merge and rename 'autophagy involved in symbiotic interaction' and children
multi-species process
Hello, I think this 'autophagy involved in symbiotic interaction' triad should be cleaned: - GO:0075071 autophagy involved in symbiotic interaction -- GO:0075044 autophagy of host cells involved in interaction with symbiont --- GO:0075072 autophagy of symbiont cells involved in interaction with host ---- GO:0075073 autophagy of symbiont cells on or near host surface Only 'GO:0075044 autophagy of host cells involved in interaction with symbiont' has been used - it looks like a defense mechanism to clear infection. Pascale
1.0
Merge and rename 'autophagy involved in symbiotic interaction' and children - Hello, I think this 'autophagy involved in symbiotic interaction' triad should be cleaned: - GO:0075071 autophagy involved in symbiotic interaction -- GO:0075044 autophagy of host cells involved in interaction with symbiont --- GO:0075072 autophagy of symbiont cells involved in interaction with host ---- GO:0075073 autophagy of symbiont cells on or near host surface Only 'GO:0075044 autophagy of host cells involved in interaction with symbiont' has been used - it looks like a defense mechanism to clear infection. Pascale
process
merge and rename autophagy involved in symbiotic interaction and children hello i think this autophagy involved in symbiotic interaction triad should be cleaned go autophagy involved in symbiotic interaction go autophagy of host cells involved in interaction with symbiont go autophagy of symbiont cells involved in interaction with host go autophagy of symbiont cells on or near host surface only go autophagy of host cells involved in interaction with symbiont has been used it looks like a defense mechanism to clear infection pascale
1
13,825
5,468,080,436
IssuesEvent
2017-03-10 04:06:58
docker/docker
https://api.github.com/repos/docker/docker
closed
Exporting volume from container running in windows 10 insider preview doesn't work
area/builder area/bundles platform/windows version/1.13
**Description** I am trying the latest windows 10 insider preview build with docker overlay driver support. I have a image thats build with the below `Dockerfile` ``` FROM microsoft/windowsservercore COPY . C:/temp SHELL ["powershell", "-Command"] RUN New-Item c:\Qumram -Type directory; \ New-Item C:\Qumram\start.ps1 -Type File -Value \ 'Param( \ [string]$LogFilePath = \"C:\Qumram\config\config.log\" \ ) \ Copy-Item C:\temp\* C:\Qumram\config -Recurse | Out-Null; \ New-Item -Path $LogFilePath -Itemtype file -Force | Out-Null; \ Get-Content -Path $LogFilePath -Wait;' VOLUME C:\\Qumram\\config SHELL ["powershell", "-File"] HEALTHCHECK CMD exit 0 ENTRYPOINT C:\Qumram\start.ps1 ``` And i deploy a stack using the below `compose.yml` file ``` version: '3.0' volumes: windows-configuration: services: config-windows-volume: image: josejiby/qumram:testconfig-windows-dev volumes: - windows-configuration:\\c\Qumram\config deploy: mode: replicated replicas: 1 placement: constraints: - engine.labels.type == windows restart_policy: condition: on-failure ``` I used to use `volumes_from` in the earlier 2.0 format in windowsservercore to share the volume, however i am using the new v3 format hence the named volume in compose file. **Steps to reproduce the issue:** 1. I run `docker -H linuxhost stack deploy -c compose.yml test 2. It schedules and starts running the container 3. The container fails throwing an error `starting container failed: container a85162e691c88570cc097eb709862659838d6e68291bf55f6aa1672b887bbc7c encountered an error during Start: failure in a Windows system call: The compute system exited unexpectedly. (0xc0370106)` **Describe the results you received:** The container should have started exporting the folder as a named volume **Describe the results you expected:** The container crashes **Additional information you deem important (e.g. 
issue happens only occasionally):** **Output of `docker version`:** ``` Client: Version: 1.13.1 API version: 1.26 Go version: go1.7.5 Git commit: 092cba3 Built: Wed Feb 8 08:47:51 2017 OS/Arch: darwin/amd64 Server: Version: 1.13.1 API version: 1.26 (minimum version 1.24) Go version: go1.7.5 Git commit: 092cba3 Built: Wed Feb 8 08:47:51 2017 OS/Arch: windows/amd64 Experimental: false ``` **Output of `docker info`:** ``` Containers: 5 Running: 0 Paused: 0 Stopped: 5 Images: 57 Server Version: 1.13.1 Storage Driver: windowsfilter Windows: Logging Driver: json-file Plugins: Volume: local Network: l2bridge l2tunnel nat null overlay transparent Swarm: active NodeID: mzpe8u9g1448u9aw7ph2s8wub Is Manager: false Node Address: 10.211.55.7 Manager Addresses: 10.211.55.8:2377 Default Isolation: hyperv Kernel Version: 10.0 15031 (15031.0.amd64fre.rs2_release.170204-1546) Operating System: Windows 10 Enterprise Insider Preview OSType: windows Architecture: x86_64 CPUs: 1 Total Memory: 1.983 GiB Name: windows ID: 5262:FAUI:VLWB:COL4:Z3HH:5XGE:DBPD:HFBS:VYXB:CNAY:VKV3:CGFT Docker Root Dir: C:\ProgramData\docker Debug Mode (client): false Debug Mode (server): false Username: josejiby Registry: https://index.docker.io/v1/ Labels: type=windows Experimental: false Insecure Registries: 127.0.0.0/8 Live Restore Enabled: false ``` **Additional environment details (AWS, VirtualBox, physical, etc.):** The windows host is running inside parallels along side an ubuntu vm, which is the master of the swarm.
1.0
Exporting volume from container running in windows 10 insider preview doesn't work - **Description** I am trying the latest windows 10 insider preview build with docker overlay driver support. I have a image thats build with the below `Dockerfile` ``` FROM microsoft/windowsservercore COPY . C:/temp SHELL ["powershell", "-Command"] RUN New-Item c:\Qumram -Type directory; \ New-Item C:\Qumram\start.ps1 -Type File -Value \ 'Param( \ [string]$LogFilePath = \"C:\Qumram\config\config.log\" \ ) \ Copy-Item C:\temp\* C:\Qumram\config -Recurse | Out-Null; \ New-Item -Path $LogFilePath -Itemtype file -Force | Out-Null; \ Get-Content -Path $LogFilePath -Wait;' VOLUME C:\\Qumram\\config SHELL ["powershell", "-File"] HEALTHCHECK CMD exit 0 ENTRYPOINT C:\Qumram\start.ps1 ``` And i deploy a stack using the below `compose.yml` file ``` version: '3.0' volumes: windows-configuration: services: config-windows-volume: image: josejiby/qumram:testconfig-windows-dev volumes: - windows-configuration:\\c\Qumram\config deploy: mode: replicated replicas: 1 placement: constraints: - engine.labels.type == windows restart_policy: condition: on-failure ``` I used to use `volumes_from` in the earlier 2.0 format in windowsservercore to share the volume, however i am using the new v3 format hence the named volume in compose file. **Steps to reproduce the issue:** 1. I run `docker -H linuxhost stack deploy -c compose.yml test 2. It schedules and starts running the container 3. The container fails throwing an error `starting container failed: container a85162e691c88570cc097eb709862659838d6e68291bf55f6aa1672b887bbc7c encountered an error during Start: failure in a Windows system call: The compute system exited unexpectedly. (0xc0370106)` **Describe the results you received:** The container should have started exporting the folder as a named volume **Describe the results you expected:** The container crashes **Additional information you deem important (e.g. 
issue happens only occasionally):** **Output of `docker version`:** ``` Client: Version: 1.13.1 API version: 1.26 Go version: go1.7.5 Git commit: 092cba3 Built: Wed Feb 8 08:47:51 2017 OS/Arch: darwin/amd64 Server: Version: 1.13.1 API version: 1.26 (minimum version 1.24) Go version: go1.7.5 Git commit: 092cba3 Built: Wed Feb 8 08:47:51 2017 OS/Arch: windows/amd64 Experimental: false ``` **Output of `docker info`:** ``` Containers: 5 Running: 0 Paused: 0 Stopped: 5 Images: 57 Server Version: 1.13.1 Storage Driver: windowsfilter Windows: Logging Driver: json-file Plugins: Volume: local Network: l2bridge l2tunnel nat null overlay transparent Swarm: active NodeID: mzpe8u9g1448u9aw7ph2s8wub Is Manager: false Node Address: 10.211.55.7 Manager Addresses: 10.211.55.8:2377 Default Isolation: hyperv Kernel Version: 10.0 15031 (15031.0.amd64fre.rs2_release.170204-1546) Operating System: Windows 10 Enterprise Insider Preview OSType: windows Architecture: x86_64 CPUs: 1 Total Memory: 1.983 GiB Name: windows ID: 5262:FAUI:VLWB:COL4:Z3HH:5XGE:DBPD:HFBS:VYXB:CNAY:VKV3:CGFT Docker Root Dir: C:\ProgramData\docker Debug Mode (client): false Debug Mode (server): false Username: josejiby Registry: https://index.docker.io/v1/ Labels: type=windows Experimental: false Insecure Registries: 127.0.0.0/8 Live Restore Enabled: false ``` **Additional environment details (AWS, VirtualBox, physical, etc.):** The windows host is running inside parallels along side an ubuntu vm, which is the master of the swarm.
non_process
exporting volume from container running in windows insider preview doesn t work description i am trying the latest windows insider preview build with docker overlay driver support i have a image thats build with the below dockerfile from microsoft windowsservercore copy c temp shell run new item c qumram type directory new item c qumram start type file value param logfilepath c qumram config config log copy item c temp c qumram config recurse out null new item path logfilepath itemtype file force out null get content path logfilepath wait volume c qumram config shell healthcheck cmd exit entrypoint c qumram start and i deploy a stack using the below compose yml file version volumes windows configuration services config windows volume image josejiby qumram testconfig windows dev volumes windows configuration c qumram config deploy mode replicated replicas placement constraints engine labels type windows restart policy condition on failure i used to use volumes from in the earlier format in windowsservercore to share the volume however i am using the new format hence the named volume in compose file steps to reproduce the issue i run docker h linuxhost stack deploy c compose yml test it schedules and starts running the container the container fails throwing an error starting container failed container encountered an error during start failure in a windows system call the compute system exited unexpectedly describe the results you received the container should have started exporting the folder as a named volume describe the results you expected the container crashes additional information you deem important e g issue happens only occasionally output of docker version client version api version go version git commit built wed feb os arch darwin server version api version minimum version go version git commit built wed feb os arch windows experimental false output of docker info containers running paused stopped images server version storage driver windowsfilter windows 
logging driver json file plugins volume local network nat null overlay transparent swarm active nodeid is manager false node address manager addresses default isolation hyperv kernel version release operating system windows enterprise insider preview ostype windows architecture cpus total memory gib name windows id faui vlwb dbpd hfbs vyxb cnay cgft docker root dir c programdata docker debug mode client false debug mode server false username josejiby registry labels type windows experimental false insecure registries live restore enabled false additional environment details aws virtualbox physical etc the windows host is running inside parallels along side an ubuntu vm which is the master of the swarm
0
14,743
8,679,685,436
IssuesEvent
2018-12-01 01:32:23
vim/vim
https://api.github.com/repos/vim/vim
closed
Feature Request: cache syntax highlighting to improve 'relativenumber' scroll performance
performance
Hello Vim team, Spawning a new feature request from the existing [Ruby high CPU usage #282 issue](https://github.com/vim/vim/issues/282). Note, the [vim-ruby #243 issue](https://github.com/vim-ruby/vim-ruby/issues/243) is fundamentally the same issue as well. Basically the issue boils down to syntax highlighting needing to be calculated for every visible line when scrolling up or down with *relativenumber* enabled. The more lines visible and the slower the host CPU the worse the performance hit especially for languages with complex syntax highlighters such as Ruby. In issue [#282](https://github.com/vim/vim/issues/282#issuecomment-164013461) Bram said this (Dec 2015): > When moving the cursor around, with 'relativenumber' or 'cursorcolumn' > set, cause a full redraw, because every line changes. I'm wondering if > it is possible to cache the result of syntax highlighting. It would > require storing the "attr" value, before it's mixed with other > attributes, such as from 'cursorcolumn'. So long as we don't scroll the > screen update would be much faster. These days memory is plenty, thus > that isn't a problem. > > As soon as you page forward it's just as slow, of course, thus making > the syntax highlighting faster is still desired. I suspect, if implemented, that this suggestion indeed would greatly help scroll performance for those of us who enable *relativenumber*. A new setting possibly? *cachesyntax*? I leave it to the Vim masters to either: do this, or not do this, or schedule later, or close if too hard. Thank you.
True
Feature Request: cache syntax highlighting to improve 'relativenumber' scroll performance - Hello Vim team, Spawning a new feature request from the existing [Ruby high CPU usage #282 issue](https://github.com/vim/vim/issues/282). Note, the [vim-ruby #243 issue](https://github.com/vim-ruby/vim-ruby/issues/243) is fundamentally the same issue as well. Basically the issue boils down to syntax highlighting needing to be calculated for every visible line when scrolling up or down with *relativenumber* enabled. The more lines visible and the slower the host CPU the worse the performance hit especially for languages with complex syntax highlighters such as Ruby. In issue [#282](https://github.com/vim/vim/issues/282#issuecomment-164013461) Bram said this (Dec 2015): > When moving the cursor around, with 'relativenumber' or 'cursorcolumn' > set, cause a full redraw, because every line changes. I'm wondering if > it is possible to cache the result of syntax highlighting. It would > require storing the "attr" value, before it's mixed with other > attributes, such as from 'cursorcolumn'. So long as we don't scroll the > screen update would be much faster. These days memory is plenty, thus > that isn't a problem. > > As soon as you page forward it's just as slow, of course, thus making > the syntax highlighting faster is still desired. I suspect, if implemented, that this suggestion indeed would greatly help scroll performance for those of us who enable *relativenumber*. A new setting possibly? *cachesyntax*? I leave it to the Vim masters to either: do this, or not do this, or schedule later, or close if too hard. Thank you.
non_process
feature request cache syntax highlighting to improve relativenumber scroll performance hello vim team spawning a new feature request from the existing note the is fundamentally the same issue as well basically the issue boils down to syntax highlighting needing to be calculated for every visible line when scrolling up or down with relativenumber enabled the more lines visible and the slower the host cpu the worse the performance hit especially for languages with complex syntax highlighters such as ruby in issue bram said this dec when moving the cursor around with relativenumber or cursorcolumn set cause a full redraw because every line changes i m wondering if it is possible to cache the result of syntax highlighting it would require storing the attr value before it s mixed with other attributes such as from cursorcolumn so long as we don t scroll the screen update would be much faster these days memory is plenty thus that isn t a problem as soon as you page forward it s just as slow of course thus making the syntax highlighting faster is still desired i suspect if implemented that this suggestion indeed would greatly help scroll performance for those of us who enable relativenumber a new setting possibly cachesyntax i leave it to the vim masters to either do this or not do this or schedule later or close if too hard thank you
0
422,851
12,287,489,099
IssuesEvent
2020-05-09 12:26:48
googleapis/elixir-google-api
https://api.github.com/repos/googleapis/elixir-google-api
opened
Synthesis failed for Slides
api: slides autosynth failure priority: p1 type: bug
Hello! Autosynth couldn't regenerate Slides. :broken_heart: Here's the output from running `synth.py`: ``` 2020-05-09 05:19:53 [INFO] logs will be written to: /tmpfs/src/github/synthtool/logs/googleapis/elixir-google-api 2020-05-09 05:19:53,834 autosynth > logs will be written to: /tmpfs/src/github/synthtool/logs/googleapis/elixir-google-api Switched to branch 'autosynth-slides' 2020-05-09 05:19:55 [INFO] Running synthtool 2020-05-09 05:19:55,412 autosynth > Running synthtool 2020-05-09 05:19:55 [INFO] ['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/slides/synth.metadata', 'synth.py', '--'] 2020-05-09 05:19:55,412 autosynth > ['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/slides/synth.metadata', 'synth.py', '--'] 2020-05-09 05:19:55,618 synthtool > Executing /home/kbuilder/.cache/synthtool/elixir-google-api/synth.py. On branch autosynth-slides nothing to commit, working tree clean 2020-05-09 05:19:55,694 synthtool > Cloning https://github.com/googleapis/elixir-google-api.git. 2020-05-09 05:19:56,145 synthtool > Running: docker run --rm -v/home/kbuilder/.cache/synthtool/elixir-google-api:/workspace -v/var/run/docker.sock:/var/run/docker.sock -e USER_GROUP=1000:1000 -w /workspace gcr.io/cloud-devrel-public-resources/elixir19 scripts/generate_client.sh Slides 2020-05-09 05:20:01,126 synthtool > No files in sources /home/kbuilder/.cache/synthtool/elixir-google-api/clients were copied. Does the source contain files? 
Traceback (most recent call last): File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 102, in <module> main() File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 829, in __call__ return self.main(*args, **kwargs) File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 782, in main rv = self.invoke(ctx) File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 610, in invoke return callback(*args, **kwargs) File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 94, in main spec.loader.exec_module(synth_module) # type: ignore File "/tmpfs/src/github/synthtool/synthtool/metadata.py", line 180, in __exit__ write(self.metadata_file_path) File "/tmpfs/src/github/synthtool/synthtool/metadata.py", line 112, in write with open(outfile, "w") as fh: FileNotFoundError: [Errno 2] No such file or directory: 'clients/slides/synth.metadata' 2020-05-09 05:20:01 [ERROR] Synthesis failed 2020-05-09 05:20:01,156 autosynth > Synthesis failed Traceback (most recent call last): File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 599, in <module> main() File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 471, in main return _inner_main(temp_dir) File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 549, in _inner_main 
).synthesize(base_synth_log_path) File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 118, in synthesize synth_proc.check_returncode() # Raise an exception. File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 389, in check_returncode self.stderr) subprocess.CalledProcessError: Command '['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/slides/synth.metadata', 'synth.py', '--', 'Slides']' returned non-zero exit status 1. ``` Google internal developers can see the full log [here](https://sponge/11ff3741-9158-4831-8681-fff828f77e1a).
1.0
Synthesis failed for Slides - Hello! Autosynth couldn't regenerate Slides. :broken_heart: Here's the output from running `synth.py`: ``` 2020-05-09 05:19:53 [INFO] logs will be written to: /tmpfs/src/github/synthtool/logs/googleapis/elixir-google-api 2020-05-09 05:19:53,834 autosynth > logs will be written to: /tmpfs/src/github/synthtool/logs/googleapis/elixir-google-api Switched to branch 'autosynth-slides' 2020-05-09 05:19:55 [INFO] Running synthtool 2020-05-09 05:19:55,412 autosynth > Running synthtool 2020-05-09 05:19:55 [INFO] ['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/slides/synth.metadata', 'synth.py', '--'] 2020-05-09 05:19:55,412 autosynth > ['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/slides/synth.metadata', 'synth.py', '--'] 2020-05-09 05:19:55,618 synthtool > Executing /home/kbuilder/.cache/synthtool/elixir-google-api/synth.py. On branch autosynth-slides nothing to commit, working tree clean 2020-05-09 05:19:55,694 synthtool > Cloning https://github.com/googleapis/elixir-google-api.git. 2020-05-09 05:19:56,145 synthtool > Running: docker run --rm -v/home/kbuilder/.cache/synthtool/elixir-google-api:/workspace -v/var/run/docker.sock:/var/run/docker.sock -e USER_GROUP=1000:1000 -w /workspace gcr.io/cloud-devrel-public-resources/elixir19 scripts/generate_client.sh Slides 2020-05-09 05:20:01,126 synthtool > No files in sources /home/kbuilder/.cache/synthtool/elixir-google-api/clients were copied. Does the source contain files? 
Traceback (most recent call last): File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 102, in <module> main() File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 829, in __call__ return self.main(*args, **kwargs) File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 782, in main rv = self.invoke(ctx) File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 610, in invoke return callback(*args, **kwargs) File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 94, in main spec.loader.exec_module(synth_module) # type: ignore File "/tmpfs/src/github/synthtool/synthtool/metadata.py", line 180, in __exit__ write(self.metadata_file_path) File "/tmpfs/src/github/synthtool/synthtool/metadata.py", line 112, in write with open(outfile, "w") as fh: FileNotFoundError: [Errno 2] No such file or directory: 'clients/slides/synth.metadata' 2020-05-09 05:20:01 [ERROR] Synthesis failed 2020-05-09 05:20:01,156 autosynth > Synthesis failed Traceback (most recent call last): File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 599, in <module> main() File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 471, in main return _inner_main(temp_dir) File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 549, in _inner_main 
).synthesize(base_synth_log_path) File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 118, in synthesize synth_proc.check_returncode() # Raise an exception. File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 389, in check_returncode self.stderr) subprocess.CalledProcessError: Command '['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/slides/synth.metadata', 'synth.py', '--', 'Slides']' returned non-zero exit status 1. ``` Google internal developers can see the full log [here](https://sponge/11ff3741-9158-4831-8681-fff828f77e1a).
non_process
synthesis failed for slides hello autosynth couldn t regenerate slides broken heart here s the output from running synth py logs will be written to tmpfs src github synthtool logs googleapis elixir google api autosynth logs will be written to tmpfs src github synthtool logs googleapis elixir google api switched to branch autosynth slides running synthtool autosynth running synthtool autosynth synthtool executing home kbuilder cache synthtool elixir google api synth py on branch autosynth slides nothing to commit working tree clean synthtool cloning synthtool running docker run rm v home kbuilder cache synthtool elixir google api workspace v var run docker sock var run docker sock e user group w workspace gcr io cloud devrel public resources scripts generate client sh slides synthtool no files in sources home kbuilder cache synthtool elixir google api clients were copied does the source contain files traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src github synthtool synthtool main py line in main file tmpfs src github synthtool env lib site packages click core py line in call return self main args kwargs file tmpfs src github synthtool env lib site packages click core py line in main rv self invoke ctx file tmpfs src github synthtool env lib site packages click core py line in invoke return ctx invoke self callback ctx params file tmpfs src github synthtool env lib site packages click core py line in invoke return callback args kwargs file tmpfs src github synthtool synthtool main py line in main spec loader exec module synth module type ignore file tmpfs src github synthtool synthtool metadata py line in exit write self metadata file path file tmpfs src github synthtool synthtool metadata py line in write with open outfile w as fh filenotfounderror no such file or directory clients slides synth 
metadata synthesis failed autosynth synthesis failed traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src github synthtool autosynth synth py line in main file tmpfs src github synthtool autosynth synth py line in main return inner main temp dir file tmpfs src github synthtool autosynth synth py line in inner main synthesize base synth log path file tmpfs src github synthtool autosynth synthesizer py line in synthesize synth proc check returncode raise an exception file home kbuilder pyenv versions lib subprocess py line in check returncode self stderr subprocess calledprocesserror command returned non zero exit status google internal developers can see the full log
0
10,548
13,327,743,107
IssuesEvent
2020-08-27 13:36:09
prisma/prisma-client-js
https://api.github.com/repos/prisma/prisma-client-js
closed
Unmigrated database leads to minimal error message for missing table
kind/improvement process/candidate size/XS topic: error
When e.g. running Prisma Client in a Netlify function, this is what the error output looks when you did not migrate your database yet (and all tables are missing): ``` ◈ Error during invocation: { errorMessage: 'Error occurred during query execution:\n' + 'ConnectorError(ConnectorError { user_facing_error: None, kind: TableDoesNotExist { table: "images2.RequestLog" } })', errorType: 'Error', ``` The error message here could be much more helpful, e.g. suggesting the option that the database might be empty and not migrated yet or similar.
1.0
Unmigrated database leads to minimal error message for missing table - When e.g. running Prisma Client in a Netlify function, this is what the error output looks when you did not migrate your database yet (and all tables are missing): ``` ◈ Error during invocation: { errorMessage: 'Error occurred during query execution:\n' + 'ConnectorError(ConnectorError { user_facing_error: None, kind: TableDoesNotExist { table: "images2.RequestLog" } })', errorType: 'Error', ``` The error message here could be much more helpful, e.g. suggesting the option that the database might be empty and not migrated yet or similar.
process
unmigrated database leads to minimal error message for missing table when e g running prisma client in a netlify function this is what the error output looks when you did not migrate your database yet and all tables are missing ◈ error during invocation errormessage error occurred during query execution n connectorerror connectorerror user facing error none kind tabledoesnotexist table requestlog errortype error the error message here could be much more helpful e g suggesting the option that the database might be empty and not migrated yet or similar
1
19,703
26,052,917,679
IssuesEvent
2022-12-22 20:48:50
aolabNeuro/analyze
https://api.github.com/repos/aolabNeuro/analyze
closed
eye samplerate
help wanted preprocessing
the sampling rate of eye data from rig1 is ~240 hz, synced with optitrack. and yet we are saving the full 25khz sampled version from the ecube into *_eye.hdf. which makes for really big files and wasted space. I think we can safely downsample to 240hz without losing any resolution.
1.0
eye samplerate - the sampling rate of eye data from rig1 is ~240 hz, synced with optitrack. and yet we are saving the full 25khz sampled version from the ecube into *_eye.hdf. which makes for really big files and wasted space. I think we can safely downsample to 240hz without losing any resolution.
process
eye samplerate the sampling rate of eye data from is hz synced with optitrack and yet we are saving the full sampled version from the ecube into eye hdf which makes for really big files and wasted space i think we can safely downsample to without losing any resolution
1
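The downsampling proposed in the record above (25 kHz eye data reduced to ~240 Hz) can be sketched as follows. This is not the aolabNeuro/analyze pipeline — just a naive illustration of the size reduction; a real implementation should low-pass filter first (e.g. with scipy.signal.decimate) to avoid aliasing:

```python
def downsample(samples, fs_in, fs_out):
    """Naive downsample by keeping every Nth sample (no anti-alias filter)."""
    step = round(fs_in / fs_out)          # ~104 for 25 kHz -> 240 Hz
    return samples[::step], fs_in / step  # data and effective sample rate

eye_data = list(range(25_000))            # one second at 25 kHz
small, fs = downsample(eye_data, 25_000, 240)
print(len(small), round(fs, 1))          # ~100x fewer samples, ~240 Hz
```

Because 25 000 is not an integer multiple of 240, the effective rate lands near 240.4 Hz; a real pipeline would resample rather than decimate if the exact rate matters.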
728,127
25,067,212,962
IssuesEvent
2022-11-07 09:18:15
TencentBlueKing/bk-nodeman
https://api.github.com/repos/TencentBlueKing/bk-nodeman
closed
[BUG] 2.0 Agent 注册及健康检查上报错误信息缺失
kind/bug version/V2.2.X priority/middle module/script
**问题描述** 2.0 Agent 注册及健康检查上报错误信息缺失 **重现方法** 1. 安装 Agent 或 Proxy 2. 让 register / healthz 报错 **预期现象** 把报错信息上报到后台,例如:xxx failed:xxxx **截屏** ![WeChatWorkScreenshot_4f39a793-ccb9-43f8-b949-e57861e6d10e](https://user-images.githubusercontent.com/42019787/197098529-618ccc78-8a0a-4c15-851f-c790330c9d3c.png) **请提供以下信息** - [x] bk-nodeman 版本 (发布版本号 或 git tag): <!-- `示例: V3.1.32-ce 或者 git sha. 请不要使用 "最新版本" 或 "当前版本"等无法准确定位代码版本的语句描述` --> - [ ] 蓝鲸PaaS 版本:<!-- `<示例:PaaS 3.0.58、PaaSAgent 3.0.9` --> - [ ] bk-nodeman 异常日志:
1.0
[BUG] 2.0 Agent 注册及健康检查上报错误信息缺失 - **问题描述** 2.0 Agent 注册及健康检查上报错误信息缺失 **重现方法** 1. 安装 Agent 或 Proxy 2. 让 register / healthz 报错 **预期现象** 把报错信息上报到后台,例如:xxx failed:xxxx **截屏** ![WeChatWorkScreenshot_4f39a793-ccb9-43f8-b949-e57861e6d10e](https://user-images.githubusercontent.com/42019787/197098529-618ccc78-8a0a-4c15-851f-c790330c9d3c.png) **请提供以下信息** - [x] bk-nodeman 版本 (发布版本号 或 git tag): <!-- `示例: V3.1.32-ce 或者 git sha. 请不要使用 "最新版本" 或 "当前版本"等无法准确定位代码版本的语句描述` --> - [ ] 蓝鲸PaaS 版本:<!-- `<示例:PaaS 3.0.58、PaaSAgent 3.0.9` --> - [ ] bk-nodeman 异常日志:
non_process
agent 注册及健康检查上报错误信息缺失 问题描述 agent 注册及健康检查上报错误信息缺失 重现方法 安装 agent 或 proxy 让 register healthz 报错 预期现象 把报错信息上报到后台,例如:xxx failed:xxxx 截屏 请提供以下信息 bk nodeman 版本 发布版本号 或 git tag : 蓝鲸paas 版本: bk nodeman 异常日志:
0
117,887
17,567,038,849
IssuesEvent
2021-08-14 00:09:48
MythicDrops/worldguard-adapters
https://api.github.com/repos/MythicDrops/worldguard-adapters
closed
CVE-2020-7774 (High) detected in y18n-4.0.0.tgz - autoclosed
security vulnerability
## CVE-2020-7774 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>y18n-4.0.0.tgz</b></p></summary> <p>the bare-bones internationalization library used by yargs</p> <p>Library home page: <a href="https://registry.npmjs.org/y18n/-/y18n-4.0.0.tgz">https://registry.npmjs.org/y18n/-/y18n-4.0.0.tgz</a></p> <p>Path to dependency file: worldguard-adapters/package.json</p> <p>Path to vulnerable library: worldguard-adapters/node_modules/npm/node_modules/y18n/package.json</p> <p> Dependency Hierarchy: - semantic-release-17.3.9.tgz (Root Library) - npm-7.0.10.tgz - npm-6.14.11.tgz - cacache-12.0.3.tgz - :x: **y18n-4.0.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/MythicDrops/worldguard-adapters/commit/b82053385ceab5e0b18a03505fd78943aa4f3a59">b82053385ceab5e0b18a03505fd78943aa4f3a59</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> This affects the package y18n before 3.2.2, 4.0.1 and 5.0.5. 
PoC by po6ix: const y18n = require('y18n')(); y18n.setLocale('__proto__'); y18n.updateLocale({polluted: true}); console.log(polluted); // true <p>Publish Date: 2020-11-17 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7774>CVE-2020-7774</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/1654">https://www.npmjs.com/advisories/1654</a></p> <p>Release Date: 2020-11-17</p> <p>Fix Resolution: 3.2.2, 4.0.1, 5.0.5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-7774 (High) detected in y18n-4.0.0.tgz - autoclosed - ## CVE-2020-7774 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>y18n-4.0.0.tgz</b></p></summary> <p>the bare-bones internationalization library used by yargs</p> <p>Library home page: <a href="https://registry.npmjs.org/y18n/-/y18n-4.0.0.tgz">https://registry.npmjs.org/y18n/-/y18n-4.0.0.tgz</a></p> <p>Path to dependency file: worldguard-adapters/package.json</p> <p>Path to vulnerable library: worldguard-adapters/node_modules/npm/node_modules/y18n/package.json</p> <p> Dependency Hierarchy: - semantic-release-17.3.9.tgz (Root Library) - npm-7.0.10.tgz - npm-6.14.11.tgz - cacache-12.0.3.tgz - :x: **y18n-4.0.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/MythicDrops/worldguard-adapters/commit/b82053385ceab5e0b18a03505fd78943aa4f3a59">b82053385ceab5e0b18a03505fd78943aa4f3a59</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> This affects the package y18n before 3.2.2, 4.0.1 and 5.0.5. 
PoC by po6ix: const y18n = require('y18n')(); y18n.setLocale('__proto__'); y18n.updateLocale({polluted: true}); console.log(polluted); // true <p>Publish Date: 2020-11-17 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7774>CVE-2020-7774</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/1654">https://www.npmjs.com/advisories/1654</a></p> <p>Release Date: 2020-11-17</p> <p>Fix Resolution: 3.2.2, 4.0.1, 5.0.5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in tgz autoclosed cve high severity vulnerability vulnerable library tgz the bare bones internationalization library used by yargs library home page a href path to dependency file worldguard adapters package json path to vulnerable library worldguard adapters node modules npm node modules package json dependency hierarchy semantic release tgz root library npm tgz npm tgz cacache tgz x tgz vulnerable library found in head commit a href found in base branch main vulnerability details this affects the package before and poc by const require setlocale proto updatelocale polluted true console log polluted true publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
759,184
26,582,955,055
IssuesEvent
2023-01-22 17:32:42
dotnet/iot
https://api.github.com/repos/dotnet/iot
closed
Color Sensor's code not working
bug Priority:3
Each mode of the ColorSensorMode is not working except the mode "Color". RGB, Reflexion, Ambient are giving None and value equals 0. ```csharp static void Main(string[] args) { Brick _brick = new Brick(); EV3ColorSensor color = new EV3ColorSensor(_brick, SensorPort.Port3); color.ColorMode = ColorSensorMode.Ambient; int count = 0; while (count < 100) { Console.WriteLine(color.ReadColor()); var value = color.Read(); Console.WriteLine(value + "%"); if (value == 0) { color.SelectNextMode(); Console.WriteLine("Mode changed" + color.SelectedMode()); } Task.Delay(300).Wait(); } } ``` **Expected behavior** That's testing each mode, if 0 given : swap mode. Result : None 0% Mode changedReflection None 0% Mode changedGreen None 0% Mode changedBlue None 0% Mode changedRed None 0% Mode changedColor Brown 7% (not percentage) Last Version Used Add following information: - `dotnet --info` : SDK .NET Core (reflétant tous les global.json) : Version: 3.0.100 - `dotnet --info` on the machine where app is being run (not applicable for self-contained apps) - Version of `System.Device.Gpio` package : Last Version - Version of `Iot.Device.Bindings` package : Last Version
1.0
Color Sensor's code not working - Each mode of the ColorSensorMode is not working except the mode "Color". RGB, Reflexion, Ambient are giving None and value equals 0. ```csharp static void Main(string[] args) { Brick _brick = new Brick(); EV3ColorSensor color = new EV3ColorSensor(_brick, SensorPort.Port3); color.ColorMode = ColorSensorMode.Ambient; int count = 0; while (count < 100) { Console.WriteLine(color.ReadColor()); var value = color.Read(); Console.WriteLine(value + "%"); if (value == 0) { color.SelectNextMode(); Console.WriteLine("Mode changed" + color.SelectedMode()); } Task.Delay(300).Wait(); } } ``` **Expected behavior** That's testing each mode, if 0 given : swap mode. Result : None 0% Mode changedReflection None 0% Mode changedGreen None 0% Mode changedBlue None 0% Mode changedRed None 0% Mode changedColor Brown 7% (not percentage) Last Version Used Add following information: - `dotnet --info` : SDK .NET Core (reflétant tous les global.json) : Version: 3.0.100 - `dotnet --info` on the machine where app is being run (not applicable for self-contained apps) - Version of `System.Device.Gpio` package : Last Version - Version of `Iot.Device.Bindings` package : Last Version
non_process
color sensor s code not working each mode of the colorsensormode is not working except the mode color rgb reflexion ambient are giving none and value equals csharp static void main string args brick brick new brick color new brick sensorport color colormode colorsensormode ambient int count while count console writeline color readcolor var value color read console writeline value if value color selectnextmode console writeline mode changed color selectedmode task delay wait expected behavior that s testing each mode if given swap mode result none mode changedreflection none mode changedgreen none mode changedblue none mode changedred none mode changedcolor brown not percentage last version used add following information dotnet info sdk net core reflétant tous les global json   version dotnet info on the machine where app is being run not applicable for self contained apps version of system device gpio package last version version of iot device bindings package last version
0
22,227
30,775,922,839
IssuesEvent
2023-07-31 06:26:28
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
Post-migration cleanup
automation/svc triaged assigned-to-author doc-enhancement process-automation/subsvc Pri2
The v1 boxes had a log analytics workspace that they wrote to. Not sure if others did it that way, but our hybrid workers wrote to their own workspace. It doesn't look like v2 needs it. If that's the case, shouldn't we be able to remove the workspace after we remove the v1 machine? --- #### Document Details ⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.* * ID: e46cfd4e-9663-145f-dd23-28effd108738 * Version Independent ID: f5126582-7211-65c9-0817-cef899f6aee5 * Content: [Migrate an existing agent-based hybrid workers to extension-based-workers in Azure Automation](https://learn.microsoft.com/en-us/azure/automation/migrate-existing-agent-based-hybrid-worker-to-extension-based-workers?tabs=bicep-template%2Cwin-hrw) * Content Source: [articles/automation/migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md) * Service: **automation** * Sub-service: **process-automation** * GitHub Login: @SnehaSudhirG * Microsoft Alias: **sudhirsneha**
1.0
Post-migration cleanup - The v1 boxes had a log analytics workspace that they wrote to. Not sure if others did it that way, but our hybrid workers wrote to their own workspace. It doesn't look like v2 needs it. If that's the case, shouldn't we be able to remove the workspace after we remove the v1 machine? --- #### Document Details ⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.* * ID: e46cfd4e-9663-145f-dd23-28effd108738 * Version Independent ID: f5126582-7211-65c9-0817-cef899f6aee5 * Content: [Migrate an existing agent-based hybrid workers to extension-based-workers in Azure Automation](https://learn.microsoft.com/en-us/azure/automation/migrate-existing-agent-based-hybrid-worker-to-extension-based-workers?tabs=bicep-template%2Cwin-hrw) * Content Source: [articles/automation/migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md) * Service: **automation** * Sub-service: **process-automation** * GitHub Login: @SnehaSudhirG * Microsoft Alias: **sudhirsneha**
process
post migration cleanup the boxes had a log analytics workspace that they wrote to not sure if others did it that way but our hybrid workers wrote to their own workspace it doesn t look like needs it if that s the case shouldn t we be able to remove the workspace after we remove the machine document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login snehasudhirg microsoft alias sudhirsneha
1
103
2,539,518,211
IssuesEvent
2015-01-27 15:51:24
tinkerpop/tinkerpop3
https://api.github.com/repos/tinkerpop/tinkerpop3
closed
Default strategies for anonymous traversals
enhancement process
Trying to convert the old [Flow Rank Pattern](https://github.com/tinkerpop/gremlin/wiki/Flow-Rank-Pattern) code I stumbled across this issue: ``` gremlin> g = TinkerFactory.createModern() ==>tinkergraph[vertices:6 edges:6] gremlin> software = g.V().has(label, "software").toList() ==>v[3] ==>v[5] gremlin> __.inject(software).unfold().in("created").values("name").groupCount() Side effects do not have a value for provided key: null Display stack trace? [yN] N gremlin> __.inject(software).unfold().in("created").values("name").groupCount().cap() Side effects do not have a value for provided key: null Display stack trace? [yN] N ``` The workaround: ``` gremlin> import com.tinkerpop.gremlin.process.graph.strategy.* ... gremlin> t = __.inject(software).unfold().in("created").values("name").groupCount(); null ==>null gremlin> t.getStrategies().addStrategies(SideEffectCapStrategy.instance(), SideEffectRegistrationStrategy.instance()) ==>strategies[SideEffectCapStrategy, SideEffectRegistrationStrategy] gremlin> t ==>[peter:1, josh:2, marko:1] ``` Hence it would be nice to have some default strategies applied to anonymous traversals.
1.0
Default strategies for anonymous traversals - Trying to convert the old [Flow Rank Pattern](https://github.com/tinkerpop/gremlin/wiki/Flow-Rank-Pattern) code I stumbled across this issue: ``` gremlin> g = TinkerFactory.createModern() ==>tinkergraph[vertices:6 edges:6] gremlin> software = g.V().has(label, "software").toList() ==>v[3] ==>v[5] gremlin> __.inject(software).unfold().in("created").values("name").groupCount() Side effects do not have a value for provided key: null Display stack trace? [yN] N gremlin> __.inject(software).unfold().in("created").values("name").groupCount().cap() Side effects do not have a value for provided key: null Display stack trace? [yN] N ``` The workaround: ``` gremlin> import com.tinkerpop.gremlin.process.graph.strategy.* ... gremlin> t = __.inject(software).unfold().in("created").values("name").groupCount(); null ==>null gremlin> t.getStrategies().addStrategies(SideEffectCapStrategy.instance(), SideEffectRegistrationStrategy.instance()) ==>strategies[SideEffectCapStrategy, SideEffectRegistrationStrategy] gremlin> t ==>[peter:1, josh:2, marko:1] ``` Hence it would be nice to have some default strategies applied to anonymous traversals.
process
default strategies for anonymous traversals trying to convert the old code i stumbled across this issue gremlin g tinkerfactory createmodern tinkergraph gremlin software g v has label software tolist v v gremlin inject software unfold in created values name groupcount side effects do not have a value for provided key null display stack trace n gremlin inject software unfold in created values name groupcount cap side effects do not have a value for provided key null display stack trace n the workaround gremlin import com tinkerpop gremlin process graph strategy gremlin t inject software unfold in created values name groupcount null null gremlin t getstrategies addstrategies sideeffectcapstrategy instance sideeffectregistrationstrategy instance strategies gremlin t hence it would be nice to have some default strategies applied to anonymous traversals
1
6,572
6,517,449,292
IssuesEvent
2017-08-27 23:42:10
dmitrinesterenko/cs224d
https://api.github.com/repos/dmitrinesterenko/cs224d
opened
Add specific targeting for active EC2 spot instances by using the --filters parameter in the aws cli command
infrastructure
Given I want to login and describe only the active spot instances When I run the AWS CLI commands Then they will only return the instances in the state I want such as Name=status,value="Active" Note: --filters Name=,Value is the parameter to the CLI.
1.0
Add specific targeting for active EC2 spot instances by using the --filters parameter in the aws cli command - Given I want to login and describe only the active spot instances When I run the AWS CLI commands Then they will only return the instances in the state I want such as Name=status,value="Active" Note: --filters Name=,Value is the parameter to the CLI.
non_process
add specific targeting for active spot instances by using the filters parameter in the aws cli command given i want to login and describe only the active spot instances when i run the aws cli commands then they will only return the instances in the state i want such as name status value active note filters name value is the parameter to the cli
0
658,858
21,911,291,750
IssuesEvent
2022-05-21 04:45:21
bcgov/entity
https://api.github.com/repos/bcgov/entity
closed
sentry error with address [postal code ] being long
bug Priority3 Relationships
**Describe the bug in current situation** Account creation failed due to address being long. More details are in here. https://sentry.io/organizations/registries/issues/2593014062/?referrer=alert_email&environment=production On more investigation , it's found that the postal code gets certain portions of country as `postal_code: ‘V1M 0A4cana’,` which causes the issue. Can wait till ops ticket comes to actually work on the bug. **Link bug to the User Story** **Impact of this bug** Describe the impact, i.e. what the impact is, and number of users impacted. **Chance of Occurring (high/medium/low/very low)** **Pre Conditions: which Env, any pre-requesites or assumptions to execute steps?** **Steps to Reproduce** as of now , steps to reproduce are unknown.
1.0
sentry error with address [postal code ] being long - **Describe the bug in current situation** Account creation failed due to address being long. More details are in here. https://sentry.io/organizations/registries/issues/2593014062/?referrer=alert_email&environment=production On more investigation , it's found that the postal code gets certain portions of country as `postal_code: ‘V1M 0A4cana’,` which causes the issue. Can wait till ops ticket comes to actually work on the bug. **Link bug to the User Story** **Impact of this bug** Describe the impact, i.e. what the impact is, and number of users impacted. **Chance of Occurring (high/medium/low/very low)** **Pre Conditions: which Env, any pre-requesites or assumptions to execute steps?** **Steps to Reproduce** as of now , steps to reproduce are unknown.
non_process
sentry error with address being long describe the bug in current situation account creation failed due to address being long more details are in here on more investigation it s found that the postal code gets certain portions of country as postal code ‘ ’ which causes the issue can wait till ops ticket comes to actually work on the bug link bug to the user story impact of this bug describe the impact i e what the impact is and number of users impacted chance of occurring high medium low very low pre conditions which env any pre requesites or assumptions to execute steps steps to reproduce as of now steps to reproduce are unknown
0
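The record above reports postal codes like `V1M 0A4cana` (a country fragment appended) slipping into account creation. A hypothetical validation step — not the bcgov code, just an illustration — that would have rejected such a value using the standard Canadian postal-code shape:

```python
import re

# Canadian postal codes: letter-digit-letter, optional space, digit-letter-digit.
# (Real rules also exclude D, F, I, O, Q, U; kept simple here.)
CA_POSTAL = re.compile(r"[A-Za-z]\d[A-Za-z] ?\d[A-Za-z]\d")

def is_valid_ca_postal(code):
    # fullmatch ensures trailing junk like "cana" fails the check
    return CA_POSTAL.fullmatch(code.strip()) is not None

print(is_valid_ca_postal("V1M 0A4"))      # True
print(is_valid_ca_postal("V1M 0A4cana"))  # False
```

Using `fullmatch` rather than `search` is the key design choice: it forces the whole field to be a postal code, so concatenation bugs like the one in the record fail fast instead of propagating downstream.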
20,006
26,480,199,893
IssuesEvent
2023-01-17 14:11:22
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
closed
how to make a dependent repository hidden?
type: support / not a bug (process) team-ExternalDeps untriaged
### Description of the bug: my project import grpc and brpc at same time, for some reason, brpc must use openssl, but grpc use boringssl, so link will failed for duplicated symbols, so how to make boringssl only visible to grpc? ### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible. _No response_ ### Which operating system are you running Bazel on? ubuntu 22.04 ### What is the output of `bazel info release`? 5.2.0 ### If `bazel info release` returns `development version` or `(@non-git)`, tell us how you built Bazel. _No response_ ### What's the output of `git remote get-url origin; git rev-parse master; git rev-parse HEAD` ? _No response_ ### Have you found anything relevant by searching the web? _No response_ ### Any other information, logs, or outputs that you want to share? _No response_
1.0
how to make a dependent repository hidden? - ### Description of the bug: my project import grpc and brpc at same time, for some reason, brpc must use openssl, but grpc use boringssl, so link will failed for duplicated symbols, so how to make boringssl only visible to grpc? ### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible. _No response_ ### Which operating system are you running Bazel on? ubuntu 22.04 ### What is the output of `bazel info release`? 5.2.0 ### If `bazel info release` returns `development version` or `(@non-git)`, tell us how you built Bazel. _No response_ ### What's the output of `git remote get-url origin; git rev-parse master; git rev-parse HEAD` ? _No response_ ### Have you found anything relevant by searching the web? _No response_ ### Any other information, logs, or outputs that you want to share? _No response_
process
how to make a dependent repository hidden description of the bug my project import grpc and brpc at same time for some reason brpc must use openssl but grpc use boringssl so link will failed for duplicated symbols so how to make boringssl only visible to grpc what s the simplest easiest way to reproduce this bug please provide a minimal example if possible no response which operating system are you running bazel on ubuntu what is the output of bazel info release if bazel info release returns development version or non git tell us how you built bazel no response what s the output of git remote get url origin git rev parse master git rev parse head no response have you found anything relevant by searching the web no response any other information logs or outputs that you want to share no response
1
4,698
7,541,407,624
IssuesEvent
2018-04-17 09:41:12
tinyMediaManager/tinyMediaManager
https://api.github.com/repos/tinyMediaManager/tinyMediaManager
closed
Problem with scrape extrafanart
bug processing
Version: 2.9.8-SNAPSHOT - NIGHTLY Build: 2018-04-02 23:03 OS: Windows 7 6.1 JDK: 1.8.0_161 amd64 Oracle Corporation Hello Two questions: 1. At the moment the fanart has the option to choose 1920 or 1280 or 300 and TMM 3 3840. Is it possible to add the option to download all graphics to the extrafanart folder, regardless of size? And options for downloading many posters to the extrafanart folder? 2. For a week I have a problem with downloading extrafanart, quite often it will download 1 or 2 or not at all. Then, to try again, I have to remove the movie from the database and load it again and again it does not react to downloading. I do not know if this is a TMM or FanartTV problem. With fanartTV pulls hard but manually from the site is OK. [tmm_logs.zip](https://github.com/tinyMediaManager/tinyMediaManager/files/1889457/tmm_logs.zip) description It seems that the problem is with downloading from the site Fanart.TV. Graphics are loading slowly and from Movie.DB quickly. Then when downloading, if TMM encounters a download problem (probably Fanart.TV) - it stops downloading the next graphics. I tried to disable Fanart.TV in settings - from Movie.DB it downloads normally. However, it is a pity some fanarts from Fanart.TV :). It can make an option in the TMM settings to change the order in which the graphics are loaded. Now is the first FanartTV second the moviedb (reverse order)
1.0
Problem with scrape extrafanart - Version: 2.9.8-SNAPSHOT - NIGHTLY Build: 2018-04-02 23:03 OS: Windows 7 6.1 JDK: 1.8.0_161 amd64 Oracle Corporation Hello Two questions: 1. At the moment the fanart has the option to choose 1920 or 1280 or 300 and TMM 3 3840. Is it possible to add the option to download all graphics to the extrafanart folder, regardless of size? And options for downloading many posters to the extrafanart folder? 2. For a week I have a problem with downloading extrafanart, quite often it will download 1 or 2 or not at all. Then, to try again, I have to remove the movie from the database and load it again and again it does not react to downloading. I do not know if this is a TMM or FanartTV problem. With fanartTV pulls hard but manually from the site is OK. [tmm_logs.zip](https://github.com/tinyMediaManager/tinyMediaManager/files/1889457/tmm_logs.zip) description It seems that the problem is with downloading from the site Fanart.TV. Graphics are loading slowly and from Movie.DB quickly. Then when downloading, if TMM encounters a download problem (probably Fanart.TV) - it stops downloading the next graphics. I tried to disable Fanart.TV in settings - from Movie.DB it downloads normally. However, it is a pity some fanarts from Fanart.TV :). It can make an option in the TMM settings to change the order in which the graphics are loaded. Now is the first FanartTV second the moviedb (reverse order)
process
problem with scrape extrafanart version snapshot nightly build os windows jdk oracle corporation hello two questions at the moment the fanart has the option to choose or or and tmm is it possible to add the option to download all graphics to the extrafanart folder regardless of size and options for downloading many posters to the extrafanart folder for a week i have a problem with downloading extrafanart quite often it will download or or not at all then to try again i have to remove the movie from the database and load it again and again it does not react to downloading i do not know if this is a tmm or fanarttv problem with fanarttv pulls hard but manually from the site is ok description it seems that the problem is with downloading from the site fanart tv graphics are loading slowly and from movie db quickly then when downloading if tmm encounters a download problem probably fanart tv it stops downloading the next graphics i tried to disable fanart tv in settings from movie db it downloads normally however it is a pity some fanarts from fanart tv it can make an option in the tmm settings to change the order in which the graphics are loaded now is the first fanarttv second the moviedb reverse order
1
11,523
14,401,833,208
IssuesEvent
2020-12-03 14:12:56
pystatgen/sgkit
https://api.github.com/repos/pystatgen/sgkit
closed
Naming convention for popgen stats and variables
process + tools
#100 added functions ``Fst`` and ``Tajimas_D`` to the API. It's not clear that these are the right naming choices and we should make a conscious choice about what the variable/function naming conventions are. The most obvious choice is to make everything lowercase, (i.e., ``f_st/stats_f_st`` and ``tajimas_d/stats_tajimas_d``), but arguably this is less readable that ``Fst/stats_Fst``. We'll probably end up with a whole pile of stats like Garud's H stats (#231), Patterson's F[2-4] stats, the D, R and R2 LD stats When we use letters to describe statistics, then the case does matter and it may be artificial to force them to be lowercase just for the sake of maintaining conventions ("A foolish consistency is the hobgoblin of little minds"). Ideally, we'd have descriptive names for things like we have for ``diversity`` and ``divergence``, but I don't think that's practical. The established conventions in the field is for "[authors] [combination of letters]" to denote a statistic. The question is, what convention should we adopt for rendering "[authors] [combination of letters]" as a function name/variable name in the output dataset. Related to #232 and #226
1.0
Naming convention for popgen stats and variables - #100 added functions ``Fst`` and ``Tajimas_D`` to the API. It's not clear that these are the right naming choices and we should make a conscious choice about what the variable/function naming conventions are. The most obvious choice is to make everything lowercase, (i.e., ``f_st/stats_f_st`` and ``tajimas_d/stats_tajimas_d``), but arguably this is less readable that ``Fst/stats_Fst``. We'll probably end up with a whole pile of stats like Garud's H stats (#231), Patterson's F[2-4] stats, the D, R and R2 LD stats When we use letters to describe statistics, then the case does matter and it may be artificial to force them to be lowercase just for the sake of maintaining conventions ("A foolish consistency is the hobgoblin of little minds"). Ideally, we'd have descriptive names for things like we have for ``diversity`` and ``divergence``, but I don't think that's practical. The established conventions in the field is for "[authors] [combination of letters]" to denote a statistic. The question is, what convention should we adopt for rendering "[authors] [combination of letters]" as a function name/variable name in the output dataset. Related to #232 and #226
process
naming convention for popgen stats and variables added functions fst and tajimas d to the api it s not clear that these are the right naming choices and we should make a conscious choice about what the variable function naming conventions are the most obvious choice is to make everything lowercase i e f st stats f st and tajimas d stats tajimas d but arguably this is less readable that fst stats fst we ll probably end up with a whole pile of stats like garud s h stats patterson s f stats the d r and ld stats when we use letters to describe statistics then the case does matter and it may be artificial to force them to be lowercase just for the sake of maintaining conventions a foolish consistency is the hobgoblin of little minds ideally we d have descriptive names for things like we have for diversity and divergence but i don t think that s practical the established conventions in the field is for to denote a statistic the question is what convention should we adopt for rendering as a function name variable name in the output dataset related to and
1
7,954
11,137,563,836
IssuesEvent
2019-12-20 19:42:31
openopps/openopps-platform
https://api.github.com/repos/openopps/openopps-platform
closed
Pull/Display the education sort order from USAJOBS One Profile
Apply Process Requirements Ready State Dept.
Who: Applicants What: Pull default education sorting from USAJOBS Why: In order to truly have a one profile Acceptance Criteria: - Pull the sorted education data from USAJOBS One profile including the sort preference - Display the education data sorted in the USAJOBS sort order as the default on the Open Opps application Screenshot of USAJOBS One profile education sorting: ![image.png](https://images.zenhubusercontent.com/59ee08f1a468affe6df7cd6f/c7330240-1bec-4887-8920-c9cd8359d186) Related Tickets: 10/29/19 - TFS 39119 is in production which adds the sort on USAJOBS but it's not available yet in the API. 39267 should add it to the API in the next release
1.0
Pull/Display the education sort order from USAJOBS One Profile - Who: Applicants What: Pull default education sorting from USAJOBS Why: In order to truly have a one profile Acceptance Criteria: - Pull the sorted education data from USAJOBS One profile including the sort preference - Display the education data sorted in the USAJOBS sort order as the default on the Open Opps application Screenshot of USAJOBS One profile education sorting: ![image.png](https://images.zenhubusercontent.com/59ee08f1a468affe6df7cd6f/c7330240-1bec-4887-8920-c9cd8359d186) Related Tickets: 10/29/19 - TFS 39119 is in production which adds the sort on USAJOBS but it's not available yet in the API. 39267 should add it to the API in the next release
process
pull display the education sort order from usajobs one profile who applicants what pull default education sorting from usajobs why in order to truly have a one profile acceptance criteria pull the sorted education data from usajobs one profile including the sort preference displayed the education data sorted in the usajobs sort order as the default on the open opps application screen shot of usajobs one profile education sorting related tickets tfs is in production which adds the sort on usajobs but it s not available yet in the api should add it to the api in the next release
1
13,566
16,105,463,167
IssuesEvent
2021-04-27 14:28:18
asyml/forte
https://api.github.com/repos/asyml/forte
opened
Implement Stave as a processor
topic: processors
**Is your feature request related to a problem? Please describe.** I want to visualize results from the Forte pipeline with Stave in a more streamlined way. Stave should be better integrated with Forte so that I do not need to manually boot a Stave instance every time I need visualization. **Describe the solution you'd like** Implement Stave as a processor that can be inserted into an existing Forte pipeline for immediate visualization. **Describe alternatives you've considered** We can refer to the idea from [Streamlit](https://github.com/streamlit/streamlit). Streamlit uses React as well, so I think it is very possible to use their idea. **Additional context** Example: Original Forte pipeline for serialization: > OntonotesReader -> NLTKSentenceSegmenter -> NLTKWordTokenizer -> NLTKPOSTagger -> PackNameJsonPackWriter To visualize the data, we can insert a `StaveProcessor` into the pipeline: > OntonotesReader -> NLTKSentenceSegmenter -> NLTKWordTokenizer -> NLTKPOSTagger -> **StaveProcessor** -> PackNameJsonPackWriter
1.0
Implement Stave as a processor - **Is your feature request related to a problem? Please describe.** I want to visualize results from the Forte pipeline with Stave in a more streamlined way. Stave should be better integrated with Forte so that I do not need to manually boot a Stave instance every time I need visualization. **Describe the solution you'd like** Implement Stave as a processor that can be inserted into an existing Forte pipeline for immediate visualization. **Describe alternatives you've considered** We can refer to the idea from [Streamlit](https://github.com/streamlit/streamlit). Streamlit uses React as well, so I think it is very possible to use their idea. **Additional context** Example: Original Forte pipeline for serialization: > OntonotesReader -> NLTKSentenceSegmenter -> NLTKWordTokenizer -> NLTKPOSTagger -> PackNameJsonPackWriter To visualize the data, we can insert a `StaveProcessor` into the pipeline: > OntonotesReader -> NLTKSentenceSegmenter -> NLTKWordTokenizer -> NLTKPOSTagger -> **StaveProcessor** -> PackNameJsonPackWriter
process
implement stave as a processor is your feature request related to a problem please describe i want to visualize results from forte pipeline with stave in a more streamlined way stave should be better integrated with forte so that i do not need to manually boot a stave instance every time i need visualization describe the solution you d like implement stave as a processor that can be inserted into an existing forte pipeline for immediate visualization describe alternatives you ve considered can refer to the idea from streamlit uses react as well so i think it is very possible to use their idea additional context example original forte pipeline for serialization ontonotesreader nltksentencesegmenter nltkwordtokenizer nltkpostagger packnamejsonpackwriter to visualize the data we can insert a staveprocessor into the pipeline ontonotesreader nltksentencesegmenter nltkwordtokenizer nltkpostagger staveprocessor packnamejsonpackwriter
1
234,750
25,886,312,345
IssuesEvent
2022-12-14 14:47:48
niklasschreiber/vulner-2
https://api.github.com/repos/niklasschreiber/vulner-2
opened
log4j-1.2.15.jar: 7 vulnerabilities (highest severity is: 9.8)
security vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>log4j-1.2.15.jar</b></p></summary> <p></p> <p>Library home page: <a href="http://archive.apache.org/dist/netbeans/netbeans/11.3/netbeans-11.3-bin.zip">http://archive.apache.org/dist/netbeans/netbeans/11.3/netbeans-11.3-bin.zip</a></p> <p>Path to vulnerable library: /JAVA/mybatis_spring/WebContent/WEB-INF/lib/log4j-1.2.15.jar</p> <p> <p>Found in HEAD commit: <a href="https://github.com/niklasschreiber/vulner-2/commit/be13e70bb5a4fc28ce2d12ffc575caa94e313f69">be13e70bb5a4fc28ce2d12ffc575caa94e313f69</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (log4j version) | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | ------------- | --- | | [CVE-2022-23305](https://www.mend.io/vulnerability-database/CVE-2022-23305) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | log4j-1.2.15.jar | Direct | ch.qos.reload4j:reload4j:1.2.18.2 | &#10060; | | [CVE-2019-17571](https://www.mend.io/vulnerability-database/CVE-2019-17571) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | log4j-1.2.15.jar | Direct | log4j-manual - 1.2.17-16;log4j-javadoc - 1.2.17-16;log4j - 1.2.17-16,1.2.17-16 | &#10060; | | [CVE-2020-9493](https://www.mend.io/vulnerability-database/CVE-2020-9493) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | log4j-1.2.15.jar | Direct | ch.qos.reload4j:reload4j:1.2.18.1 | &#10060; | | [CVE-2022-23307](https://www.mend.io/vulnerability-database/CVE-2022-23307) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 8.8 | 
log4j-1.2.15.jar | Direct | ch.qos.reload4j:reload4j:1.2.18.1 | &#10060; | | [CVE-2022-23302](https://www.mend.io/vulnerability-database/CVE-2022-23302) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 8.8 | log4j-1.2.15.jar | Direct | ch.qos.reload4j:reload4j:1.2.18.1 | &#10060; | | [CVE-2021-4104](https://www.mend.io/vulnerability-database/CVE-2021-4104) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | log4j-1.2.15.jar | Direct | uom-parent - 1.0.3-3.module,1.0.3-3.module;uom-se-javadoc - 1.0.4-3.module;parfait-examples - 0.5.4-4.module;log4j-manual - 1.2.17-16;si-units-javadoc - 0.6.5-2.module;unit-api - 1.0-5.module,1.0-5.module;unit-api-javadoc - 1.0-5.module;parfait - 0.5.4-4.module,0.5.4-4.module;log4j-javadoc - 1.2.17-16;uom-systems-javadoc - 0.7-1.module;uom-lib-javadoc - 1.0.1-6.module;uom-systems - 0.7-1.module,0.7-1.module;log4j - 1.2.17-16,1.2.17-16;uom-se - 1.0.4-3.module,1.0.4-3.module;uom-lib - 1.0.1-6.module,1.0.1-6.module;parfait-javadoc - 0.5.4-4.module;pcp-parfait-agent - 0.5.4-4.module;si-units - 0.6.5-2.module,0.6.5-2.module | &#10060; | | [CVE-2020-9488](https://www.mend.io/vulnerability-database/CVE-2020-9488) | <img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Low | 3.7 | log4j-1.2.15.jar | Direct | ch.qos.reload4j:reload4j:1.2.18.3 | &#10060; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-23305</summary> ### Vulnerable Library - <b>log4j-1.2.15.jar</b></p> <p></p> <p>Library home page: <a href="http://archive.apache.org/dist/netbeans/netbeans/11.3/netbeans-11.3-bin.zip">http://archive.apache.org/dist/netbeans/netbeans/11.3/netbeans-11.3-bin.zip</a></p> <p>Path to vulnerable library: /JAVA/mybatis_spring/WebContent/WEB-INF/lib/log4j-1.2.15.jar</p> <p> Dependency Hierarchy: - :x: 
**log4j-1.2.15.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/niklasschreiber/vulner-2/commit/be13e70bb5a4fc28ce2d12ffc575caa94e313f69">be13e70bb5a4fc28ce2d12ffc575caa94e313f69</a></p> <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> By design, the JDBCAppender in Log4j 1.2.x accepts an SQL statement as a configuration parameter where the values to be inserted are converters from PatternLayout. The message converter, %m, is likely to always be included. This allows attackers to manipulate the SQL by entering crafted strings into input fields or headers of an application that are logged allowing unintended SQL queries to be executed. Note this issue only affects Log4j 1.x when specifically configured to use the JDBCAppender, which is not the default. Beginning in version 2.0-beta8, the JDBCAppender was re-introduced with proper support for parameterized SQL queries and further customization over the columns written to in logs. Apache Log4j 1.2 reached end of life in August 2015. Users should upgrade to Log4j 2 as it addresses numerous other issues from the previous versions. <p>Publish Date: 2022-01-18 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-23305>CVE-2022-23305</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>9.8</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://reload4j.qos.ch/">https://reload4j.qos.ch/</a></p> <p>Release Date: 2022-01-18</p> <p>Fix Resolution: ch.qos.reload4j:reload4j:1.2.18.2</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2019-17571</summary> ### Vulnerable Library - <b>log4j-1.2.15.jar</b></p> <p></p> <p>Library home page: <a href="http://archive.apache.org/dist/netbeans/netbeans/11.3/netbeans-11.3-bin.zip">http://archive.apache.org/dist/netbeans/netbeans/11.3/netbeans-11.3-bin.zip</a></p> <p>Path to vulnerable library: /JAVA/mybatis_spring/WebContent/WEB-INF/lib/log4j-1.2.15.jar</p> <p> Dependency Hierarchy: - :x: **log4j-1.2.15.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/niklasschreiber/vulner-2/commit/be13e70bb5a4fc28ce2d12ffc575caa94e313f69">be13e70bb5a4fc28ce2d12ffc575caa94e313f69</a></p> <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> Included in Log4j 1.2 is a SocketServer class that is vulnerable to deserialization of untrusted data which can be exploited to remotely execute arbitrary code when combined with a deserialization gadget when listening to untrusted network traffic for log data. This affects Log4j versions up to 1.2 up to 1.2.17. 
<p>Publish Date: 2019-12-20 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-17571>CVE-2019-17571</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>9.8</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://lists.apache.org/thread.html/eea03d504b36e8f870e8321d908e1def1addda16adda04327fe7c125%40%3Cdev.logging.apache.org%3E">https://lists.apache.org/thread.html/eea03d504b36e8f870e8321d908e1def1addda16adda04327fe7c125%40%3Cdev.logging.apache.org%3E</a></p> <p>Release Date: 2019-12-20</p> <p>Fix Resolution: log4j-manual - 1.2.17-16;log4j-javadoc - 1.2.17-16;log4j - 1.2.17-16,1.2.17-16</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-9493</summary> ### Vulnerable Library - <b>log4j-1.2.15.jar</b></p> <p></p> <p>Library home page: <a href="http://archive.apache.org/dist/netbeans/netbeans/11.3/netbeans-11.3-bin.zip">http://archive.apache.org/dist/netbeans/netbeans/11.3/netbeans-11.3-bin.zip</a></p> <p>Path to vulnerable library: /JAVA/mybatis_spring/WebContent/WEB-INF/lib/log4j-1.2.15.jar</p> <p> Dependency Hierarchy: - :x: **log4j-1.2.15.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/niklasschreiber/vulner-2/commit/be13e70bb5a4fc28ce2d12ffc575caa94e313f69">be13e70bb5a4fc28ce2d12ffc575caa94e313f69</a></p> <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details 
<p> A deserialization flaw was found in Apache Chainsaw versions prior to 2.1.0 which could lead to malicious code execution. <p>Publish Date: 2021-06-16 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-9493>CVE-2020-9493</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>9.8</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.openwall.com/lists/oss-security/2021/06/16/1">https://www.openwall.com/lists/oss-security/2021/06/16/1</a></p> <p>Release Date: 2021-06-16</p> <p>Fix Resolution: ch.qos.reload4j:reload4j:1.2.18.1</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-23307</summary> ### Vulnerable Library - <b>log4j-1.2.15.jar</b></p> <p></p> <p>Library home page: <a href="http://archive.apache.org/dist/netbeans/netbeans/11.3/netbeans-11.3-bin.zip">http://archive.apache.org/dist/netbeans/netbeans/11.3/netbeans-11.3-bin.zip</a></p> <p>Path to vulnerable library: /JAVA/mybatis_spring/WebContent/WEB-INF/lib/log4j-1.2.15.jar</p> <p> Dependency Hierarchy: - :x: **log4j-1.2.15.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/niklasschreiber/vulner-2/commit/be13e70bb5a4fc28ce2d12ffc575caa94e313f69">be13e70bb5a4fc28ce2d12ffc575caa94e313f69</a></p> <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> CVE-2020-9493 identified a deserialization issue 
that was present in Apache Chainsaw. Prior to Chainsaw V2.0 Chainsaw was a component of Apache Log4j 1.2.x where the same issue exists. <p>Publish Date: 2022-01-18 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-23307>CVE-2022-23307</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>8.8</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Release Date: 2022-01-18</p> <p>Fix Resolution: ch.qos.reload4j:reload4j:1.2.18.1</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-23302</summary> ### Vulnerable Library - <b>log4j-1.2.15.jar</b></p> <p></p> <p>Library home page: <a href="http://archive.apache.org/dist/netbeans/netbeans/11.3/netbeans-11.3-bin.zip">http://archive.apache.org/dist/netbeans/netbeans/11.3/netbeans-11.3-bin.zip</a></p> <p>Path to vulnerable library: /JAVA/mybatis_spring/WebContent/WEB-INF/lib/log4j-1.2.15.jar</p> <p> Dependency Hierarchy: - :x: **log4j-1.2.15.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/niklasschreiber/vulner-2/commit/be13e70bb5a4fc28ce2d12ffc575caa94e313f69">be13e70bb5a4fc28ce2d12ffc575caa94e313f69</a></p> <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> JMSSink in all versions of Log4j 1.x is vulnerable to deserialization of untrusted data when the attacker has write access to the Log4j configuration or if the configuration 
references an LDAP service the attacker has access to. The attacker can provide a TopicConnectionFactoryBindingName configuration causing JMSSink to perform JNDI requests that result in remote code execution in a similar fashion to CVE-2021-4104. Note this issue only affects Log4j 1.x when specifically configured to use JMSSink, which is not the default. Apache Log4j 1.2 reached end of life in August 2015. Users should upgrade to Log4j 2 as it addresses numerous other issues from the previous versions. <p>Publish Date: 2022-01-18 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-23302>CVE-2022-23302</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>8.8</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://reload4j.qos.ch/">https://reload4j.qos.ch/</a></p> <p>Release Date: 2022-01-18</p> <p>Fix Resolution: ch.qos.reload4j:reload4j:1.2.18.1</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-4104</summary> ### Vulnerable Library - <b>log4j-1.2.15.jar</b></p> <p></p> <p>Library home page: <a href="http://archive.apache.org/dist/netbeans/netbeans/11.3/netbeans-11.3-bin.zip">http://archive.apache.org/dist/netbeans/netbeans/11.3/netbeans-11.3-bin.zip</a></p> <p>Path to vulnerable library: /JAVA/mybatis_spring/WebContent/WEB-INF/lib/log4j-1.2.15.jar</p> <p> Dependency Hierarchy: - :x: **log4j-1.2.15.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/niklasschreiber/vulner-2/commit/be13e70bb5a4fc28ce2d12ffc575caa94e313f69">be13e70bb5a4fc28ce2d12ffc575caa94e313f69</a></p> <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> JMSAppender in Log4j 1.2 is vulnerable to deserialization of untrusted data when the attacker has write access to the Log4j configuration. The attacker can provide TopicBindingName and TopicConnectionFactoryBindingName configurations causing JMSAppender to perform JNDI requests that result in remote code execution in a similar fashion to CVE-2021-44228. Note this issue only affects Log4j 1.2 when specifically configured to use JMSAppender, which is not the default. Apache Log4j 1.2 reached end of life in August 2015. Users should upgrade to Log4j 2 as it addresses numerous other issues from the previous versions. 
<p>Publish Date: 2021-12-14 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-4104>CVE-2021-4104</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-4104">https://nvd.nist.gov/vuln/detail/CVE-2021-4104</a></p> <p>Release Date: 2021-12-14</p> <p>Fix Resolution: uom-parent - 1.0.3-3.module,1.0.3-3.module;uom-se-javadoc - 1.0.4-3.module;parfait-examples - 0.5.4-4.module;log4j-manual - 1.2.17-16;si-units-javadoc - 0.6.5-2.module;unit-api - 1.0-5.module,1.0-5.module;unit-api-javadoc - 1.0-5.module;parfait - 0.5.4-4.module,0.5.4-4.module;log4j-javadoc - 1.2.17-16;uom-systems-javadoc - 0.7-1.module;uom-lib-javadoc - 1.0.1-6.module;uom-systems - 0.7-1.module,0.7-1.module;log4j - 1.2.17-16,1.2.17-16;uom-se - 1.0.4-3.module,1.0.4-3.module;uom-lib - 1.0.1-6.module,1.0.1-6.module;parfait-javadoc - 0.5.4-4.module;pcp-parfait-agent - 0.5.4-4.module;si-units - 0.6.5-2.module,0.6.5-2.module</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> CVE-2020-9488</summary> ### Vulnerable Library - <b>log4j-1.2.15.jar</b></p> <p></p> <p>Library home page: <a href="http://archive.apache.org/dist/netbeans/netbeans/11.3/netbeans-11.3-bin.zip">http://archive.apache.org/dist/netbeans/netbeans/11.3/netbeans-11.3-bin.zip</a></p> <p>Path to vulnerable library: 
/JAVA/mybatis_spring/WebContent/WEB-INF/lib/log4j-1.2.15.jar</p> <p> Dependency Hierarchy: - :x: **log4j-1.2.15.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/niklasschreiber/vulner-2/commit/be13e70bb5a4fc28ce2d12ffc575caa94e313f69">be13e70bb5a4fc28ce2d12ffc575caa94e313f69</a></p> <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> Improper validation of certificate with host mismatch in Apache Log4j SMTP appender. This could allow an SMTPS connection to be intercepted by a man-in-the-middle attack which could leak any log messages sent through that appender. Fixed in Apache Log4j 2.12.3 and 2.13.1 <p>Publish Date: 2020-04-27 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-9488>CVE-2020-9488</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>3.7</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://reload4j.qos.ch/">https://reload4j.qos.ch/</a></p> <p>Release Date: 2020-04-27</p> <p>Fix Resolution: ch.qos.reload4j:reload4j:1.2.18.3</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details>
True
log4j-1.2.15.jar: 7 vulnerabilities (highest severity is: 9.8) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>log4j-1.2.15.jar</b></p></summary> <p></p> <p>Library home page: <a href="http://archive.apache.org/dist/netbeans/netbeans/11.3/netbeans-11.3-bin.zip">http://archive.apache.org/dist/netbeans/netbeans/11.3/netbeans-11.3-bin.zip</a></p> <p>Path to vulnerable library: /JAVA/mybatis_spring/WebContent/WEB-INF/lib/log4j-1.2.15.jar</p> <p> <p>Found in HEAD commit: <a href="https://github.com/niklasschreiber/vulner-2/commit/be13e70bb5a4fc28ce2d12ffc575caa94e313f69">be13e70bb5a4fc28ce2d12ffc575caa94e313f69</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (log4j version) | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | ------------- | --- | | [CVE-2022-23305](https://www.mend.io/vulnerability-database/CVE-2022-23305) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | log4j-1.2.15.jar | Direct | ch.qos.reload4j:reload4j:1.2.18.2 | &#10060; | | [CVE-2019-17571](https://www.mend.io/vulnerability-database/CVE-2019-17571) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | log4j-1.2.15.jar | Direct | log4j-manual - 1.2.17-16;log4j-javadoc - 1.2.17-16;log4j - 1.2.17-16,1.2.17-16 | &#10060; | | [CVE-2020-9493](https://www.mend.io/vulnerability-database/CVE-2020-9493) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | log4j-1.2.15.jar | Direct | ch.qos.reload4j:reload4j:1.2.18.1 | &#10060; | | [CVE-2022-23307](https://www.mend.io/vulnerability-database/CVE-2022-23307) | <img 
src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 8.8 | log4j-1.2.15.jar | Direct | ch.qos.reload4j:reload4j:1.2.18.1 | &#10060; | | [CVE-2022-23302](https://www.mend.io/vulnerability-database/CVE-2022-23302) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 8.8 | log4j-1.2.15.jar | Direct | ch.qos.reload4j:reload4j:1.2.18.1 | &#10060; | | [CVE-2021-4104](https://www.mend.io/vulnerability-database/CVE-2021-4104) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | log4j-1.2.15.jar | Direct | uom-parent - 1.0.3-3.module,1.0.3-3.module;uom-se-javadoc - 1.0.4-3.module;parfait-examples - 0.5.4-4.module;log4j-manual - 1.2.17-16;si-units-javadoc - 0.6.5-2.module;unit-api - 1.0-5.module,1.0-5.module;unit-api-javadoc - 1.0-5.module;parfait - 0.5.4-4.module,0.5.4-4.module;log4j-javadoc - 1.2.17-16;uom-systems-javadoc - 0.7-1.module;uom-lib-javadoc - 1.0.1-6.module;uom-systems - 0.7-1.module,0.7-1.module;log4j - 1.2.17-16,1.2.17-16;uom-se - 1.0.4-3.module,1.0.4-3.module;uom-lib - 1.0.1-6.module,1.0.1-6.module;parfait-javadoc - 0.5.4-4.module;pcp-parfait-agent - 0.5.4-4.module;si-units - 0.6.5-2.module,0.6.5-2.module | &#10060; | | [CVE-2020-9488](https://www.mend.io/vulnerability-database/CVE-2020-9488) | <img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Low | 3.7 | log4j-1.2.15.jar | Direct | ch.qos.reload4j:reload4j:1.2.18.3 | &#10060; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-23305</summary> ### Vulnerable Library - <b>log4j-1.2.15.jar</b></p> <p></p> <p>Library home page: <a href="http://archive.apache.org/dist/netbeans/netbeans/11.3/netbeans-11.3-bin.zip">http://archive.apache.org/dist/netbeans/netbeans/11.3/netbeans-11.3-bin.zip</a></p> <p>Path to vulnerable 
library: /JAVA/mybatis_spring/WebContent/WEB-INF/lib/log4j-1.2.15.jar</p> <p> Dependency Hierarchy: - :x: **log4j-1.2.15.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/niklasschreiber/vulner-2/commit/be13e70bb5a4fc28ce2d12ffc575caa94e313f69">be13e70bb5a4fc28ce2d12ffc575caa94e313f69</a></p> <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> By design, the JDBCAppender in Log4j 1.2.x accepts an SQL statement as a configuration parameter where the values to be inserted are converters from PatternLayout. The message converter, %m, is likely to always be included. This allows attackers to manipulate the SQL by entering crafted strings into input fields or headers of an application that are logged allowing unintended SQL queries to be executed. Note this issue only affects Log4j 1.x when specifically configured to use the JDBCAppender, which is not the default. Beginning in version 2.0-beta8, the JDBCAppender was re-introduced with proper support for parameterized SQL queries and further customization over the columns written to in logs. Apache Log4j 1.2 reached end of life in August 2015. Users should upgrade to Log4j 2 as it addresses numerous other issues from the previous versions. <p>Publish Date: 2022-01-18 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-23305>CVE-2022-23305</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>9.8</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://reload4j.qos.ch/">https://reload4j.qos.ch/</a></p> <p>Release Date: 2022-01-18</p> <p>Fix Resolution: ch.qos.reload4j:reload4j:1.2.18.2</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2019-17571</summary> ### Vulnerable Library - <b>log4j-1.2.15.jar</b></p> <p></p> <p>Library home page: <a href="http://archive.apache.org/dist/netbeans/netbeans/11.3/netbeans-11.3-bin.zip">http://archive.apache.org/dist/netbeans/netbeans/11.3/netbeans-11.3-bin.zip</a></p> <p>Path to vulnerable library: /JAVA/mybatis_spring/WebContent/WEB-INF/lib/log4j-1.2.15.jar</p> <p> Dependency Hierarchy: - :x: **log4j-1.2.15.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/niklasschreiber/vulner-2/commit/be13e70bb5a4fc28ce2d12ffc575caa94e313f69">be13e70bb5a4fc28ce2d12ffc575caa94e313f69</a></p> <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> Included in Log4j 1.2 is a SocketServer class that is vulnerable to deserialization of untrusted data which can be exploited to remotely execute arbitrary code when combined with a deserialization gadget when listening to untrusted network traffic for log data. This affects Log4j versions up to 1.2 up to 1.2.17. 
<p>Publish Date: 2019-12-20 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-17571>CVE-2019-17571</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>9.8</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://lists.apache.org/thread.html/eea03d504b36e8f870e8321d908e1def1addda16adda04327fe7c125%40%3Cdev.logging.apache.org%3E">https://lists.apache.org/thread.html/eea03d504b36e8f870e8321d908e1def1addda16adda04327fe7c125%40%3Cdev.logging.apache.org%3E</a></p> <p>Release Date: 2019-12-20</p> <p>Fix Resolution: log4j-manual - 1.2.17-16;log4j-javadoc - 1.2.17-16;log4j - 1.2.17-16,1.2.17-16</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-9493</summary> ### Vulnerable Library - <b>log4j-1.2.15.jar</b></p> <p></p> <p>Library home page: <a href="http://archive.apache.org/dist/netbeans/netbeans/11.3/netbeans-11.3-bin.zip">http://archive.apache.org/dist/netbeans/netbeans/11.3/netbeans-11.3-bin.zip</a></p> <p>Path to vulnerable library: /JAVA/mybatis_spring/WebContent/WEB-INF/lib/log4j-1.2.15.jar</p> <p> Dependency Hierarchy: - :x: **log4j-1.2.15.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/niklasschreiber/vulner-2/commit/be13e70bb5a4fc28ce2d12ffc575caa94e313f69">be13e70bb5a4fc28ce2d12ffc575caa94e313f69</a></p> <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details 
<p> A deserialization flaw was found in Apache Chainsaw versions prior to 2.1.0 which could lead to malicious code execution. <p>Publish Date: 2021-06-16 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-9493>CVE-2020-9493</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>9.8</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.openwall.com/lists/oss-security/2021/06/16/1">https://www.openwall.com/lists/oss-security/2021/06/16/1</a></p> <p>Release Date: 2021-06-16</p> <p>Fix Resolution: ch.qos.reload4j:reload4j:1.2.18.1</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-23307</summary> ### Vulnerable Library - <b>log4j-1.2.15.jar</b></p> <p></p> <p>Library home page: <a href="http://archive.apache.org/dist/netbeans/netbeans/11.3/netbeans-11.3-bin.zip">http://archive.apache.org/dist/netbeans/netbeans/11.3/netbeans-11.3-bin.zip</a></p> <p>Path to vulnerable library: /JAVA/mybatis_spring/WebContent/WEB-INF/lib/log4j-1.2.15.jar</p> <p> Dependency Hierarchy: - :x: **log4j-1.2.15.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/niklasschreiber/vulner-2/commit/be13e70bb5a4fc28ce2d12ffc575caa94e313f69">be13e70bb5a4fc28ce2d12ffc575caa94e313f69</a></p> <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> CVE-2020-9493 identified a deserialization issue 
that was present in Apache Chainsaw. Prior to Chainsaw V2.0 Chainsaw was a component of Apache Log4j 1.2.x where the same issue exists. <p>Publish Date: 2022-01-18 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-23307>CVE-2022-23307</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>8.8</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Release Date: 2022-01-18</p> <p>Fix Resolution: ch.qos.reload4j:reload4j:1.2.18.1</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-23302</summary> ### Vulnerable Library - <b>log4j-1.2.15.jar</b></p> <p></p> <p>Library home page: <a href="http://archive.apache.org/dist/netbeans/netbeans/11.3/netbeans-11.3-bin.zip">http://archive.apache.org/dist/netbeans/netbeans/11.3/netbeans-11.3-bin.zip</a></p> <p>Path to vulnerable library: /JAVA/mybatis_spring/WebContent/WEB-INF/lib/log4j-1.2.15.jar</p> <p> Dependency Hierarchy: - :x: **log4j-1.2.15.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/niklasschreiber/vulner-2/commit/be13e70bb5a4fc28ce2d12ffc575caa94e313f69">be13e70bb5a4fc28ce2d12ffc575caa94e313f69</a></p> <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> JMSSink in all versions of Log4j 1.x is vulnerable to deserialization of untrusted data when the attacker has write access to the Log4j configuration or if the configuration 
references an LDAP service the attacker has access to. The attacker can provide a TopicConnectionFactoryBindingName configuration causing JMSSink to perform JNDI requests that result in remote code execution in a similar fashion to CVE-2021-4104. Note this issue only affects Log4j 1.x when specifically configured to use JMSSink, which is not the default. Apache Log4j 1.2 reached end of life in August 2015. Users should upgrade to Log4j 2 as it addresses numerous other issues from the previous versions. <p>Publish Date: 2022-01-18 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-23302>CVE-2022-23302</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>8.8</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://reload4j.qos.ch/">https://reload4j.qos.ch/</a></p> <p>Release Date: 2022-01-18</p> <p>Fix Resolution: ch.qos.reload4j:reload4j:1.2.18.1</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-4104</summary> ### Vulnerable Library - <b>log4j-1.2.15.jar</b></p> <p></p> <p>Library home page: <a href="http://archive.apache.org/dist/netbeans/netbeans/11.3/netbeans-11.3-bin.zip">http://archive.apache.org/dist/netbeans/netbeans/11.3/netbeans-11.3-bin.zip</a></p> <p>Path to vulnerable library: /JAVA/mybatis_spring/WebContent/WEB-INF/lib/log4j-1.2.15.jar</p> <p> Dependency Hierarchy: - :x: **log4j-1.2.15.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/niklasschreiber/vulner-2/commit/be13e70bb5a4fc28ce2d12ffc575caa94e313f69">be13e70bb5a4fc28ce2d12ffc575caa94e313f69</a></p> <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> JMSAppender in Log4j 1.2 is vulnerable to deserialization of untrusted data when the attacker has write access to the Log4j configuration. The attacker can provide TopicBindingName and TopicConnectionFactoryBindingName configurations causing JMSAppender to perform JNDI requests that result in remote code execution in a similar fashion to CVE-2021-44228. Note this issue only affects Log4j 1.2 when specifically configured to use JMSAppender, which is not the default. Apache Log4j 1.2 reached end of life in August 2015. Users should upgrade to Log4j 2 as it addresses numerous other issues from the previous versions. 
<p>Publish Date: 2021-12-14 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-4104>CVE-2021-4104</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-4104">https://nvd.nist.gov/vuln/detail/CVE-2021-4104</a></p> <p>Release Date: 2021-12-14</p> <p>Fix Resolution: uom-parent - 1.0.3-3.module,1.0.3-3.module;uom-se-javadoc - 1.0.4-3.module;parfait-examples - 0.5.4-4.module;log4j-manual - 1.2.17-16;si-units-javadoc - 0.6.5-2.module;unit-api - 1.0-5.module,1.0-5.module;unit-api-javadoc - 1.0-5.module;parfait - 0.5.4-4.module,0.5.4-4.module;log4j-javadoc - 1.2.17-16;uom-systems-javadoc - 0.7-1.module;uom-lib-javadoc - 1.0.1-6.module;uom-systems - 0.7-1.module,0.7-1.module;log4j - 1.2.17-16,1.2.17-16;uom-se - 1.0.4-3.module,1.0.4-3.module;uom-lib - 1.0.1-6.module,1.0.1-6.module;parfait-javadoc - 0.5.4-4.module;pcp-parfait-agent - 0.5.4-4.module;si-units - 0.6.5-2.module,0.6.5-2.module</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> CVE-2020-9488</summary> ### Vulnerable Library - <b>log4j-1.2.15.jar</b></p> <p></p> <p>Library home page: <a href="http://archive.apache.org/dist/netbeans/netbeans/11.3/netbeans-11.3-bin.zip">http://archive.apache.org/dist/netbeans/netbeans/11.3/netbeans-11.3-bin.zip</a></p> <p>Path to vulnerable library: 
/JAVA/mybatis_spring/WebContent/WEB-INF/lib/log4j-1.2.15.jar</p> <p> Dependency Hierarchy: - :x: **log4j-1.2.15.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/niklasschreiber/vulner-2/commit/be13e70bb5a4fc28ce2d12ffc575caa94e313f69">be13e70bb5a4fc28ce2d12ffc575caa94e313f69</a></p> <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> Improper validation of certificate with host mismatch in Apache Log4j SMTP appender. This could allow an SMTPS connection to be intercepted by a man-in-the-middle attack which could leak any log messages sent through that appender. Fixed in Apache Log4j 2.12.3 and 2.13.1 <p>Publish Date: 2020-04-27 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-9488>CVE-2020-9488</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>3.7</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://reload4j.qos.ch/">https://reload4j.qos.ch/</a></p> <p>Release Date: 2020-04-27</p> <p>Fix Resolution: ch.qos.reload4j:reload4j:1.2.18.3</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details>
non_process
jar vulnerabilities highest severity is vulnerable library jar library home page a href path to vulnerable library java mybatis spring webcontent web inf lib jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in version remediation available high jar direct ch qos high jar direct manual javadoc high jar direct ch qos high jar direct ch qos high jar direct ch qos high jar direct uom parent module module uom se javadoc module parfait examples module manual si units javadoc module unit api module module unit api javadoc module parfait module module javadoc uom systems javadoc module uom lib javadoc module uom systems module module uom se module module uom lib module module parfait javadoc module pcp parfait agent module si units module module low jar direct ch qos details cve vulnerable library jar library home page a href path to vulnerable library java mybatis spring webcontent web inf lib jar dependency hierarchy x jar vulnerable library found in head commit a href found in base branch main vulnerability details by design the jdbcappender in x accepts an sql statement as a configuration parameter where the values to be inserted are converters from patternlayout the message converter m is likely to always be included this allows attackers to manipulate the sql by entering crafted strings into input fields or headers of an application that are logged allowing unintended sql queries to be executed note this issue only affects x when specifically configured to use the jdbcappender which is not the default beginning in version the jdbcappender was re introduced with proper support for parameterized sql queries and further customization over the columns written to in logs apache reached end of life in august users should upgrade to as it addresses numerous other issues from the previous versions publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges 
required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ch qos step up your open source security game with mend cve vulnerable library jar library home page a href path to vulnerable library java mybatis spring webcontent web inf lib jar dependency hierarchy x jar vulnerable library found in head commit a href found in base branch main vulnerability details included in is a socketserver class that is vulnerable to deserialization of untrusted data which can be exploited to remotely execute arbitrary code when combined with a deserialization gadget when listening to untrusted network traffic for log data this affects versions up to up to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution manual javadoc step up your open source security game with mend cve vulnerable library jar library home page a href path to vulnerable library java mybatis spring webcontent web inf lib jar dependency hierarchy x jar vulnerable library found in head commit a href found in base branch main vulnerability details a deserialization flaw was found in apache chainsaw versions prior to which could lead to malicious code execution publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores 
click a href suggested fix type upgrade version origin a href release date fix resolution ch qos step up your open source security game with mend cve vulnerable library jar library home page a href path to vulnerable library java mybatis spring webcontent web inf lib jar dependency hierarchy x jar vulnerable library found in head commit a href found in base branch main vulnerability details cve identified a deserialization issue that was present in apache chainsaw prior to chainsaw chainsaw was a component of apache x where the same issue exists publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution ch qos step up your open source security game with mend cve vulnerable library jar library home page a href path to vulnerable library java mybatis spring webcontent web inf lib jar dependency hierarchy x jar vulnerable library found in head commit a href found in base branch main vulnerability details jmssink in all versions of x is vulnerable to deserialization of untrusted data when the attacker has write access to the configuration or if the configuration references an ldap service the attacker has access to the attacker can provide a topicconnectionfactorybindingname configuration causing jmssink to perform jndi requests that result in remote code execution in a similar fashion to cve note this issue only affects x when specifically configured to use jmssink which is not the default apache reached end of life in august users should upgrade to as it addresses numerous other issues from the previous versions publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low 
privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ch qos step up your open source security game with mend cve vulnerable library jar library home page a href path to vulnerable library java mybatis spring webcontent web inf lib jar dependency hierarchy x jar vulnerable library found in head commit a href found in base branch main vulnerability details jmsappender in is vulnerable to deserialization of untrusted data when the attacker has write access to the configuration the attacker can provide topicbindingname and topicconnectionfactorybindingname configurations causing jmsappender to perform jndi requests that result in remote code execution in a similar fashion to cve note this issue only affects when specifically configured to use jmsappender which is not the default apache reached end of life in august users should upgrade to as it addresses numerous other issues from the previous versions publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution uom parent module module uom se javadoc module parfait examples module manual si units javadoc module unit api module module unit api javadoc module parfait module module javadoc uom systems javadoc module uom lib javadoc module uom systems module module uom se module module uom lib module module parfait javadoc module pcp parfait agent module si units module module step up your open source security game with mend cve vulnerable library jar library home page a 
href path to vulnerable library java mybatis spring webcontent web inf lib jar dependency hierarchy x jar vulnerable library found in head commit a href found in base branch main vulnerability details improper validation of certificate with host mismatch in apache smtp appender this could allow an smtps connection to be intercepted by a man in the middle attack which could leak any log messages sent through that appender fixed in apache and publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ch qos step up your open source security game with mend
0
13,707
16,464,257,540
IssuesEvent
2021-05-22 04:31:04
neuropsychology/NeuroKit
https://api.github.com/repos/neuropsychology/NeuroKit
closed
More PSD methods
inactive 👻 low priority :sleeping: signal processing :chart_with_upwards_trend:
## Non-parametric methods ### Fourier-based methods - [x] Welch (wrapper around Scipy) - [x] multitapers (wrapper around mne) - [x] Lomb-Scargle (in [pyHRV](https://github.com/PGomes92/pyhrv#frequency-domain-parameters) - around scipy) ### Minimum of variance MV (Capon) - [ ] Capon ### Eigenvalues Decomposition ### Maximum Entropy ## Parametric methods ### Autoregressive (AR) parameters - [x] burg (in [spectrum](https://github.com/cokelaer/spectrum)) - [ ] Yule-Walker (in [pyHRV](https://github.com/PGomes92/pyhrv#frequency-domain-parameters) and [spectrum](https://github.com/cokelaer/spectrum)) - [ ] least squares linear prediction *note: spectrum seems to have problems of installation*
1.0
More PSD methods - ## Non-parametric methods ### Fourier-based methods - [x] Welch (wrapper around Scipy) - [x] multitapers (wrapper around mne) - [x] Lomb-Scargle (in [pyHRV](https://github.com/PGomes92/pyhrv#frequency-domain-parameters) - around scipy) ### Minimum of variance MV (Capon) - [ ] Capon ### Eigenvalues Decomposition ### Maximum Entropy ## Parametric methods ### Autoregressive (AR) parameters - [x] burg (in [spectrum](https://github.com/cokelaer/spectrum)) - [ ] Yule-Walker (in [pyHRV](https://github.com/PGomes92/pyhrv#frequency-domain-parameters) and [spectrum](https://github.com/cokelaer/spectrum)) - [ ] least squares linear prediction *note: spectrum seems to have problems of installation*
process
more psd methods non parametric methods fourier based methods welch wrapper around scipy multitapers wrapper around mne lomb scargle in around scipy minimum of variance mv capon capon eigenvalues decomposition maximum entropy parametric methods autoregressive ar parameters burg in yule walker in and least squares linear prediction note spectrum seems to have problems of installation
1
22,560
31,778,294,732
IssuesEvent
2023-09-12 15:39:35
IMAP-Science-Operations-Center/imap_processing
https://api.github.com/repos/IMAP-Science-Operations-Center/imap_processing
opened
[L5] Produce L1C CDF formatted data products for IMAP-Ultra
IMAP-Ultra L1 Requirement: Level 5 Untested Parent Req:SDC Execution of L1 Processing Software
### Summary of the L5 requirement The SDC needs to produce L1C CDF formatted data products for the IMAP-Ultra instrument. The data products for L1B can be found in the IMAP-Ultra algorithm document here: https://lasp.colorado.edu/galaxy/display/IMAP/IMAP+Algorithm+Document+from+Instrument+Teams ### Parent requirement SOC-SDC-L4-29 | SDC Execution of L1 Processing Software | The SDC shall be capable of executing L1 processing software to produce L1 data products within seven (7) days of receipt of all requisite data inputs. | Requirement | The SDC produces L1 data products via Instrument Team supplied algorithms and SDC produced processing code. | SOC-L3-57 | Test | SIT-3: L0 -> L1 processing | SIT-3 -- | -- | -- | -- | -- | -- | -- | -- | -- ### Tasks Create these L1C data products: - [ ]
1.0
[L5] Produce L1C CDF formatted data products for IMAP-Ultra - ### Summary of the L5 requirement The SDC needs to produce L1C CDF formatted data products for the IMAP-Ultra instrument. The data products for L1B can be found in the IMAP-Ultra algorithm document here: https://lasp.colorado.edu/galaxy/display/IMAP/IMAP+Algorithm+Document+from+Instrument+Teams ### Parent requirement SOC-SDC-L4-29 | SDC Execution of L1 Processing Software | The SDC shall be capable of executing L1 processing software to produce L1 data products within seven (7) days of receipt of all requisite data inputs. | Requirement | The SDC produces L1 data products via Instrument Team supplied algorithms and SDC produced processing code. | SOC-L3-57 | Test | SIT-3: L0 -> L1 processing | SIT-3 -- | -- | -- | -- | -- | -- | -- | -- | -- ### Tasks Create these L1C data products: - [ ]
process
produce cdf formatted data products for imap ultra summary of the requirement the sdc needs to produce cdf formatted data products for the imap ultra instrument the data products for can be found in the imap ultra algorithm document here parent requirement soc sdc sdc execution of processing software the sdc shall be capable of executing processing software to produce data products within seven days of receipt of all requisite data inputs requirement the sdc produces data products via instrument team supplied algorithms and sdc produced processing code soc test sit processing sit tasks create these data products
1
249,421
7,961,637,250
IssuesEvent
2018-07-13 11:33:10
Microsoft/pai
https://api.github.com/repos/Microsoft/pai
closed
LauncherAM cannot log file to container dir on Win
enhancement framework-launcher low priority review needed
LauncherAM cannot log file to container dir on Win. This is due to log4j 1.x does not support cross platform env reference. ![image](https://user-images.githubusercontent.com/32826762/34508414-feedc8f6-f079-11e7-9ac7-39547811c8eb.png)
1.0
LauncherAM cannot log file to container dir on Win - LauncherAM cannot log file to container dir on Win. This is due to log4j 1.x does not support cross platform env reference. ![image](https://user-images.githubusercontent.com/32826762/34508414-feedc8f6-f079-11e7-9ac7-39547811c8eb.png)
non_process
launcheram cannot log file to container dir on win launcheram cannot log file to container dir on win this is due to x does not support cross platform env reference
0
6,621
9,725,373,566
IssuesEvent
2019-05-30 08:31:23
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
Missing parameter in batch processing of 'Select by location' and missing scaling of fields
Feature Request Processing
Author Name: **Mie Winstrup** (Mie Winstrup) Original Redmine Issue: [19767](https://issues.qgis.org/issues/19767) Redmine category:processing/core --- When using the 'Select by Location' tool I can choose only to use the selected features: ![SelectByLocation_batchProcessing.png](https://issues.qgis.org/attachments/download/13241/SelectByLocation_batchProcessing.png) But when I choose to run this tool as a batch process this option is not there - it is also not possibel to drag in the field with 'geometric predicate': ![SelectByLocation_batchProcessing.png](https://issues.qgis.org/attachments/download/13241/SelectByLocation_batchProcessing.png) It would be nice if the same options when running as a batch process and that the scaling of the field works. --- - [SelectByLocation_batchProcessing.png](https://issues.qgis.org/attachments/download/13241/SelectByLocation_batchProcessing.png) (Mie Winstrup) - [SelectByLocation.png](https://issues.qgis.org/attachments/download/13242/SelectByLocation.png) (Mie Winstrup)
1.0
Missing parameter in batch processing of 'Select by location' and missing scaling of fields - Author Name: **Mie Winstrup** (Mie Winstrup) Original Redmine Issue: [19767](https://issues.qgis.org/issues/19767) Redmine category:processing/core --- When using the 'Select by Location' tool I can choose only to use the selected features: ![SelectByLocation_batchProcessing.png](https://issues.qgis.org/attachments/download/13241/SelectByLocation_batchProcessing.png) But when I choose to run this tool as a batch process this option is not there - it is also not possibel to drag in the field with 'geometric predicate': ![SelectByLocation_batchProcessing.png](https://issues.qgis.org/attachments/download/13241/SelectByLocation_batchProcessing.png) It would be nice if the same options when running as a batch process and that the scaling of the field works. --- - [SelectByLocation_batchProcessing.png](https://issues.qgis.org/attachments/download/13241/SelectByLocation_batchProcessing.png) (Mie Winstrup) - [SelectByLocation.png](https://issues.qgis.org/attachments/download/13242/SelectByLocation.png) (Mie Winstrup)
process
missing parameter in batch processing of select by location and missing scaling of fields author name mie winstrup mie winstrup original redmine issue redmine category processing core when using the select by location tool i can choose only to use the selected features but when i choose to run this tool as a batch process this option is not there it is also not possibel to drag in the field with geometric predicate it would be nice if the same options when running as a batch process and that the scaling of the field works mie winstrup mie winstrup
1
168,634
26,675,939,780
IssuesEvent
2023-01-26 14:16:17
open-sauced/insights
https://api.github.com/repos/open-sauced/insights
opened
Feature: Onboarding needs interests instead of repos
💡 feature 🖍 needs design
### Type of feature 🍕 Feature ### Current behavior The current onboarding asks for repos and an org to be selected. <img width="525" alt="Screen Shot 2023-01-26 at 6 10 32 AM" src="https://user-images.githubusercontent.com/5713670/214857680-35054fbc-688c-4658-bcb2-1a20d90f9c44.png"> ### Suggested solution We should ask for interests from a predefined list. A user should be able to skip this step. ### Additional context _No response_ ### Code of Conduct - [X] I agree to follow this project's Code of Conduct ### Contributing Docs - [ ] I agree to follow this project's Contribution Docs
1.0
Feature: Onboarding needs interests instead of repos - ### Type of feature 🍕 Feature ### Current behavior The current onboarding asks for repos and an org to be selected. <img width="525" alt="Screen Shot 2023-01-26 at 6 10 32 AM" src="https://user-images.githubusercontent.com/5713670/214857680-35054fbc-688c-4658-bcb2-1a20d90f9c44.png"> ### Suggested solution We should ask for interests from a predefined list. A user should be able to skip this step. ### Additional context _No response_ ### Code of Conduct - [X] I agree to follow this project's Code of Conduct ### Contributing Docs - [ ] I agree to follow this project's Contribution Docs
non_process
feature onboarding needs interests instead of repos type of feature 🍕 feature current behavior the current onboarding asks for repos and an org to be selected img width alt screen shot at am src suggested solution we should ask for interests from a predefined list a user should be able to skip this step additional context no response code of conduct i agree to follow this project s code of conduct contributing docs i agree to follow this project s contribution docs
0
824,258
31,146,702,816
IssuesEvent
2023-08-16 07:04:15
red-hat-storage/ocs-ci
https://api.github.com/repos/red-hat-storage/ocs-ci
closed
Add proper assertion in test_bucket_replication tests
High Priority MCG Squad/Red
In `test_bucket_replication.py` , `compare_bucket_object_list` call needs to be asserted properly.
1.0
Add proper assertion in test_bucket_replication tests - In `test_bucket_replication.py` , `compare_bucket_object_list` call needs to be asserted properly.
non_process
add proper assertion in test bucket replication tests in test bucket replication py compare bucket object list call needs to be asserted properly
0
12,003
14,738,156,774
IssuesEvent
2021-01-07 03:55:37
kdjstudios/SABillingGitlab
https://api.github.com/repos/kdjstudios/SABillingGitlab
closed
001 NCSM - Unable to change cycle on Atlanta NCSM account
anc-ops anc-process anp-1 ant-bug ant-parent/primary ant-support
In GitLab by @kdjstudios on May 9, 2018, 09:38 **Submitted by:** "Jesus Corchado" <jesus.corchado@answernet.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-05-09-93738/conversation **Server:** Internal **Client/Site:** NCSM - 53 **Account:** N0020 **Issue:** I am having trouble with one account in Atlanta. I tried changing the cycle several times to the NCSM 1st cycle accounts but every time I click save, it does not make the change as stays as it was before. Can you please help me out on this one? It’s for account 053-N0020.
1.0
001 NCSM - Unable to change cycle on Atlanta NCSM account - In GitLab by @kdjstudios on May 9, 2018, 09:38 **Submitted by:** "Jesus Corchado" <jesus.corchado@answernet.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-05-09-93738/conversation **Server:** Internal **Client/Site:** NCSM - 53 **Account:** N0020 **Issue:** I am having trouble with one account in Atlanta. I tried changing the cycle several times to the NCSM 1st cycle accounts but every time I click save, it does not make the change as stays as it was before. Can you please help me out on this one? It’s for account 053-N0020.
process
ncsm unable to change cycle on atlanta ncsm account in gitlab by kdjstudios on may submitted by jesus corchado helpdesk server internal client site ncsm account issue i am having trouble with one account in atlanta i tried changing the cycle several times to the ncsm cycle accounts but every time i click save it does not make the change as stays as it was before can you please help me out on this one it’s for account
1
499,252
14,443,648,266
IssuesEvent
2020-12-07 19:58:56
Energy-Innovation/eps-us
https://api.github.com/repos/Energy-Innovation/eps-us
closed
ELF for hydrogen should use the district heat and hydrogen sector subscript, not industry
high priority
Hydrogen supply is likely to be very flexible. To be able to model this correctly without updates to our demand response methodology (mentioned in the previous issue), we should use the district heat and hydrogen subscript, not the industry subscript to which it is currently mapped (we should map it this way anyway, even without the DR update). ELF has been updated to account for this but the structure needs a slight fix. This is needed for the US 3.1 model to work correctly. For TX, we can use the industrial values for district heat and hydrogen to avoid changing their data.
1.0
ELF for hydrogen should use the district heat and hydrogen sector subscript, not industry - Hydrogen supply is likely to be very flexible. To be able to model this correctly without updates to our demand response methodology (mentioned in the previous issue), we should use the district heat and hydrogen subscript, not the industry subscript to which it is currently mapped (we should map it this way anyway, even without the DR update). ELF has been updated to account for this but the structure needs a slight fix. This is needed for the US 3.1 model to work correctly. For TX, we can use the industrial values for district heat and hydrogen to avoid changing their data.
non_process
elf for hydrogen should use the district heat and hydrogen sector subscript not industry hydrogen supply is likely to be very flexible to be able to model this correctly without updates to our demand response methodology mentioned in the previous issue we should use the district heat and hydrogen subscript not the industry subscript to which it is currently mapped we should map it this way anyway even without the dr update elf has been updated to account for this but the structure needs a slight fix this is needed for the us model to work correctly for tx we can use the industrial values for district heat and hydrogen to avoid changing their data
0
15,547
19,703,502,056
IssuesEvent
2022-01-12 19:07:55
googleapis/java-monitoring-dashboards
https://api.github.com/repos/googleapis/java-monitoring-dashboards
opened
Your .repo-metadata.json file has a problem 🤒
type: process repo-metadata: lint
You have a problem with your .repo-metadata.json file: Result of scan 📈: * release_level must be equal to one of the allowed values in .repo-metadata.json * api_shortname 'monitoring-dashboards' invalid in .repo-metadata.json ☝️ Once you correct these problems, you can close this issue. Reach out to **go/github-automation** if you have any questions.
1.0
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file: Result of scan 📈: * release_level must be equal to one of the allowed values in .repo-metadata.json * api_shortname 'monitoring-dashboards' invalid in .repo-metadata.json ☝️ Once you correct these problems, you can close this issue. Reach out to **go/github-automation** if you have any questions.
process
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 release level must be equal to one of the allowed values in repo metadata json api shortname monitoring dashboards invalid in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
1
139,612
31,714,700,459
IssuesEvent
2023-09-09 18:12:22
Brandman88/IDPs.hydro
https://api.github.com/repos/Brandman88/IDPs.hydro
opened
remove email
Priority Subclass: 1 Priority Class: Medium Code Base Impact: Trivial Expected Time Requirement: Minutes
name: Remove email from global code about: remove my email so anyone using code does not spam my email. Relation to: run.sh,Kill.sh
1.0
remove email - name: Remove email from global code about: remove my email so anyone using code does not spam my email. Relation to: run.sh,Kill.sh
non_process
remove email name remove email from global code about remove my email so anyone using code does not spam my email relation to run sh kill sh
0
4,543
7,374,850,968
IssuesEvent
2018-03-13 21:44:07
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
You can deploy to only five resource groups in a single deployment ?
azure-resource-manager cxp in-process product-question triaged
In the note above there is a mention of being able to..... "You can deploy to only five resource groups in a single deployment." In this scenario, does this include the one you specify as part of the new-azurermresourcegroupdeployment command ? If so, can this be clarified ? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 5a327806-354a-df02-bfef-91a912750d1a * Version Independent ID: 4aae591f-bacf-d40b-b939-a11ca7c8d9db * Content: [Deploy Azure resources to multiple subscription and resource groups | Microsoft Docs](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-cross-resource-group-deployment) * Content Source: [articles/azure-resource-manager/resource-manager-cross-resource-group-deployment.md](https://github.com/Microsoft/azure-docs/blob/master/articles/azure-resource-manager/resource-manager-cross-resource-group-deployment.md) * Service: **azure-resource-manager** * GitHub Login: @tfitzmac * Microsoft Alias: **tomfitz**
1.0
You can deploy to only five resource groups in a single deployment ? - In the note above there is a mention of being able to..... "You can deploy to only five resource groups in a single deployment." In this scenario, does this include the one you specify as part of the new-azurermresourcegroupdeployment command ? If so, can this be clarified ? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 5a327806-354a-df02-bfef-91a912750d1a * Version Independent ID: 4aae591f-bacf-d40b-b939-a11ca7c8d9db * Content: [Deploy Azure resources to multiple subscription and resource groups | Microsoft Docs](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-cross-resource-group-deployment) * Content Source: [articles/azure-resource-manager/resource-manager-cross-resource-group-deployment.md](https://github.com/Microsoft/azure-docs/blob/master/articles/azure-resource-manager/resource-manager-cross-resource-group-deployment.md) * Service: **azure-resource-manager** * GitHub Login: @tfitzmac * Microsoft Alias: **tomfitz**
process
you can deploy to only five resource groups in a single deployment in the note above there is a mention of being able to you can deploy to only five resource groups in a single deployment in this scenario does this include the one you specify as part of the new azurermresourcegroupdeployment command if so can this be clarified document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id bfef version independent id bacf content content source service azure resource manager github login tfitzmac microsoft alias tomfitz
1
218,462
16,761,190,022
IssuesEvent
2021-06-13 20:27:55
AlfredoCarias/Examen-Final-Analisis-de-sistemas
https://api.github.com/repos/AlfredoCarias/Examen-Final-Analisis-de-sistemas
closed
Realizacion del documento inicial del sistema
documentation
Se realizara el documento del sistema en donde tendra la informacion del funcionamiento del mismo.
1.0
Realizacion del documento inicial del sistema - Se realizara el documento del sistema en donde tendra la informacion del funcionamiento del mismo.
non_process
realizacion del documento inicial del sistema se realizara el documento del sistema en donde tendra la informacion del funcionamiento del mismo
0
351
2,793,530,877
IssuesEvent
2015-05-11 11:42:59
ecodistrict/IDSSDashboard
https://api.github.com/repos/ecodistrict/IDSSDashboard
closed
Modifying KPIs
enhancement form feedback 09102014 process step: analyse problem
In the window for modifying each KPI it could help to number the details according the sequence of filling these in. Also it could help to put a border around each option, not very clear at the moment what browse belongs to.
1.0
Modifying KPIs - In the window for modifying each KPI it could help to number the details according the sequence of filling these in. Also it could help to put a border around each option, not very clear at the moment what browse belongs to.
process
modifying kpis in the window for modifying each kpi it could help to number the details according the sequence of filling these in also it could help to put a border around each option not very clear at the moment what browse belongs to
1
21,955
30,453,405,706
IssuesEvent
2023-07-16 15:28:44
elastic/beats
https://api.github.com/repos/elastic/beats
closed
[libbeat] add_nomad_metdata - Typo in indexer name for 'allocation_uid'
bug libbeat :Processors Team:Integrations Stalled
The name of the indexer should be `allocation_uuid`. That is how it is shown in the documentation at https://www.elastic.co/guide/en/beats/heartbeat/7.16/add-nomad-metadata.html#_fields_2. The error that you get when trying to use the documented config is: ```json { "log.level": "warn", "@timestamp": "2022-01-07T14:52:37.228Z", "log.origin": { "file.name": "add_nomad_metadata/indexers.go", "file.line": 55 }, "message": "Unable to find indexing plugin allocation_uuid", "service.name": "filebeat", "ecs.version": "1.6.0" } ``` The affected code is: https://github.com/elastic/beats/blob/a1617c7c4eef6d12542ad117be8c68795d61ee40/x-pack/libbeat/processors/add_nomad_metadata/indexers.go#L18
1.0
[libbeat] add_nomad_metdata - Typo in indexer name for 'allocation_uid' - The name of the indexer should be `allocation_uuid`. That is how it is shown in the documentation at https://www.elastic.co/guide/en/beats/heartbeat/7.16/add-nomad-metadata.html#_fields_2. The error that you get when trying to use the documented config is: ```json { "log.level": "warn", "@timestamp": "2022-01-07T14:52:37.228Z", "log.origin": { "file.name": "add_nomad_metadata/indexers.go", "file.line": 55 }, "message": "Unable to find indexing plugin allocation_uuid", "service.name": "filebeat", "ecs.version": "1.6.0" } ``` The affected code is: https://github.com/elastic/beats/blob/a1617c7c4eef6d12542ad117be8c68795d61ee40/x-pack/libbeat/processors/add_nomad_metadata/indexers.go#L18
process
add nomad metdata typo in indexer name for allocation uid the name of the indexer should be allocation uuid that is how it is shown in the documentation at the error that you get when trying to use the documented config is json log level warn timestamp log origin file name add nomad metadata indexers go file line message unable to find indexing plugin allocation uuid service name filebeat ecs version the affected code is
1
21,557
29,892,775,878
IssuesEvent
2023-06-21 00:20:33
metabase/metabase
https://api.github.com/repos/metabase/metabase
closed
[MLv2] Uh, aren't we supposed to be using `:effective-type` everywhere, not `:base-type`?
.metabase-lib .Team/QueryProcessor :hammer_and_wrench:
If we have a column that is technically a `:type/Integer` column but we're coercing it to `:type/Timestamp`, we need to be treating it as `:type/Timestamp` everywhere. We need to do a mass search-and-replace in MLv2 to use `:effective-type` everywhere rather than `:base-type`. For conversion purposes keep in mind that `:field` clauses in legacy MBQL expect `:base-type` rather than `:effective-type`.
1.0
[MLv2] Uh, aren't we supposed to be using `:effective-type` everywhere, not `:base-type`? - If we have a column that is technically a `:type/Integer` column but we're coercing it to `:type/Timestamp`, we need to be treating it as `:type/Timestamp` everywhere. We need to do a mass search-and-replace in MLv2 to use `:effective-type` everywhere rather than `:base-type`. For conversion purposes keep in mind that `:field` clauses in legacy MBQL expect `:base-type` rather than `:effective-type`.
process
uh aren t we supposed to be using effective type everywhere not base type if we have a column that is technically a type integer column but we re coercing it to type timestamp we need to be treating it as type timestamp everywhere we need to do a mass search and replace in to use effective type everywhere rather than base type for conversion purposes keep in mind that field clauses in legacy mbql expect base type rather than effective type
1
35,946
9,691,016,116
IssuesEvent
2019-05-24 10:02:11
Lundalogik/lip
https://api.github.com/repos/Lundalogik/lip
opened
Support for Chromium
bug package builder
The package builder does not work with Chromium. This needs to be fixed since the desktop client will move completely to chromium within a few releases. Tricky situation: We want the package builder to work also for customers not using Chromium yet. Or can we say that once we fix support for Chromium we will release a new major version and customers using older DC will not be able to use the package builder?
1.0
Support for Chromium - The package builder does not work with Chromium. This needs to be fixed since the desktop client will move completely to chromium within a few releases. Tricky situation: We want the package builder to work also for customers not using Chromium yet. Or can we say that once we fix support for Chromium we will release a new major version and customers using older DC will not be able to use the package builder?
non_process
support for chromium the package builder does not work with chromium this needs to be fixed since the desktop client will move completely to chromium within a few releases tricky situation we want the package builder to work also for customers not using chromium yet or can we say that once we fix support for chromium we will release a new major version and customers using older dc will not be able to use the package builder
0
15,774
11,702,015,217
IssuesEvent
2020-03-06 21:01:01
reapit/foundations
https://api.github.com/repos/reapit/foundations
closed
Identify the correct datastore for capturing and presenting interaction stats
infrastructure platform-team
Our logging stream is not persistent or performant enough to serve as the source of truth for traffic events. We need to select a dedicated datastore that will best serve this purpose.
1.0
Identify the correct datastore for capturing and presenting interaction stats - Our logging stream is not persistent or performant enough to serve as the source of truth for traffic events. We need to select a dedicated datastore that will best serve this purpose.
non_process
identify the correct datastore for capturing and presenting interaction stats our logging stream is not persistent or performant enough to serve as the source of truth for traffic events we need to select a dedicated datastore that will best serve this purpose
0
74,696
9,105,290,620
IssuesEvent
2019-02-20 20:23:34
aspnet/AspNetCore
https://api.github.com/repos/aspnet/AspNetCore
closed
InputFormatter, OutputFormatter, and IApiRequestMetadata
1 - Ready area-mvc feature-API-Explorer needs design
I've been having some difficulty with these APIs when trying to use some custom media types. Here's a recap: - **InputFormatter** - **CanRead** `if (parsedContentType.IsSubsetOf(supportedMediaType))` (https://github.com/aspnet/Mvc/blob/dev/src/Microsoft.AspNetCore.Mvc.Core/Formatters/InputFormatter.cs#L78) - **GetSupportedContentTypes** `if (parsedMediaType.IsSubsetOf(parsedContentType))` (https://github.com/aspnet/Mvc/blob/dev/src/Microsoft.AspNetCore.Mvc.Core/Formatters/InputFormatter.cs#L153) - **OutputFormatter** - **GetSupportedContentTypes** `if (parsedMediaType.IsSubsetOf(parsedContentType))` (https://github.com/aspnet/Mvc/blob/dev/src/Microsoft.AspNetCore.Mvc.Core/Formatters/OutputFormatter.cs#L69) - **CanWriteResult** `if (supportedMediaType.IsSubsetOf(parsedContentType))` (https://github.com/aspnet/Mvc/blob/dev/src/Microsoft.AspNetCore.Mvc.Core/Formatters/OutputFormatter.cs#L122) Basically, the `InputFormatter` is liberal in what it can read because of `CanRead`, but when the time comes to use Api Explorer, the custom media type from the `Consumes` attribute does not show up because of `GetSupportedContentTypes`. Meanwhile, the `OutputFormatter` is very conservative and requires developers to either add every possible permutation of a custom media type to `SupportedMediaTypes` in order for it to allow it, or re-implement each formatter (XML, JSON) they wish to use with the custom media type (of course by inheriting and overriding the relevant methods.) Since the `InputFormatter` will say it `CanRead` _subsets_, I propose that `GetSupportedContentTypes` be modified to include the supplied content type in the `results` list when the supplied content type is a subset of a supported media type. I'm not sure about the `OutputFormatter`, I feel like there may be a good reason I'm not seeing for its current behavior. 
But for reference, to get around it, I've had to create a custom `OutputFormatter` that extends `JsonOutputFormatter`, which needs an `ArrayPool<char>` and `IOptions<MvcJsonOptions>`, and because formatters are registered on `MvcOptions`, I've also had to declare an `IConfigureOptions<MvcOptions>` class in order to get it all registered... kind of clunky.
1.0
InputFormatter, OutputFormatter, and IApiRequestMetadata - I've been having some difficulty with these APIs when trying to use some custom media types. Here's a recap: - **InputFormatter** - **CanRead** `if (parsedContentType.IsSubsetOf(supportedMediaType))` (https://github.com/aspnet/Mvc/blob/dev/src/Microsoft.AspNetCore.Mvc.Core/Formatters/InputFormatter.cs#L78) - **GetSupportedContentTypes** `if (parsedMediaType.IsSubsetOf(parsedContentType))` (https://github.com/aspnet/Mvc/blob/dev/src/Microsoft.AspNetCore.Mvc.Core/Formatters/InputFormatter.cs#L153) - **OutputFormatter** - **GetSupportedContentTypes** `if (parsedMediaType.IsSubsetOf(parsedContentType))` (https://github.com/aspnet/Mvc/blob/dev/src/Microsoft.AspNetCore.Mvc.Core/Formatters/OutputFormatter.cs#L69) - **CanWriteResult** `if (supportedMediaType.IsSubsetOf(parsedContentType))` (https://github.com/aspnet/Mvc/blob/dev/src/Microsoft.AspNetCore.Mvc.Core/Formatters/OutputFormatter.cs#L122) Basically, the `InputFormatter` is liberal in what it can read because of `CanRead`, but when the time comes to use Api Explorer, the custom media type from the `Consumes` attribute does not show up because of `GetSupportedContentTypes`. Meanwhile, the `OutputFormatter` is very conservative and requires developers to either add every possible permutation of a custom media type to `SupportedMediaTypes` in order for it to allow it, or re-implement each formatter (XML, JSON) they wish to use with the custom media type (of course by inheriting and overriding the relevant methods.) Since the `InputFormatter` will say it `CanRead` _subsets_, I propose that `GetSupportedContentTypes` be modified to include the supplied content type in the `results` list when the supplied content type is a subset of a supported media type. I'm not sure about the `OutputFormatter`, I feel like there may be a good reason I'm not seeing for its current behavior. 
But for reference, to get around it, I've had to create a custom `OutputFormatter` that extends `JsonOutputFormatter`, which needs an `ArrayPool<char>` and `IOptions<MvcJsonOptions>`, and because formatters are registered on `MvcOptions`, I've also had to declare an `IConfigureOptions<MvcOptions>` class in order to get it all registered... kind of clunky.
non_process
inputformatter outputformatter and iapirequestmetadata i ve been having some difficulty with these apis when trying to use some custom media types here s a recap inputformatter canread if parsedcontenttype issubsetof supportedmediatype getsupportedcontenttypes if parsedmediatype issubsetof parsedcontenttype outputformatter getsupportedcontenttypes if parsedmediatype issubsetof parsedcontenttype canwriteresult if supportedmediatype issubsetof parsedcontenttype basically the inputformatter is liberal in what in can read because of canread but when the time comes to use api explorer the custom media type from the consumes attribute does not show up because of getsupportedcontenttypes meanwhile the outputformatter is very conservative and requires developers to either add every possible permutation of a custom media type to supportedmediatypes in order for it to allow it or re implement each formatter xml json they wish to use with the custom media type of course by inheriting and overriding the relevant methods since the inputformatter will say it canread subsets i propose that getsupportedcontenttypes be modified to include the supplied content type in the results list when the supplied content type is a subset of a supported media type i m not sure about the outputformatter i feel like there may be a good reason i m not seeing for its current behavior but for reference to get around it i ve had to create a custom outputformatter that extends jsonoutputformatter which needs an arraypool and ioptions and because formatters are registered on mvcoptions i ve also had to declare an iconfigureoptions class in order to get it all registered kind of clunky
0