Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 1 744 | labels stringlengths 4 574 | body stringlengths 9 211k | index stringclasses 10 values | text_combine stringlengths 96 211k | label stringclasses 2 values | text stringlengths 96 188k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
17,714 | 23,610,025,980 | IssuesEvent | 2022-08-24 11:37:10 | bjorkgard/public-secretary | https://api.github.com/repos/bjorkgard/public-secretary | closed | Exportera adresslista | enhancement Publisher in process | En användare med rättighet skall kunna exportera en adresslista
- Komplett (default)
- Enbart verksamma
- Enbart kontaktpersoner
- Med nödkontakter
Sorteringar
- Bokstavsordning
- Gruppordning (default)
Format (default är församlingens inställning):
- PDF
- Excel | 1.0 | Exportera adresslista - En användare med rättighet skall kunna exportera en adresslista
- Komplett (default)
- Enbart verksamma
- Enbart kontaktpersoner
- Med nödkontakter
Sorteringar
- Bokstavsordning
- Gruppordning (default)
Format (default är församlingens inställning):
- PDF
- Excel | process | exportera adresslista en användare med rättighet skall kunna exportera en adresslista komplett default enbart verksamma enbart kontaktpersoner med nödkontakter sorteringar bokstavsordning gruppordning default format default är församlingens inställning pdf excel | 1 |
11,527 | 9,222,883,795 | IssuesEvent | 2019-03-12 00:52:30 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | closed | All libcurl-related tests failing on Ubuntu.1804.Arm64.Open-arm64-Release | area-Infrastructure | System.Net.Http.Native can't be loaded, I'm assuming because it's unable to load libcurl:
```
Unhandled Exception of Type System.TypeInitializationException
Message :
System.TypeInitializationException : The type initializer for 'System.Net.Http.CurlHandler' threw an exception.
---- System.TypeInitializationException : The type initializer for 'Http' threw an exception.
-------- System.TypeInitializationException : The type initializer for 'HttpInitializer' threw an exception.
------------ System.DllNotFoundException : Unable to load shared library 'System.Net.Http.Native' or one of its dependencies. In order to help diagnose loading problems, consider setting the LD_DEBUG environment variable: libSystem.Net.Http.Native: cannot open shared object file: No such file or directory
Stack Trace :
at System.Net.Http.CurlHandler..ctor() in /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_arm64+TestOuter_false_prtest/src/System.Net.Http/src/System/Net/Http/CurlHandler/CurlHandler.cs:line 188
at System.Net.Http.HttpClientHandler..ctor(Boolean useSocketsHttpHandler) in /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_arm64+TestOuter_false_prtest/src/System.Net.Http/src/System/Net/Http/HttpClientHandler.Unix.cs:line 37
----- Inner Stack Trace -----
at Interop.Http.GetSupportedFeatures()
at System.Net.Http.CurlHandler..cctor() in /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_arm64+TestOuter_false_prtest/src/System.Net.Http/src/System/Net/Http/CurlHandler/CurlHandler.cs:line 164
----- Inner Stack Trace -----
at Interop.HttpInitializer.Initialize() in /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_arm64+TestOuter_false_prtest/src/Common/src/Interop/Unix/System.Net.Http.Native/Interop.Initialization.cs:line 47
at Interop.Http..cctor() in /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_arm64+TestOuter_false_prtest/src/Common/src/Interop/Unix/System.Net.Http.Native/Interop.Initialization.cs:line 18
----- Inner Stack Trace -----
at Interop.Http.GetSslVersionDescription()
at Interop.HttpInitializer..cctor() in /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_arm64+TestOuter_false_prtest/src/Common/src/Interop/Unix/System.Net.Http.Native/Interop.Initialization.cs:line 27
```
All PRs are failing as a result. | 1.0 | All libcurl-related tests failing on Ubuntu.1804.Arm64.Open-arm64-Release - System.Net.Http.Native can't be loaded, I'm assuming because it's unable to load libcurl:
```
Unhandled Exception of Type System.TypeInitializationException
Message :
System.TypeInitializationException : The type initializer for 'System.Net.Http.CurlHandler' threw an exception.
---- System.TypeInitializationException : The type initializer for 'Http' threw an exception.
-------- System.TypeInitializationException : The type initializer for 'HttpInitializer' threw an exception.
------------ System.DllNotFoundException : Unable to load shared library 'System.Net.Http.Native' or one of its dependencies. In order to help diagnose loading problems, consider setting the LD_DEBUG environment variable: libSystem.Net.Http.Native: cannot open shared object file: No such file or directory
Stack Trace :
at System.Net.Http.CurlHandler..ctor() in /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_arm64+TestOuter_false_prtest/src/System.Net.Http/src/System/Net/Http/CurlHandler/CurlHandler.cs:line 188
at System.Net.Http.HttpClientHandler..ctor(Boolean useSocketsHttpHandler) in /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_arm64+TestOuter_false_prtest/src/System.Net.Http/src/System/Net/Http/HttpClientHandler.Unix.cs:line 37
----- Inner Stack Trace -----
at Interop.Http.GetSupportedFeatures()
at System.Net.Http.CurlHandler..cctor() in /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_arm64+TestOuter_false_prtest/src/System.Net.Http/src/System/Net/Http/CurlHandler/CurlHandler.cs:line 164
----- Inner Stack Trace -----
at Interop.HttpInitializer.Initialize() in /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_arm64+TestOuter_false_prtest/src/Common/src/Interop/Unix/System.Net.Http.Native/Interop.Initialization.cs:line 47
at Interop.Http..cctor() in /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_arm64+TestOuter_false_prtest/src/Common/src/Interop/Unix/System.Net.Http.Native/Interop.Initialization.cs:line 18
----- Inner Stack Trace -----
at Interop.Http.GetSslVersionDescription()
at Interop.HttpInitializer..cctor() in /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_arm64+TestOuter_false_prtest/src/Common/src/Interop/Unix/System.Net.Http.Native/Interop.Initialization.cs:line 27
```
All PRs are failing as a result. | non_process | all libcurl related tests failing on ubuntu open release system net http native can t be loaded i m assuming because it s unable to load libcurl unhandled exception of type system typeinitializationexception message system typeinitializationexception the type initializer for system net http curlhandler threw an exception system typeinitializationexception the type initializer for http threw an exception system typeinitializationexception the type initializer for httpinitializer threw an exception system dllnotfoundexception unable to load shared library system net http native or one of its dependencies in order to help diagnose loading problems consider setting the ld debug environment variable libsystem net http native cannot open shared object file no such file or directory stack trace at system net http curlhandler ctor in mnt j workspace dotnet corefx master linux tgroup netcoreapp cgroup release agroup testouter false prtest src system net http src system net http curlhandler curlhandler cs line at system net http httpclienthandler ctor boolean usesocketshttphandler in mnt j workspace dotnet corefx master linux tgroup netcoreapp cgroup release agroup testouter false prtest src system net http src system net http httpclienthandler unix cs line inner stack trace at interop http getsupportedfeatures at system net http curlhandler cctor in mnt j workspace dotnet corefx master linux tgroup netcoreapp cgroup release agroup testouter false prtest src system net http src system net http curlhandler curlhandler cs line inner stack trace at interop httpinitializer initialize in mnt j workspace dotnet corefx master linux tgroup netcoreapp cgroup release agroup testouter false prtest src common src interop unix system net http native interop initialization cs line at interop http cctor in mnt j workspace dotnet corefx master linux tgroup netcoreapp cgroup release agroup testouter false prtest src common src interop unix 
system net http native interop initialization cs line inner stack trace at interop http getsslversiondescription at interop httpinitializer cctor in mnt j workspace dotnet corefx master linux tgroup netcoreapp cgroup release agroup testouter false prtest src common src interop unix system net http native interop initialization cs line all prs are failing as a result | 0 |
51,811 | 21,884,317,472 | IssuesEvent | 2022-05-19 17:00:58 | emergenzeHack/ukrainehelp.emergenzehack.info_segnalazioni | https://api.github.com/repos/emergenzeHack/ukrainehelp.emergenzehack.info_segnalazioni | opened | Sportello dedicato all'emergenza ucraina | Services Legal hospitality healthcare | <pre><yamldata>
servicetypes:
materialGoods: false
hospitality: true
transport: false
healthcare: true
Legal: true
translation: false
job: false
psychologicalSupport: false
Children: false
disability: false
women: false
education: false
offerFromWho: Comune di San Donato Milanese
title: Sportello dedicato all'emergenza ucraina
recipients: ''
description: 'Lo sportello stranieri presso il Comune di San Donato Milanese fornisce
informazioni ed assistenza in merito alle pratiche dei cittadini stranieri (permesso
di soggiorno, cittadinanza, ricongiungimento ecc). Sul sito istituzionale tutte
le info:
https://www.comune.sandonatomilanese.mi.it/stranieri
Inoltre è stata dedicata una pagina specifica alla emergenza ucraina divisa in due
sezioni: 1) per chi arriva in Italia; 2) per chi vuole aiutare.'
url: https://www.comune.sandonatomilanese.mi.it/emergenza-ucraina
address:
mode: manual
address:
address1: ''
address2: ''
city: San Donato Milanese
state: Italia
country: ''
zip: '20097'
iConfirmToHaveReadAndAcceptedInformativeToThreatPersonalData: true
label: services
submit: true
</yamldata></pre> | 1.0 | Sportello dedicato all'emergenza ucraina - <pre><yamldata>
servicetypes:
materialGoods: false
hospitality: true
transport: false
healthcare: true
Legal: true
translation: false
job: false
psychologicalSupport: false
Children: false
disability: false
women: false
education: false
offerFromWho: Comune di San Donato Milanese
title: Sportello dedicato all'emergenza ucraina
recipients: ''
description: 'Lo sportello stranieri presso il Comune di San Donato Milanese fornisce
informazioni ed assistenza in merito alle pratiche dei cittadini stranieri (permesso
di soggiorno, cittadinanza, ricongiungimento ecc). Sul sito istituzionale tutte
le info:
https://www.comune.sandonatomilanese.mi.it/stranieri
Inoltre è stata dedicata una pagina specifica alla emergenza ucraina divisa in due
sezioni: 1) per chi arriva in Italia; 2) per chi vuole aiutare.'
url: https://www.comune.sandonatomilanese.mi.it/emergenza-ucraina
address:
mode: manual
address:
address1: ''
address2: ''
city: San Donato Milanese
state: Italia
country: ''
zip: '20097'
iConfirmToHaveReadAndAcceptedInformativeToThreatPersonalData: true
label: services
submit: true
</yamldata></pre> | non_process | sportello dedicato all emergenza ucraina servicetypes materialgoods false hospitality true transport false healthcare true legal true translation false job false psychologicalsupport false children false disability false women false education false offerfromwho comune di san donato milanese title sportello dedicato all emergenza ucraina recipients description lo sportello stranieri presso il comune di san donato milanese fornisce informazioni ed assistenza in merito alle pratiche dei cittadini stranieri permesso di soggiorno cittadinanza ricongiungimento ecc sul sito istituzionale tutte le info inoltre è stata dedicata una pagina specifica alla emergenza ucraina divisa in due sezioni per chi arriva in italia per chi vuole aiutare url address mode manual address city san donato milanese state italia country zip iconfirmtohavereadandacceptedinformativetothreatpersonaldata true label services submit true | 0 |
19,352 | 3,193,613,674 | IssuesEvent | 2015-09-30 07:00:25 | netty/netty | https://api.github.com/repos/netty/netty | closed | Adding DefaultHttpHeaders to itself creates infinite loop | defect | Example:
```java
public void test() {
    HttpHeaders headers = new DefaultHttpHeaders();
    headers.add("foo", "bar");
    headers.add(headers);
    // This will never end
    headers.forEach(entry -> {});
}
```
| 1.0 | Adding DefaultHttpHeaders to itself creates infinite loop - Example:
```java
public void test() {
    HttpHeaders headers = new DefaultHttpHeaders();
    headers.add("foo", "bar");
    headers.add(headers);
    // This will never end
    headers.forEach(entry -> {});
}
```
| non_process | adding defaulthttpheaders to itself creates infinite loop example public void test httpheaders headers new defaulthttpheaders headers add foo bar headers add headers this will never end headers foreach entry | 0 |
542,150 | 15,856,053,044 | IssuesEvent | 2021-04-08 01:24:22 | sonia-auv/octopus-telemetry | https://api.github.com/repos/sonia-auv/octopus-telemetry | closed | Side bar | Priority: High Type: Feature | **Warning:** Before creating an issue or task, make sure that it does not already exist in the [issue tracker](../). Thank you.
## Context
Add side bar to drop module
## Changes
<!-- Give a brief description of the components that need to change and how -->
## Comments
<!-- Add further comments if needed -->
| 1.0 | Side bar - **Warning:** Before creating an issue or task, make sure that it does not already exist in the [issue tracker](../). Thank you.
## Context
Add side bar to drop module
## Changes
<!-- Give a brief description of the components that need to change and how -->
## Comments
<!-- Add further comments if needed -->
| non_process | side bar warning before creating an issue or task make sure that it does not already exists in the thank you context add side bar to drop module changes comments | 0 |
2,111 | 2,603,976,485 | IssuesEvent | 2015-02-24 19:01:37 | chrsmith/nishazi6 | https://api.github.com/repos/chrsmith/nishazi6 | opened | 沈阳龟头上肉芽 | auto-migrated Priority-Medium Type-Defect | ```
沈阳龟头上肉芽〓沈陽軍區政治部醫院性病〓TEL:024-31023308��
�成立于1946年,68年專注于性傳播疾病的研究和治療。位于沈�
��市沈河區二緯路32號。是一所與新中國同建立共輝煌的歷史�
��久、設備精良、技術權威、專家云集,是預防、保健、醫療
、科研康復為一體的綜合性醫院。是國家首批公立甲等部隊��
�院、全國首批醫療規范定點單位,是第四軍醫大學、東南大�
��等知名高等院校的教學醫院。曾被中國人民解放軍空軍后勤
部衛生部評為衛生工作先進單位,先后兩次榮立集體二等功��
�
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:21 | 1.0 | 沈阳龟头上肉芽 - ```
沈阳龟头上肉芽〓沈陽軍區政治部醫院性病〓TEL:024-31023308��
�成立于1946年,68年專注于性傳播疾病的研究和治療。位于沈�
��市沈河區二緯路32號。是一所與新中國同建立共輝煌的歷史�
��久、設備精良、技術權威、專家云集,是預防、保健、醫療
、科研康復為一體的綜合性醫院。是國家首批公立甲等部隊��
�院、全國首批醫療規范定點單位,是第四軍醫大學、東南大�
��等知名高等院校的教學醫院。曾被中國人民解放軍空軍后勤
部衛生部評為衛生工作先進單位,先后兩次榮立集體二等功��
�
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:21 | non_process | 沈阳龟头上肉芽 沈阳龟头上肉芽〓沈陽軍區政治部醫院性病〓tel: �� � , 。位于沈� �� 。是一所與新中國同建立共輝煌的歷史� ��久、設備精良、技術權威、專家云集,是預防、保健、醫療 、科研康復為一體的綜合性醫院。是國家首批公立甲等部隊�� �院、全國首批醫療規范定點單位,是第四軍醫大學、東南大� ��等知名高等院校的教學醫院。曾被中國人民解放軍空軍后勤 部衛生部評為衛生工作先進單位,先后兩次榮立集體二等功�� � original issue reported on code google com by gmail com on jun at | 0 |
26,018 | 12,341,137,200 | IssuesEvent | 2020-05-14 21:16:50 | microsoft/vscode-cpptools | https://api.github.com/repos/microsoft/vscode-cpptools | closed | Intellisense Progressively Gets Slower | Language Service bug performance | **Type: LanguageService**
OS and Version: Windows 10 - Home Edition
VS Code Version: Latest
C/C++ Extension Version: Latest
**Details**
When I first open VSCode, intellisense runs perfectly fine and takes almost no time to come up. As changes to files are made, and as I build and rebuild the source, intellisense gets progressively slower. The little flame icon shows up at the bottom more often and for longer periods of time, eventually getting to the point where typing each character triggers the flame and intellisense is virtually unusable. I have to finish typing and wait for the flame to go away.
If I close VSCode and open it again, I can reset the intellisense to at least be useful again, but only for a short period of time. This is really annoying.
**To Reproduce**
**c_cpp_properties.json**
```json
{
"configurations": [
{
"name": "Win32",
"includePath": [
"C:\\Program Files (x86)\\Microsoft SDKs\\Windows\\v7.1A\\Include",
"C:\\Program Files (x86)\\Microsoft Visual Studio 12.0\\VC\\include",
"C:\\Program Files (x86)\\Microsoft Visual Studio 12.0\\crt\\src",
"C:\\Program Files (x86)\\Windows Kits\\8.0\\Include\\shared",
"${workspaceFolder}\\dev\\src\\**"
],
"defines": [],
"intelliSenseMode": "msvc-x64"
}
],
"version": 4
}
```
**Expected behavior**
It really should not take this long for intellisense to come up, especially since, when first opening VSCode, it seems to run just fine. It shouldn't really rebuild the whole database either. Includes from CRT and VC should be built once and then never again, as those never change.
| 1.0 | Intellisense Progressively Gets Slower - **Type: LanguageService**
OS and Version: Windows 10 - Home Edition
VS Code Version: Latest
C/C++ Extension Version: Latest
**Details**
When I first open VSCode, intellisense runs perfectly fine and takes almost no time to come up. As changes to files are made, and as I build and rebuild the source, intellisense gets progressively slower. The little flame icon shows up at the bottom more often and for longer periods of time, eventually getting to the point where typing each character triggers the flame and intellisense is virtually unusable. I have to finish typing and wait for the flame to go away.
If I close VSCode and open it again, I can reset the intellisense to at least be useful again, but only for a short period of time. This is really annoying.
**To Reproduce**
**c_cpp_properties.json**
```json
{
"configurations": [
{
"name": "Win32",
"includePath": [
"C:\\Program Files (x86)\\Microsoft SDKs\\Windows\\v7.1A\\Include",
"C:\\Program Files (x86)\\Microsoft Visual Studio 12.0\\VC\\include",
"C:\\Program Files (x86)\\Microsoft Visual Studio 12.0\\crt\\src",
"C:\\Program Files (x86)\\Windows Kits\\8.0\\Include\\shared",
"${workspaceFolder}\\dev\\src\\**"
],
"defines": [],
"intelliSenseMode": "msvc-x64"
}
],
"version": 4
}
```
**Expected behavior**
It really should not take this long for intellisense to come up, especially since, when first opening VSCode, it seems to run just fine. It shouldn't really rebuild the whole database either. Includes from CRT and VC should be built once and then never again, as those never change.
| non_process | intellisense progressively gets slower type languageservice os and version windows home edition vs code version latest c c extension version latest details when i first open vscode intellisense runs perfectly fine and takes almost no time to come up as changes to files are made and as i build and rebuild the source intellisense gets progressively slower the little flame icon shows up at the bottom more often and for longer periods of time eventually getting to the point where typing each character triggers the flame and intellisense is virtually unusable i have to finish typing and wait for the flame to go away if i close vscode and open it again i can reset the intellisense to at least be useful again but only for a short period of time this is really annoying to reproduce c cpp properties json json configurations name includepath c program files microsoft sdks windows include c program files microsoft visual studio vc include c program files microsoft visual studio crt src c program files windows kits include shared workspacefolder dev src defines intellisensemode msvc version expected behavior it really should not take this long for intellisense to come up especially since first opening vscode it seems to run just fine it shouldnt really be rebuild the whole database either includes from crt and vc should be built once and then never again as those never change | 0 |
4,734 | 7,573,646,163 | IssuesEvent | 2018-04-23 18:25:07 | resin-io/etcher | https://api.github.com/repos/resin-io/etcher | opened | Publish Debian packages to Bintray through Resin CI on every commit | Process priority:low | So we can easily test Etcher Pro updates. | 1.0 | Publish Debian packages to Bintray through Resin CI on every commit - So we can easily test Etcher Pro updates. | process | publish debian packages to bintray through resin ci on every commit so we can easily test etcher pro updates | 1 |
116,722 | 9,882,072,091 | IssuesEvent | 2019-06-24 15:58:58 | microsoft/vscode-remote-release | https://api.github.com/repos/microsoft/vscode-remote-release | opened | Test: Alpine support | containers testplan-item | #54
- [ ] Windows
- [ ] anyOS
Complexity: 4
Check that the container definitions for Alpine work and try some typical actions depending on the definition. (Like compile, dynamically forward a port, IntelliSense.)
TODO @chrmarti: More details. | 1.0 | Test: Alpine support - #54
- [ ] Windows
- [ ] anyOS
Complexity: 4
Check that the container definitions for Alpine work and try some typical actions depending on the definition. (Like compile, dynamically forward a port, IntelliSense.)
TODO @chrmarti: More details. | non_process | test alpine support windows anyos complexity check that the container definitions for alpine work and try some typical actions depending on the definition like compile dynamically forward a port intellisense todo chrmarti more details | 0 |
184,005 | 21,784,782,477 | IssuesEvent | 2022-05-14 01:18:01 | mgh3326/railbook | https://api.github.com/repos/mgh3326/railbook | closed | WS-2021-0152 (High) detected in color-string-1.5.3.tgz - autoclosed | security vulnerability | ## WS-2021-0152 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>color-string-1.5.3.tgz</b></p></summary>
<p>Parser and generator for CSS color strings</p>
<p>Library home page: <a href="https://registry.npmjs.org/color-string/-/color-string-1.5.3.tgz">https://registry.npmjs.org/color-string/-/color-string-1.5.3.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/color-string/package.json</p>
<p>
Dependency Hierarchy:
- webpacker-4.2.2.tgz (Root Library)
- optimize-css-assets-webpack-plugin-5.0.3.tgz
- cssnano-4.1.10.tgz
- cssnano-preset-default-4.0.7.tgz
- postcss-colormin-4.0.3.tgz
- color-3.1.2.tgz
- :x: **color-string-1.5.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mgh3326/railbook/commit/b604637cd15b3c4cdc89d77ff0375e14c767a9ce">b604637cd15b3c4cdc89d77ff0375e14c767a9ce</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Regular Expression Denial of Service (ReDoS) was found in color-string before 1.5.5.
<p>Publish Date: 2021-03-12
<p>URL: <a href=https://github.com/Qix-/color-string/commit/0789e21284c33d89ebc4ab4ca6f759b9375ac9d3>WS-2021-0152</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/Qix-/color-string/releases/tag/1.5.5">https://github.com/Qix-/color-string/releases/tag/1.5.5</a></p>
<p>Release Date: 2021-03-12</p>
<p>Fix Resolution (color-string): 1.5.5</p>
<p>Direct dependency fix Resolution (@rails/webpacker): 4.3.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2021-0152 (High) detected in color-string-1.5.3.tgz - autoclosed - ## WS-2021-0152 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>color-string-1.5.3.tgz</b></p></summary>
<p>Parser and generator for CSS color strings</p>
<p>Library home page: <a href="https://registry.npmjs.org/color-string/-/color-string-1.5.3.tgz">https://registry.npmjs.org/color-string/-/color-string-1.5.3.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/color-string/package.json</p>
<p>
Dependency Hierarchy:
- webpacker-4.2.2.tgz (Root Library)
- optimize-css-assets-webpack-plugin-5.0.3.tgz
- cssnano-4.1.10.tgz
- cssnano-preset-default-4.0.7.tgz
- postcss-colormin-4.0.3.tgz
- color-3.1.2.tgz
- :x: **color-string-1.5.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mgh3326/railbook/commit/b604637cd15b3c4cdc89d77ff0375e14c767a9ce">b604637cd15b3c4cdc89d77ff0375e14c767a9ce</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Regular Expression Denial of Service (ReDoS) was found in color-string before 1.5.5.
<p>Publish Date: 2021-03-12
<p>URL: <a href=https://github.com/Qix-/color-string/commit/0789e21284c33d89ebc4ab4ca6f759b9375ac9d3>WS-2021-0152</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/Qix-/color-string/releases/tag/1.5.5">https://github.com/Qix-/color-string/releases/tag/1.5.5</a></p>
<p>Release Date: 2021-03-12</p>
<p>Fix Resolution (color-string): 1.5.5</p>
<p>Direct dependency fix Resolution (@rails/webpacker): 4.3.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | ws high detected in color string tgz autoclosed ws high severity vulnerability vulnerable library color string tgz parser and generator for css color strings library home page a href path to dependency file package json path to vulnerable library node modules color string package json dependency hierarchy webpacker tgz root library optimize css assets webpack plugin tgz cssnano tgz cssnano preset default tgz postcss colormin tgz color tgz x color string tgz vulnerable library found in head commit a href vulnerability details regular expression denial of service redos was found in color string before publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution color string direct dependency fix resolution rails webpacker step up your open source security game with whitesource | 0 |
22,159 | 30,700,952,387 | IssuesEvent | 2023-07-26 23:19:01 | esmero/ami | https://api.github.com/repos/esmero/ami | opened | CSV exporter might fail if the CID used by the temporary storage surpasses the max. DB length | bug Find and Replace VBO Actions CSV Processing | # What?
Unheard of before, but I should have known better, because I saw (and fixed) something similar while building the LoD reconciliation service.
During a CSV export, to keep the order of children/parents in place, we generate a Batch that uses temporary storage. Temporary storage requires a unique ID per item, and that ID (to avoid overlaps while multiple users export at the same time, or a single user does the same) is generated using a combination of the Views, the Display ID, etc.
See:
https://github.com/esmero/ami/blob/9283bf06670296c29ed3bec43edbaf9769f23947/src/Plugin/Action/AmiStrawberryfieldCSVexport.php#L553-L555
This name, when the Views Machine name + the Display name are very long (it happened to me, I promise), will fail badly at the DB level!
(gosh, Drupal) giving you a truly scary exception.
The solution is to reduce the whole thing to an md5() hash and be done.
| 1.0 | CSV exporter might fail if the CID used by the temporary storage surpasses the max. DB length - # What?
Unheard of before, but I should have known better, because I saw (and fixed) something similar while building the LoD reconciliation service.
During a CSV export, to keep the order of children/parents in place, we generate a Batch that uses temporary storage. Temporary storage requires a unique ID per item, and that ID (to avoid overlaps while multiple users export at the same time, or a single user does the same) is generated using a combination of the Views, the Display ID, etc.
See:
https://github.com/esmero/ami/blob/9283bf06670296c29ed3bec43edbaf9769f23947/src/Plugin/Action/AmiStrawberryfieldCSVexport.php#L553-L555
This name, when the Views Machine name + the Display name are very long (it happened to me, I promise), will fail badly at the DB level!
(gosh, Drupal) giving you a truly scary exception.
The solution is to reduce the whole thing to an md5() hash and be done.
| process | casv exporter might fail if the cid used by the temporary storage surpasses the max db length what unheard of before but i should have known better bc i saw something and fixed similar while building the lod reconciliation service during an csv export to keep the order of children parents in place we generate a batch that uses temporary storage temporary storage requires a unique id per item and that one to avoid overlaps while multiple users export at the same time or a single user does the same is generated using a combination of the views the display id etc see this name when the views machine name the display name are very long happened to me i promise will fail badly at the db level gosh drupal giving you a truly scare exception solution is to reduce the whole thing to an and done | 1 |
52,294 | 22,141,777,031 | IssuesEvent | 2022-06-03 07:42:55 | klubcoin/lcn-mobile | https://api.github.com/repos/klubcoin/lcn-mobile | opened | [Chat Service] Store chat message in server | Chat / Messaging Services | ### **Description:**
- Store chat messages in the server.
- Add an API so a user can delete their message in a conversation
- Add an API so a user can edit their message in a conversation
- Add a field/API so a user can reply to a message in a conversation
- Store chat message in server.
- Add API so a user can delete their message in a conversation
- Add API so a user can edit their message in a conversation
- Add field/API so a user can reply to a message in a conversation | non_process | store chat message in server description store chat message in server add api for user can delete their message in a conversation add api for user can edit their message in a conversation add field api for user reply a message in a conversation | 0 |
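A minimal in-memory sketch of the four operations requested in the chat-service issue above (store, delete, edit, reply); the class and method names are hypothetical, not the project's actual API:

```python
import itertools

class MessageStore:
    """Toy conversation store supporting the four operations in the issue."""

    def __init__(self):
        self._ids = itertools.count(1)
        self.messages = {}  # id -> {"text": str, "reply_to": id or None}

    def send(self, text, reply_to=None):
        """Store a message; reply_to links it to an earlier message's id."""
        mid = next(self._ids)
        self.messages[mid] = {"text": text, "reply_to": reply_to}
        return mid

    def edit(self, mid, new_text):
        self.messages[mid]["text"] = new_text

    def delete(self, mid):
        del self.messages[mid]
```

A real server would add authorization checks (only the author may edit/delete) and persistence, which this sketch deliberately omits.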
12,674 | 15,043,448,926 | IssuesEvent | 2021-02-03 00:44:23 | allinurl/goaccess | https://api.github.com/repos/allinurl/goaccess | closed | Segfault on --log-format=CADDY with additional non-JSON fields | bug log-processing log/date/time format | With the following standard log output from CADDY (I just anonymized host and IP)
```
2021/02/02 12:22:53.394 error http.log.access.log0 handled request {"request": {"remote_addr": "11.22.33.44:5678", "proto": "HTTP/2.0", "method": "GET", "host": "example.com", "uri": "/favicon.ico", "headers": {"Cache-Control": ["no-cache"], "Accept": ["image/avif,image/webp,image/apng,image/svg+xml,image/*,*/*;q=0.8"], "Referer": ["https://example.com/"], "Accept-Encoding": ["gzip, deflate, br"], "Pragma": ["no-cache"], "Sec-Ch-Ua": ["\"Chromium\";v=\"88\", \"Google Chrome\";v=\"88\", \";Not A Brand\";v=\"99\""], "Sec-Ch-Ua-Mobile": ["?0"], "User-Agent": ["Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.104 Safari/537.36"], "Sec-Fetch-Site": ["same-origin"], "Sec-Fetch-Mode": ["no-cors"], "Sec-Fetch-Dest": ["image"], "Accept-Language": ["de-DE,de;q=0.9,en-US;q=0.8,en;q=0.7"]}, "tls": {"resumed": false, "version": 772, "cipher_suite": 4865, "proto": "h2", "proto_mutual": true, "server_name": "example.com"}}, "common_log": "11.22.33.44 - - [02/Feb/2021:12:22:53 +0000] \"GET /favicon.ico HTTP/2.0\" 404 0", "duration": 0.000296759, "size": 0, "status": 404, "resp_headers": {"Server": ["Caddy"]}}
```
I get a seg fault that is sometimes displayed after the violet ncurses footer as
```
[PARSING oneline.log] {0} @ {0/s} /Segmentation fault
```
and sometimes as
```
==14530== GoAccess 1.4.5 crashed by Sig 11
==14530==
==14530== VALUES AT CRASH POINT
==14530==
==14530== FILE: oneline.log
==14530== Line number: 0
==14530== Invalid data: 0
==14530== Piping: 0
==14530==
==14530== STACK TRACE:
==14530==
==14530== 0 goaccess(sigsegv_handler+0x16c) [0x5597001e0eec]
==14530== 1 /lib/x86_64-linux-gnu/libpthread.so.0(+0x12730) [0x7f0b4be13730]
==14530== 2 goaccess(+0x1fad3) [0x5597001e8ad3]
==14530== 3 goaccess(+0x356da) [0x5597001fe6da]
==14530== 4 goaccess(parse_json_string+0x21d) [0x559700202efd]
==14530== 5 goaccess(pre_process_log+0x282) [0x5597001fefb2]
==14530== 6 goaccess(+0x368c3) [0x5597001ff8c3]
==14530== 7 goaccess(parse_log+0xc7) [0x5597001ffe37]
==14530== 8 goaccess(main+0x29d) [0x5597001dbf8d]
==14530== 9 /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xeb) [0x7f0b4bc6409b]
==14530== 10 goaccess(_start+0x2a) [0x5597001ddb5a]
==14530==
==14530== Please report it by opening an issue on GitHub:
==14530== https://github.com/allinurl/goaccess/issues
```
I use the most recent version obtained via the "Official GoAccess' Debian/Ubuntu Repository" method as described on https://goaccess.io/download | 1.0 | Segfault on --log-format=CADDY with additional non-JSON fields - With the following standard log output from CADDY (I just anonymized host and IP)
```
2021/02/02 12:22:53.394 error http.log.access.log0 handled request {"request": {"remote_addr": "11.22.33.44:5678", "proto": "HTTP/2.0", "method": "GET", "host": "example.com", "uri": "/favicon.ico", "headers": {"Cache-Control": ["no-cache"], "Accept": ["image/avif,image/webp,image/apng,image/svg+xml,image/*,*/*;q=0.8"], "Referer": ["https://example.com/"], "Accept-Encoding": ["gzip, deflate, br"], "Pragma": ["no-cache"], "Sec-Ch-Ua": ["\"Chromium\";v=\"88\", \"Google Chrome\";v=\"88\", \";Not A Brand\";v=\"99\""], "Sec-Ch-Ua-Mobile": ["?0"], "User-Agent": ["Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.104 Safari/537.36"], "Sec-Fetch-Site": ["same-origin"], "Sec-Fetch-Mode": ["no-cors"], "Sec-Fetch-Dest": ["image"], "Accept-Language": ["de-DE,de;q=0.9,en-US;q=0.8,en;q=0.7"]}, "tls": {"resumed": false, "version": 772, "cipher_suite": 4865, "proto": "h2", "proto_mutual": true, "server_name": "example.com"}}, "common_log": "11.22.33.44 - - [02/Feb/2021:12:22:53 +0000] \"GET /favicon.ico HTTP/2.0\" 404 0", "duration": 0.000296759, "size": 0, "status": 404, "resp_headers": {"Server": ["Caddy"]}}
```
I get a seg fault that is sometimes displayed after the violet ncurses footer as
```
[PARSING oneline.log] {0} @ {0/s} /Segmentation fault
```
and sometimes as
```
==14530== GoAccess 1.4.5 crashed by Sig 11
==14530==
==14530== VALUES AT CRASH POINT
==14530==
==14530== FILE: oneline.log
==14530== Line number: 0
==14530== Invalid data: 0
==14530== Piping: 0
==14530==
==14530== STACK TRACE:
==14530==
==14530== 0 goaccess(sigsegv_handler+0x16c) [0x5597001e0eec]
==14530== 1 /lib/x86_64-linux-gnu/libpthread.so.0(+0x12730) [0x7f0b4be13730]
==14530== 2 goaccess(+0x1fad3) [0x5597001e8ad3]
==14530== 3 goaccess(+0x356da) [0x5597001fe6da]
==14530== 4 goaccess(parse_json_string+0x21d) [0x559700202efd]
==14530== 5 goaccess(pre_process_log+0x282) [0x5597001fefb2]
==14530== 6 goaccess(+0x368c3) [0x5597001ff8c3]
==14530== 7 goaccess(parse_log+0xc7) [0x5597001ffe37]
==14530== 8 goaccess(main+0x29d) [0x5597001dbf8d]
==14530== 9 /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xeb) [0x7f0b4bc6409b]
==14530== 10 goaccess(_start+0x2a) [0x5597001ddb5a]
==14530==
==14530== Please report it by opening an issue on GitHub:
==14530== https://github.com/allinurl/goaccess/issues
```
I use the mst recent version obtained via the "Official GoAccess' Debian/Ubuntu Repository" method as described on https://goaccess.io/download | process | segfault on log format caddy with additional non json fields with the following standard log output from caddy i just anonymized host and ip error http log access handled request request remote addr proto http method get host example com uri favicon ico headers cache control accept referer accept encoding pragma sec ch ua sec ch ua mobile user agent sec fetch site sec fetch mode sec fetch dest accept language tls resumed false version cipher suite proto proto mutual true server name example com common log get favicon ico http duration size status resp headers server i get a seg fault that is sometimes displayed after the violet ncurses footer as s segmentation fault and sometimes as goaccess crashed by sig values at crash point file oneline log line number invalid data piping stack trace goaccess sigsegv handler lib linux gnu libpthread so goaccess goaccess goaccess parse json string goaccess pre process log goaccess goaccess parse log goaccess main lib linux gnu libc so libc start main goaccess start please report it by opening an issue on github i use the mst recent version obtained via the official goaccess debian ubuntu repository method as described on | 1 |
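The crash report above shows why this log trips up a JSON parser: the Caddy console line is a timestamp/level prefix followed by a JSON object, not pure JSON. Splitting the two parts is straightforward; the snippet below uses a shortened version of the reported log line:

```python
import json

def split_caddy_line(line: str):
    """Split a Caddy console-style log line into its text prefix and JSON payload."""
    start = line.index("{")  # first brace marks the start of the JSON object
    prefix = line[:start].rstrip()
    payload = json.loads(line[start:])
    return prefix, payload

# Shortened form of the line from the report above.
sample = ('2021/02/02 12:22:53.394 error http.log.access.log0 handled request '
          '{"request": {"remote_addr": "11.22.33.44:5678", "uri": "/favicon.ico"}, '
          '"size": 0, "status": 404}')
prefix, payload = split_caddy_line(sample)
```

This is only a diagnostic sketch for inspecting such lines, not a description of how goaccess itself parses them.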
31,113 | 11,871,950,892 | IssuesEvent | 2020-03-26 15:07:09 | brave/brave-ios | https://api.github.com/repos/brave/brave-ios | closed | Updating Package-Lock.json | Epic: Security QA/Yes bug release-notes/exclude sec-low security | ### Description:
- `https://github.com/brave/brave-ios/network/alert/package-lock.json/minimist/open`
- Just related to outdated libraries being used across all platforms for millions of applications.
- We updated the packages so we need to make sure stuff still works.
### Steps to Reproduce
1. N/A
**Expected result:**
- Sync should work
**Brave Version:** <!-- Provide full details Eg: v1.4.2(17.09.08.16) -->
- All | True | Updating Package-Lock.json - ### Description:
- `https://github.com/brave/brave-ios/network/alert/package-lock.json/minimist/open`
- Just related to outdated libraries being used across all platforms for millions of applications.
- We updated the packages so we need to make sure stuff still works.
### Steps to Reproduce
1. N/A
**Expected result:**
- Sync should work
**Brave Version:** <!-- Provide full details Eg: v1.4.2(17.09.08.16) -->
- All | non_process | updating package lock json description just related to outdated libraries being used across all platforms for millions of applications we updated the packages so we need to make sure stuff still works steps to reproduce n a expected result sync should work brave version all | 0 |
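For alerts like the minimist one above, the version a transitive dependency is pinned to can be read straight out of `package-lock.json`. A sketch against a tiny inline v1-style lock fragment (the versions shown are illustrative, not Brave's actual ones):

```python
import json

def locked_versions(lock_text: str, name: str):
    """Return every version of `name` pinned anywhere in a v1-style lock file."""
    def walk(deps, found):
        for dep, info in deps.items():
            if dep == name and "version" in info:
                found.append(info["version"])
            # v1 lock files nest transitive pins under "dependencies"
            walk(info.get("dependencies", {}), found)
        return found
    lock = json.loads(lock_text)
    return walk(lock.get("dependencies", {}), [])

sample_lock = json.dumps({
    "dependencies": {
        "mkdirp": {"version": "0.5.1",
                   "dependencies": {"minimist": {"version": "0.0.8"}}},
        "minimist": {"version": "1.2.5"},
    }
})
```

Seeing more than one pinned version for the same package is exactly the situation that makes these lock-file bumps necessary in the first place.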
693,185 | 23,766,105,083 | IssuesEvent | 2022-09-01 12:56:24 | Laravel-Backpack/CRUD | https://api.github.com/repos/Laravel-Backpack/CRUD | closed | [Bug] DataTable row colors are changed and a little broken now | Bug triage Priority: MUST Minor Bug | # Bug report
### What I did
Updated to latest CRUD & PRO.
### What I expected to happen
DataTable look and work the same.
### What happened

Notice:
- the colors have changed (the row that was white before is now dark gray)
- the dark gray rows have a white border in the first column only (that wasn't noticeable before since the row was white)
- on hover, the gray rows stay gray (where before, they turned light purple)
### What I've already tried to fix it
Nothing yet, it's probably a mismatch between the new CSS we published, and our overrides.
### Is it a bug in the latest version of Backpack?
After I run ```composer update backpack/crud``` the bug... is it still there?
Yes
### Backpack, Laravel, PHP, DB version
When I run ```php artisan backpack:version``` the output is:
```
### PHP VERSION:
PHP 8.1.8 (cli) (built: Jul 8 2022 10:46:35) (NTS)
Copyright (c) The PHP Group
Zend Engine v4.1.8, Copyright (c) Zend Technologies
with Zend OPcache v8.1.8, Copyright (c), by Zend Technologies
### LARAVEL VERSION:
v9.27.0@27572f45120fd3977d92651a71d8c711a9aaa790
### BACKPACK VERSION:
5.3.8@48b971b6073ac8fde3b021466b08d6a78283b15c
```
| 1.0 | [Bug] DataTable row colors are changed and a little broken now - # Bug report
### What I did
Updated to latest CRUD & PRO.
### What I expected to happen
DataTable look and work the same.
### What happened

Notice:
- the colors have changed (the row that was white before is now dark gray)
- the dark gray rows have a white border in the first column only (that wasn't noticeable before since the row was white)
- on hover, the gray rows stay gray (where before, they turned light purple)
### What I've already tried to fix it
Nothing yet, it's probably a mismatch between the new CSS we published, and our overrides.
### Is it a bug in the latest version of Backpack?
After I run ```composer update backpack/crud``` the bug... is it still there?
Yes
### Backpack, Laravel, PHP, DB version
When I run ```php artisan backpack:version``` the output is:
```
### PHP VERSION:
PHP 8.1.8 (cli) (built: Jul 8 2022 10:46:35) (NTS)
Copyright (c) The PHP Group
Zend Engine v4.1.8, Copyright (c) Zend Technologies
with Zend OPcache v8.1.8, Copyright (c), by Zend Technologies
### LARAVEL VERSION:
v9.27.0@27572f45120fd3977d92651a71d8c711a9aaa790
### BACKPACK VERSION:
5.3.8@48b971b6073ac8fde3b021466b08d6a78283b15c
```
| non_process | datatable row colors are changed and a little broken now bug report what i did updated to latest crud pro what i expected to happen datatable look and work the same what happened notice the colors have changed the row that was white before is now dark gray the dark gray rows have a white border in the first column only that wasn t noticeable before since the row was white on hover the gray rows stay gray where before they turned light purple what i ve already tried to fix it nothing yet it s probably a mismatch between the new css we published and our overrides is it a bug in the latest version of backpack after i run composer update backpack crud the bug is it still there yes backpack laravel php db version when i run php artisan backpack version the output is php version php cli built jul nts copyright c the php group zend engine copyright c zend technologies with zend opcache copyright c by zend technologies laravel version backpack version | 0 |
304,384 | 9,331,537,220 | IssuesEvent | 2019-03-28 09:59:05 | cs2103-ay1819s2-w09-3/main | https://api.github.com/repos/cs2103-ay1819s2-w09-3/main | closed | Add a Statistics Mode | priority.Medium type.Epic | Statistics Mode will provide an overview of the restaurant's revenue and dish popularity. It will change the scene of the browser panel to display either a monthly statistical card or and table of the overall statistics. Commands related to changing the mode and deriving specific statistics will be valid under this mode. | 1.0 | Add a Statistics Mode - Statistics Mode will provide an overview of the restaurant's revenue and dish popularity. It will change the scene of the browser panel to display either a monthly statistical card or and table of the overall statistics. Commands related to changing the mode and deriving specific statistics will be valid under this mode. | non_process | add a statistics mode statistics mode will provide an overview of the restaurant s revenue and dish popularity it will change the scene of the browser panel to display either a monthly statistical card or and table of the overall statistics commands related to changing the mode and deriving specific statistics will be valid under this mode | 0 |
371,536 | 25,955,227,133 | IssuesEvent | 2022-12-18 05:37:11 | Zaba505/zaba505.github.io | https://api.github.com/repos/Zaba505/zaba505.github.io | closed | story(arsp): add labels to page | documentation enhancement | ### Description
I want to add labels to the markdown page for arsp to make filtering/searching better
### Acceptance Criteria
- [ ] add `wip` label
- [ ] add `math` label
- [ ] add `research` label
### Related Issues
_No response_ | 1.0 | story(arsp): add labels to page - ### Description
I want to add labels to the markdown page for arsp to make filtering/searching better
### Acceptance Criteria
- [ ] add `wip` label
- [ ] add `math` label
- [ ] add `research` label
### Related Issues
_No response_ | non_process | story arsp add labels to page description i want to add labels to the markdown page for arsp to make filtering searching better acceptance criteria add wip label add math label add research label related issues no response | 0 |
18,769 | 24,674,296,521 | IssuesEvent | 2022-10-18 15:47:03 | keras-team/keras-cv | https://api.github.com/repos/keras-team/keras-cv | closed | [Design] Video support for augmentation layers | preprocessing | Some major considerations:
- augmentations like jitter, shear, etc. should be consistent (or slowly change) throughout videos.
- performance
- consistency with our ecosystem | 1.0 | [Design] Video support for augmentation layers - Some major considerations:
- augmentations like jitter, shear, etc. should be consistent (or slowly change) throughout videos.
- performance
- consistency with our ecosystem | process | video support for augmentation layers some major considerations augmentations like jitter shear etc should be consistent or slowly change throughout videos performance consistency with our ecosystem | 1 |
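The first consideration above — keeping a random augmentation consistent across the frames of one video — usually means sampling the random parameter once per clip and applying it to every frame, instead of resampling per frame. A framework-free sketch with frames as flat lists of pixel values:

```python
import random

def brightness_jitter_clip(frames, rng, max_delta=0.2):
    """Apply one brightness shift, sampled once, to every frame of a clip."""
    delta = rng.uniform(-max_delta, max_delta)  # sampled once per clip
    return [[pixel + delta for pixel in frame] for frame in frames]
```

Resampling `delta` inside the frame loop is the bug this design point guards against: it produces visible flicker between consecutive frames.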
109,338 | 23,749,362,263 | IssuesEvent | 2022-08-31 19:02:12 | unicode-org/icu4x | https://api.github.com/repos/unicode-org/icu4x | closed | Fix typo: to_code_point_invesion_list | T-bug C-unicode S-tiny | There is a function `to_code_point_invesion_list`. Fix the typo. | 1.0 | Fix typo: to_code_point_invesion_list - There is a function `to_code_point_invesion_list`. Fix the typo. | non_process | fix typo to code point invesion list there is a function to code point invesion list fix the typo | 0 |
43,473 | 11,233,140,908 | IssuesEvent | 2020-01-09 00:07:16 | apache/incubator-mxnet | https://api.github.com/repos/apache/incubator-mxnet | closed | Fail to build mxnet from source | Bug Build CMake | ## Description
Following the steps at https://mxnet.apache.org/get_started/ubuntu_setup
I got the following error:
### Error Message
```
OSError: /usr/lib/liblapack.so.3: undefined symbol: gotoblas
```
## To Reproduce
```
rm -rf build
mkdir -p build && cd build
cmake -GNinja \
-DUSE_CUDA=OFF \
-DUSE_MKL_IF_AVAILABLE=ON \
-DCMAKE_CUDA_COMPILER_LAUNCHER=ccache \
-DCMAKE_C_COMPILER_LAUNCHER=ccache \
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
-DCMAKE_BUILD_TYPE=Release \
..
ninja
```
then
```
python -c "import mxnet"
```
## Environment
We recommend using our script for collecting the diagnostic information. Run the following command and paste the outputs below:
```
----------Python Info----------
Version : 3.6.6
Compiler : GCC 7.2.0
Build : ('default', 'Jun 28 2018 17:14:51')
Arch : ('64bit', '')
------------Pip Info-----------
Version : 19.3.1
Directory : /home/ubuntu/.virtualenvs/mxnet/lib/python3.6/site-packages/pip
----------MXNet Info-----------
Hashtag not found. Not installed from pre-built package.
----------System Info----------
Platform : Linux-4.4.0-1094-aws-x86_64-with-debian-stretch-sid
system : Linux
node : ip-172-31-15-220
release : 4.4.0-1094-aws
version : #105-Ubuntu SMP Mon Sep 16 13:08:01 UTC 2019
----------Hardware Info----------
machine : x86_64
processor : x86_64
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
Stepping: 1
CPU MHz: 2699.984
CPU max MHz: 3000.0000
CPU min MHz: 1200.0000
BogoMIPS: 4600.09
Hypervisor vendor: Xen
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 46080K
NUMA node0 CPU(s): 0-31
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx xsaveopt
----------Network Test----------
Setting timeout: 10
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0131 sec, LOAD: 0.4585 sec.
Timing for GluonNLP GitHub: https://github.com/dmlc/gluon-nlp, DNS: 0.0005 sec, LOAD: 0.4164 sec.
Timing for GluonNLP: http://gluon-nlp.mxnet.io, DNS: 0.2340 sec, LOAD: 0.3821 sec.
Timing for D2L: http://d2l.ai, DNS: 0.1774 sec, LOAD: 0.1042 sec.
Timing for D2L (zh-cn): http://zh.d2l.ai, DNS: 0.0059 sec, LOAD: 0.2229 sec.
Timing for FashionMNIST: https://repo.mxnet.io/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0982 sec, LOAD: 0.1264 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0173 sec, LOAD: 0.3984 sec.
Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0106 sec, LOAD: 0.0637 sec.
```
| 1.0 | Fail to build mxnet from source - ## Description
Following the steps at https://mxnet.apache.org/get_started/ubuntu_setup
I got the following error:
### Error Message
```
OSError: /usr/lib/liblapack.so.3: undefined symbol: gotoblas
```
## To Reproduce
```
rm -rf build
mkdir -p build && cd build
cmake -GNinja \
-DUSE_CUDA=OFF \
-DUSE_MKL_IF_AVAILABLE=ON \
-DCMAKE_CUDA_COMPILER_LAUNCHER=ccache \
-DCMAKE_C_COMPILER_LAUNCHER=ccache \
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
-DCMAKE_BUILD_TYPE=Release \
..
ninja
```
then
```
python -c "import mxnet"
```
## Environment
We recommend using our script for collecting the diagnostic information. Run the following command and paste the outputs below:
```
----------Python Info----------
Version : 3.6.6
Compiler : GCC 7.2.0
Build : ('default', 'Jun 28 2018 17:14:51')
Arch : ('64bit', '')
------------Pip Info-----------
Version : 19.3.1
Directory : /home/ubuntu/.virtualenvs/mxnet/lib/python3.6/site-packages/pip
----------MXNet Info-----------
Hashtag not found. Not installed from pre-built package.
----------System Info----------
Platform : Linux-4.4.0-1094-aws-x86_64-with-debian-stretch-sid
system : Linux
node : ip-172-31-15-220
release : 4.4.0-1094-aws
version : #105-Ubuntu SMP Mon Sep 16 13:08:01 UTC 2019
----------Hardware Info----------
machine : x86_64
processor : x86_64
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
Stepping: 1
CPU MHz: 2699.984
CPU max MHz: 3000.0000
CPU min MHz: 1200.0000
BogoMIPS: 4600.09
Hypervisor vendor: Xen
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 46080K
NUMA node0 CPU(s): 0-31
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx xsaveopt
----------Network Test----------
Setting timeout: 10
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0131 sec, LOAD: 0.4585 sec.
Timing for GluonNLP GitHub: https://github.com/dmlc/gluon-nlp, DNS: 0.0005 sec, LOAD: 0.4164 sec.
Timing for GluonNLP: http://gluon-nlp.mxnet.io, DNS: 0.2340 sec, LOAD: 0.3821 sec.
Timing for D2L: http://d2l.ai, DNS: 0.1774 sec, LOAD: 0.1042 sec.
Timing for D2L (zh-cn): http://zh.d2l.ai, DNS: 0.0059 sec, LOAD: 0.2229 sec.
Timing for FashionMNIST: https://repo.mxnet.io/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0982 sec, LOAD: 0.1264 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0173 sec, LOAD: 0.3984 sec.
Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0106 sec, LOAD: 0.0637 sec.
```
| non_process | fail to build mxnet from source description following the steps at i got the following error error message oserror usr lib liblapack so undefined symbol gotoblas to reproduce rm rf build mkdir p build cd build cmake gninja duse cuda off duse mkl if available on dcmake cuda compiler launcher ccache dcmake c compiler launcher ccache dcmake cxx compiler launcher ccache dcmake build type release ninja then python c import mxnet environment we recommend using our script for collecting the diagnositc information run the following command and paste the outputs below python info version compiler gcc build default jun arch pip info version directory home ubuntu virtualenvs mxnet lib site packages pip mxnet info hashtag not found not installed from pre built package system info platform linux aws with debian stretch sid system linux node ip release aws version ubuntu smp mon sep utc hardware info machine processor architecture cpu op mode s bit bit byte order little endian cpu s on line cpu s list thread s per core core s per socket socket s numa node s vendor id genuineintel cpu family model model name intel r xeon r cpu stepping cpu mhz cpu max mhz cpu min mhz bogomips hypervisor vendor xen virtualization type full cache cache cache cache numa cpu s flags fpu vme de pse tsc msr pae mce apic sep mtrr pge mca cmov pat clflush mmx fxsr sse ht syscall nx rdtscp lm constant tsc rep good nopl xtopology nonstop tsc aperfmperf pni pclmulqdq fma pcid movbe popcnt tsc deadline timer aes xsave avx rdrand hypervisor lahf lm abm invpcid single pti fsgsbase hle smep erms invpcid rtm rdseed adx xsaveopt network test setting timeout timing for mxnet dns sec load sec timing for gluonnlp github dns sec load sec timing for gluonnlp dns sec load sec timing for dns sec load sec timing for zh cn dns sec load sec timing for fashionmnist dns sec load sec timing for pypi dns sec load sec timing for conda dns sec load sec | 0 |
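The `undefined symbol: gotoblas` failure above comes from `dlopen` on `liblapack.so.3`, so it can be reproduced without importing MXNet at all by loading the library directly. A small probe (the path is the one from the report; on a healthy system it loads, on the broken one it returns the same loader error):

```python
import ctypes

def try_load(path: str):
    """dlopen a shared library; return None on success, else the loader's error text."""
    try:
        ctypes.CDLL(path)
        return None
    except OSError as exc:
        return str(exc)

# e.g. try_load("/usr/lib/liblapack.so.3")
#   -> None on a healthy system, or a message containing "undefined symbol: gotoblas"
```

An error like this usually points at mismatched BLAS/LAPACK alternatives on the system rather than at the MXNet build itself.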
9,571 | 12,521,697,370 | IssuesEvent | 2020-06-03 17:50:48 | googleapis/synthtool | https://api.github.com/repos/googleapis/synthtool | closed | Node.js, add environment variable file to .kokoro | type: process | We should figure out a reasonable approach to managing environment variables, such that new environment variables can be added as needed [see](https://github.com/googleapis/nodejs-secret-manager/pull/34).
The [nodejs-docs-samples use a common build.sh file](
https://github.com/GoogleCloudPlatform/nodejs-docs-samples/blob/master/.kokoro/build.sh), I think we could do something similar, perhaps it's enough to simply allow folks to start adding some additional environment to our sample test shell file.
CC: @grayside, @sethvargo | 1.0 | Node.js, add environment variable file to .kokoro - We should figure out a reasonable approach to managing environment variables, such that new environment variables can be added as needed [see](https://github.com/googleapis/nodejs-secret-manager/pull/34).
The [nodejs-docs-samples use a common build.sh file](
https://github.com/GoogleCloudPlatform/nodejs-docs-samples/blob/master/.kokoro/build.sh), I think we could do something similar, perhaps it's enough to simply allow folks to start adding some additional environment to our sample test shell file.
CC: @grayside, @sethvargo | process | node js add environment variable file to kokoro we should figure out a reasonable approach to managing environment variables such that new environment variables can be added as needed the i think we could do something similar perhaps it s enough to simply allow folks to start adding some additional environment to our sample test shell file cc grayside sethvargo | 1 |
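One possible shape for the per-repo environment file floated above is a plain `KEY=VALUE` file that the sample-test shell script (or anything else) can read. A sketch of the parsing side; the file format itself is an assumption for illustration, not something synthtool defines:

```python
def parse_env_file(text: str) -> dict:
    """Parse KEY=VALUE lines, ignoring blank lines and '#' comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env
```

Keeping the file declarative like this lets repos add new variables (as in the secret-manager PR referenced above) without touching the shared build script.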
647,686 | 21,133,953,278 | IssuesEvent | 2022-04-06 03:30:03 | lowRISC/opentitan | https://api.github.com/repos/lowRISC/opentitan | closed | [cleanup] Add assertion check for all output enables which are tied to 1 | Priority:P2 Type:Cleanup | This is an item from Sysrst_Ctrl review.
We can consider updating the checklist as well.
In this [ASSERT_KNOWN_ADDED](https://github.com/lowRISC/opentitan/blob/master/doc/project/checklist.md#assert_known_added), if it's tied to a fixed value, instead of checking that it's a known value, we can check it against the actual value.
| 1.0 | [cleanup] Add assertion check for all output enables which are tied to 1 - This is an item from Sysrst_Ctrl review.
We can consider updating the checklist as well.
In this [ASSERT_KNOWN_ADDED](https://github.com/lowRISC/opentitan/blob/master/doc/project/checklist.md#assert_known_added), if it's tied to a fixed value, instead of checking that it's a known value, we can check it against the actual value.
| non_process | add assertion check for all output enables which are tied to this is an item from sysrst ctrl review we can consider updating the checklist as well in this if it s tied to a fixed value instead of checking it s a known value we can check it with the actual value | 0 |
19,865 | 26,277,601,970 | IssuesEvent | 2023-01-07 00:46:35 | AssetRipper/AssetRipper | https://api.github.com/repos/AssetRipper/AssetRipper | opened | Child State Machines | enhancement animation processing | ### Describe the new feature or enhancement
Currently, complex animator controllers get exported like this:

Ideally, they should be more like this:

Each of the hexagons represents a child state machine. Here are some more images of how the output is supposed to look. All these images are for the same `AnimatorController` asset.




The release `AnimatorController` asset structure is complex and sparsely documented, but I investigated some of it a few months ago. From what I could ascertain, the child state machine nodes can be determined only with string parsing. I will edit this with any information I uncover in future investigations until a solution is implemented. | 1.0 | Child State Machines - ### Describe the new feature or enhancement
Currently, complex animator controllers get exported like this:

Ideally, they should be more like this:

Each of the hexagons represents a child state machine. Here are some more images of how the output is supposed to look. All these images are for the same `AnimatorController` asset.




The release `AnimatorController` asset structure is complex and sparsely documented, but I investigated some of it a few months ago. From what I could ascertain, the child state machine nodes can determined only with string parsing. I will edit this with any information I uncover in future investigations until a solution is implemented. | process | child state machines describe the new feature or enhancement currently complex animator controllers get exported like this ideally they should be more like this each of the hexagons represents a child state machine here are some more images of how the output is supposed to look all these images are for the same animatorcontroller asset the release animatorcontroller asset structure is complex and sparsely documented but i investigated some of it a few months ago from what i could ascertain the child state machine nodes can determined only with string parsing i will edit this with any information i uncover in future investigations until a solution is implemented | 1 |
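If the child-machine hierarchy really is only recoverable from strings, the states' fully qualified names presumably encode it as dot-separated path segments — a hypothetical format chosen for illustration, since the release asset layout is undocumented. A sketch of pulling the child-machine chain out of such a path:

```python
def split_state_path(path: str):
    """Split 'Machine.Child.State' into (list of machine names, state name).

    The dot-separated format is an assumed illustration, not Unity's
    documented serialization.
    """
    *machines, state = path.split(".")
    return machines, state
```

With names parsed this way, each distinct machine prefix would become one of the hexagonal child-state-machine nodes in the desired output shown above.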
15,698 | 19,848,219,666 | IssuesEvent | 2022-01-21 09:20:08 | ooi-data/CE07SHSM-MFD35-05-PCO2WB000-recovered_host-pco2w_abc_dcl_instrument_blank_recovered | https://api.github.com/repos/ooi-data/CE07SHSM-MFD35-05-PCO2WB000-recovered_host-pco2w_abc_dcl_instrument_blank_recovered | opened | 🛑 Processing failed: ValueError | process | ## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T09:20:07.633700.
## Details
Flow name: `CE07SHSM-MFD35-05-PCO2WB000-recovered_host-pco2w_abc_dcl_instrument_blank_recovered`
Task name: `processing_task`
Error type: `ValueError`
Error message: not enough values to unpack (expected 3, got 0)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values
return _as_array_or_item(self._data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item
data = np.asarray(data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__
x = self.compute()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute
results = schedule(dsk, keys, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get
results = get_async(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async
raise_exception(exc, tb)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise
raise exc
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task
result = _execute_task(task, data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter
c = np.asarray(c)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__
self._ensure_cached()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached
self.array = NumpyIndexingAdapter(np.asarray(self.array))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 70, in __array__
return self.func(self.array)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 137, in _apply_mask
data = np.asarray(data, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__
return array[key.tuple]
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__
return self.get_basic_selection(selection, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection
return self._get_basic_selection_nd(selection=selection, out=out,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd
return self._get_selection(indexer=indexer, out=out, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection
lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
ValueError: not enough values to unpack (expected 3, got 0)
```
</details>
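The final frame points at `zip(*indexer)` in `zarr.core`. A minimal sketch (not taken from the pipeline itself) of how unpacking a zipped empty indexer produces exactly this message — when the selection maps to zero chunks, `zip()` yields nothing and the three-name unpack fails:

```python
def get_selection(indexer):
    # Equivalent of the unpack in zarr.core._get_selection: an indexer
    # that yields no (chunk_coords, chunk_selection, out_selection)
    # triples makes zip(*indexer) an empty iterator.
    lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
    return lchunk_coords, lchunk_selection, lout_selection

try:
    get_selection([])  # a zero-length/empty selection -> no chunks
except ValueError as e:
    print(e)  # not enough values to unpack (expected 3, got 0)
```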
| 1.0 | 🛑 Processing failed: ValueError | process | 1 |
186,412 | 21,933,200,956 | IssuesEvent | 2022-05-23 11:38:05 | onokatio/static.katio.net | https://api.github.com/repos/onokatio/static.katio.net | closed | WS-2019-0331 (Medium) detected in handlebars-4.0.11.js | security vulnerability | ## WS-2019-0331 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.0.11.js</b></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/handlebars.js/4.0.11/handlebars.js">https://cdnjs.cloudflare.com/ajax/libs/handlebars.js/4.0.11/handlebars.js</a></p>
<p>Path to dependency file: static.katio.net/node_modules/yaml-front-matter/docs/index.html</p>
<p>Path to vulnerable library: static.katio.net/node_modules/yaml-front-matter/docs/js/handlebars.js</p>
<p>
Dependency Hierarchy:
- :x: **handlebars-4.0.11.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/onokatio/static.katio.net/commit/300a86136076ce4dac47830a9d3686f72c18b97b">300a86136076ce4dac47830a9d3686f72c18b97b</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Arbitrary Code Execution vulnerability found in handlebars before 4.5.2. The lookup helper fails to validate templates, so attackers may submit templates that execute arbitrary JavaScript on the system.
<p>Publish Date: 2019-11-13
<p>URL: <a href=https://github.com/wycats/handlebars.js/commit/d54137810a49939fd2ad01a91a34e182ece4528e>WS-2019-0331</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1316">https://www.npmjs.com/advisories/1316</a></p>
<p>Release Date: 2019-12-05</p>
<p>Fix Resolution: handlebars - 4.5.2</p>
</p>
</details>
<p></p>
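For illustration only (the `is_patched` helper below is hypothetical, not part of the advisory), the suggested fix resolution amounts to a numeric comparison of dotted version strings against 4.5.2:

```python
def is_patched(version, fixed="4.5.2"):
    # Compare dotted versions numerically, not lexically:
    # "4.0.11" < "4.5.2" even though "11" > "2" as a string.
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(version) >= parse(fixed)

print(is_patched("4.0.11"))  # False -> vulnerable
print(is_patched("4.5.2"))   # True  -> contains the fix
```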
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2019-0331 (Medium) detected in handlebars-4.0.11.js | non_process | 0 |
17,891 | 10,850,424,617 | IssuesEvent | 2019-11-13 08:48:20 | Azure/azure-cli-extensions | https://api.github.com/repos/Azure/azure-cli-extensions | closed | Fail to login using az devops login and a PAT | Service Attention extension/devops | ### Extension name (the extension in question)
- az devops
### Description of issue (in as much detail as possible)
- Logging in using `az devops login` and a PAT is unsuccessful.
As a result, subsequent steps fail (for example, `az artifacts universal download`)
Attempted installation via the following installation modes:
- `az extension add devops`
- `az extension add -y --source azure_devops-0.14.0-py2.py3-none-any.whl`
#### Debug logs for `az devops login`
```
Command arguments: ['devops', 'login', '--debug']
Event: Cli.PreExecute []
Event: CommandParser.OnGlobalArgumentsCreate [<function CLILogging.on_global_arguments at 0x7f240e91c1e0>, <function OutputProducer.on_global_arguments at 0x7f240e493d90>, <function CLIQuery.on_global_arguments at 0x7f240e234e18>]
Event: CommandInvoker.OnPreCommandTableCreate []
Installed command modules ['acr', 'acs', 'advisor', 'ams', 'apim', 'appconfig', 'appservice', 'backup', 'batch', 'batchai', 'billing', 'botservice', 'cdn', 'cloud', 'cognitiveservices', 'configure', 'consumption', 'container', 'cosmosdb', 'deploymentmanager', 'dla', 'dls', 'dms', 'eventgrid', 'eventhubs', 'extension', 'feedback', 'find', 'hdinsight', 'interactive', 'iot', 'iotcentral', 'keyvault', 'kusto', 'lab', 'managedservices', 'maps', 'monitor', 'natgateway', 'netappfiles', 'network', 'policyinsights', 'privatedns', 'profile', 'rdbms', 'redis', 'relay', 'reservations', 'resource', 'role', 'search', 'security', 'servicebus', 'servicefabric', 'signalr', 'sql', 'sqlvm', 'storage', 'vm']
Loaded module 'acr' in 0.006 seconds.
Loaded module 'acs' in 0.016 seconds.
Loaded module 'advisor' in 0.001 seconds.
Event: CommandLoader.OnLoadCommandTable []
Loaded module 'ams' in 0.007 seconds.
Loaded module 'apim' in 0.002 seconds.
Loaded module 'appconfig' in 0.003 seconds.
Loaded module 'appservice' in 0.009 seconds.
Loaded module 'backup' in 0.003 seconds.
Event: CommandLoader.OnLoadCommandTable []
Loaded module 'batch' in 0.008 seconds.
Loaded module 'batchai' in 0.003 seconds.
Loaded module 'billing' in 0.001 seconds.
Loaded module 'botservice' in 0.003 seconds.
Event: CommandLoader.OnLoadCommandTable []
Loaded module 'cdn' in 0.002 seconds.
Loaded module 'cloud' in 0.001 seconds.
Loaded module 'cognitiveservices' in 0.001 seconds.
Loaded module 'configure' in 0.001 seconds.
Loaded module 'consumption' in 0.002 seconds.
Loaded module 'container' in 0.002 seconds.
Loaded module 'cosmosdb' in 0.007 seconds.
Loaded module 'deploymentmanager' in 0.003 seconds.
Loaded module 'dla' in 0.004 seconds.
Loaded module 'dls' in 0.003 seconds.
Loaded module 'dms' in 0.002 seconds.
Loaded module 'eventgrid' in 0.002 seconds.
Loaded module 'eventhubs' in 0.003 seconds.
Loaded module 'extension' in 0.001 seconds.
Loaded module 'feedback' in 0.001 seconds.
Loaded module 'find' in 0.001 seconds.
Loaded module 'hdinsight' in 0.002 seconds.
Loaded module 'interactive' in 0.000 seconds.
Loaded module 'iot' in 0.004 seconds.
Loaded module 'iotcentral' in 0.001 seconds.
Loaded module 'keyvault' in 0.006 seconds.
Loaded module 'kusto' in 0.002 seconds.
Loaded module 'lab' in 0.003 seconds.
Loaded module 'managedservices' in 0.001 seconds.
Loaded module 'maps' in 0.002 seconds.
Loaded module 'monitor' in 0.005 seconds.
Loaded module 'natgateway' in 0.001 seconds.
Event: CommandLoader.OnLoadCommandTable []
Loaded module 'netappfiles' in 0.003 seconds.
Loaded module 'network' in 0.031 seconds.
Loaded module 'policyinsights' in 0.002 seconds.
Loaded module 'privatedns' in 0.004 seconds.
Loaded module 'profile' in 0.001 seconds.
Loaded module 'rdbms' in 0.005 seconds.
Loaded module 'redis' in 0.002 seconds.
Loaded module 'relay' in 0.003 seconds.
Loaded module 'reservations' in 0.001 seconds.
Loaded module 'resource' in 0.007 seconds.
Loaded module 'role' in 0.005 seconds.
Loaded module 'search' in 0.002 seconds.
Loaded module 'security' in 0.002 seconds.
Loaded module 'servicebus' in 0.005 seconds.
Loaded module 'servicefabric' in 0.002 seconds.
Loaded module 'signalr' in 0.001 seconds.
Loaded module 'sql' in 0.008 seconds.
Loaded module 'sqlvm' in 0.002 seconds.
Event: CommandLoader.OnLoadCommandTable []
Loaded module 'storage' in 0.031 seconds.
Loaded module 'vm' in 0.014 seconds.
Loaded all modules in 0.257 seconds. (note: there's always an overhead with the first module loaded)
Extensions directory: '/root/.azure/cliextensions'
Found 1 extensions: ['azure-devops']
Extensions directory: '/root/.azure/cliextensions'
Extension compatibility result: is_compatible=True cli_core_version=2.0.76 min_required=2.0.69 max_required=None
Extensions directory: '/root/.azure/cliextensions'
Loaded extension 'azure-devops' in 0.022 seconds.
Event: CommandInvoker.OnPreCommandTableTruncate [<function AzCliLogging.init_command_file_logging at 0x7f240e1bf1e0>]
az_command_data_logger : command args: devops login --debug
metadata file logging enabled - writing logs to '/root/.azure/commands'.
Event: CommandInvoker.OnPreArgumentLoad [<function register_global_subscription_argument.<locals>.add_subscription_parameter at 0x7f240e1d7f28>]
Event: CommandInvoker.OnPostArgumentLoad []
Event: CommandInvoker.OnPostCommandTableCreate [<function register_ids_argument.<locals>.add_ids_arguments at 0x7f240e194a60>, <function register_cache_arguments.<locals>.add_cache_arguments at 0x7f240e194b70>]
Event: CommandInvoker.OnCommandTableLoaded []
Event: CommandInvoker.OnPreParseArgs [<function _documentdb_deprecate at 0x7f240c03c1e0>]
Event: CommandInvoker.OnPostParseArgs [<function OutputProducer.handle_output_argument at 0x7f240e493e18>, <function CLIQuery.handle_query_parameter at 0x7f240e234ea0>, <function register_ids_argument.<locals>.parse_ids_arguments at 0x7f240e194ae8>, <function handler at 0x7f240bf09b70>, <function DevCommandsLoader.post_parse_args at 0x7f240b715620>]
Extensions directory: '/root/.azure/cliextensions'
Extensions directory: '/root/.azure/cliextensions'
az_command_data_logger : extension name: azure-devops
az_command_data_logger : extension version: 0.14.0
No tty available.
Getting PAT token in non-interactive mode.
Event: CommandInvoker.OnTransformResult [<function _resource_group_transform at 0x7f240e18b488>, <function _x509_from_base64_to_hex_transform at 0x7f240e18b510>]
Event: CommandInvoker.OnFilterResult []
Event: Cli.PostExecute []
az_command_data_logger : exit code: 0
Suppress exception There are no active accounts.
Suppress exception Please run 'az login' to setup account.
telemetry.save : Save telemetry record of length 2372 in cache
telemetry.check : Negative: The /root/.azure/telemetry.txt was modified at 2019-11-11 12:04:11.980324, which in less than 600.000000 s
command ran in 0.347 seconds.
```
### Additional System information
**OS:** CentOS Linux 7.7.1908
#### Contents of `az --version`
```
azure-cli 2.0.76
command-modules-nspkg 2.0.3
core 2.0.76
nspkg 3.0.4
telemetry 1.0.4
Extensions:
azure-devops 0.14.0
Python (Linux) 3.6.8 (default, Aug 7 2019, 17:28:10)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
-----
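For context, a hedged sketch of how a PAT is presented to Azure DevOps over REST — as an HTTP Basic credential with an empty user name (`pat_auth_header` is a hypothetical helper, not part of the CLI source):

```python
import base64

def pat_auth_header(pat):
    # Azure DevOps REST accepts a PAT as Basic auth with an empty
    # user name, i.e. base64(":" + PAT) in the Authorization header.
    token = base64.b64encode(f":{pat}".encode("ascii")).decode("ascii")
    return {"Authorization": f"Basic {token}"}

print(pat_auth_header("dummy-pat")["Authorization"])  # Basic OmR1bW15LXBhdA==
```

The equivalent curl form is `curl -u :$PAT https://dev.azure.com/{organization}/_apis/projects`; for non-interactive CLI runs, the azure-devops extension documents reading the PAT from the `AZURE_DEVOPS_EXT_PAT` environment variable or piping it to `az devops login` on stdin.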
| 1.0 | Fail to login using az devops login and a PAT | non_process |
are no active accounts suppress exception please run az login to setup account telemetry save save telemetry record of length in cache telemetry check negative the root azure telemetry txt was modified at which in less than s command ran in seconds additional system information os centos linux contents of az help azure cli command modules nspkg core nspkg telemetry extensions azure devops python linux default aug | 0 |
17,331 | 23,147,245,252 | IssuesEvent | 2022-07-29 03:09:22 | MicrosoftDocs/windows-uwp | https://api.github.com/repos/MicrosoftDocs/windows-uwp | closed | appUriHandlers and ShellExecuteEx | uwp/prod processes-and-threading/tech Pri1 | Hi there, I was wondering if someone could clarify what this section is trying to say, referring to setting up an http(s) app URI handler:
> This feature works whenever your app is a UWP app launched with [LaunchUriAsync](https://docs.microsoft.com/en-us/uwp/api/windows.system.launcher.launchuriasync) or a Windows desktop app launched with [ShellExecuteEx](https://docs.microsoft.com/en-us/windows/desktop/api/shellapi/nf-shellapi-shellexecuteexa). If the URL corresponds to a registered App URI handler, the app will be launched instead of the browser.
Is this saying that both UWP and Windows Desktop apps can be launched in this way?
If so, is there any documentation on how a Windows desktop app can register for one of these schemes? I've been trying to build an app this way using [Tauri](https://github.com/tauri-apps/tauri) but am very stuck.
Thanks,
Adam
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: c7e251da-3cab-4aa0-2915-21189cf71bef
* Version Independent ID: b244b311-a0e5-185d-df82-1dadf3fe6416
* Content: [Enable apps for websites using app URI handlers - UWP applications](https://docs.microsoft.com/en-us/windows/uwp/launch-resume/web-to-app-linking)
* Content Source: [windows-apps-src/launch-resume/web-to-app-linking.md](https://github.com/MicrosoftDocs/windows-uwp/blob/docs/windows-apps-src/launch-resume/web-to-app-linking.md)
* Product: **uwp**
* Technology: **processes-and-threading**
* GitHub Login: @alvinashcraft
* Microsoft Alias: **aashcraft** | 1.0 | appUriHandlers and ShellExecuteEx - Hi there, I was wondering if someone could clarify what this section is trying to say, referring to setting up an http(s) app URI handler:
> This feature works whenever your app is a UWP app launched with [LaunchUriAsync](https://docs.microsoft.com/en-us/uwp/api/windows.system.launcher.launchuriasync) or a Windows desktop app launched with [ShellExecuteEx](https://docs.microsoft.com/en-us/windows/desktop/api/shellapi/nf-shellapi-shellexecuteexa). If the URL corresponds to a registered App URI handler, the app will be launched instead of the browser.
Is this saying that both UWP and Windows Desktop apps can be launched in this way?
If so, is there any documentation on how a Windows desktop app can register for one of these schemes? I've been trying to build an app this way using [Tauri](https://github.com/tauri-apps/tauri) but am very stuck.
Thanks,
Adam
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: c7e251da-3cab-4aa0-2915-21189cf71bef
* Version Independent ID: b244b311-a0e5-185d-df82-1dadf3fe6416
* Content: [Enable apps for websites using app URI handlers - UWP applications](https://docs.microsoft.com/en-us/windows/uwp/launch-resume/web-to-app-linking)
* Content Source: [windows-apps-src/launch-resume/web-to-app-linking.md](https://github.com/MicrosoftDocs/windows-uwp/blob/docs/windows-apps-src/launch-resume/web-to-app-linking.md)
* Product: **uwp**
* Technology: **processes-and-threading**
* GitHub Login: @alvinashcraft
* Microsoft Alias: **aashcraft** | process | appurihandlers and shellexecuteex hi there i was wondering if someone could clarify what this section is trying to say referring to setting up an http s app uri handler this feature works whenever your app is a uwp app launched with or a windows desktop app launched with if the url corresponds to a registered app uri handler the app will be launched instead of the browser is this saying that both uwp and windows desktop apps can be launched in this way if so is there any documentation on how a windows desktop app can register for one of these schemes i ve been trying to build an app this way using but am very stuck thanks adam document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product uwp technology processes and threading github login alvinashcraft microsoft alias aashcraft | 1 |
13,443 | 15,882,621,551 | IssuesEvent | 2021-04-09 16:15:42 | plazi/arcadia-project | https://api.github.com/repos/plazi/arcadia-project | closed | processing: The head anatomy of Protanilla lini (Hymenoptera: Formicidae: Lep | processing input | can you please process this article:
https://myrmecologicalnews.org/cms/index.php?option=com_download&view=download&filename=volume31/mn31_85-114_printable.pdf&format=raw
not urgent (within a week)
Level: subsection, upload to BLR | 1.0 | processing: The head anatomy of Protanilla lini (Hymenoptera: Formicidae: Lep - can you please process this article:
https://myrmecologicalnews.org/cms/index.php?option=com_download&view=download&filename=volume31/mn31_85-114_printable.pdf&format=raw
not urgent (within a week)
Level: subsection, upload to BLR | process | processing the head anatomy of protanilla lini hymenoptera formicidae lep can you please process this article not urgent within a week level subsection upload to blr | 1 |
19,517 | 25,828,899,204 | IssuesEvent | 2022-12-12 14:49:36 | microsoft/vscode | https://api.github.com/repos/microsoft/vscode | closed | "Create New Integrated Terminal (Local)" does not apply the correct path to the shell executable | bug remote terminal-process | <!-- Please search existing issues to avoid creating duplicates, and review our troubleshooting tips: https://code.visualstudio.com/docs/remote/troubleshooting -->
<!-- Please attach logs to help us diagnose your issue. Learn more here: https://code.visualstudio.com/docs/remote/troubleshooting#_reporting-issues -->
<!-- Also please test using the latest insiders build to make sure your issue has not already been fixed: https://code.visualstudio.com/insiders/ -->
- VSCode Version: 1.66.2
- Local OS Version: Darwin arm64 21.4.0 (macOS Monterey 12.3.1)
- Remote OS Version: Arch Linux
- Remote Extension/Connection Type: SSH
Steps to Reproduce:
1. From a local MacOS VS Code installation: `F1` → `Remote SSH: Connect to Host...`
2. From the Remote SSH Linux environment: `F1` → `Terminal: Create New Integrated Terminal (Local)`
Result: `The terminal process failed to launch: Path to shell executable "/usr/bin/bash" does not exist.`
(Which, obviously, is true: `/usr/bin/bash/` doesn't exist in macOS: the default shell in macOS has been zsh for a few years or so; which happens to not mesh at all with my workflow, so my local `PATH` includes `/opt/homebrew/bin/bash` instead.)
Anyway, [just looking at the source code for `CreateNewLocalTerminalAction`](https://github.com/microsoft/vscode/blob/main/src/vs/workbench/contrib/terminal/electron-sandbox/terminalRemote.ts#L53) it seems clear that differences between executable locations between remote and local hosts currently isn't taken into account at all is, is it?
Not passing a `shellPath` argument to `const instance = await this._terminalService.createTerminal({ cwd });` looks dubious to me. I suspect that causes the default shell path to be derived from the remote environment rather than the local one, or am I wrong about that? | 1.0 | "Create New Integrated Terminal (Local)" does not apply the correct path to the shell executable - <!-- Please search existing issues to avoid creating duplicates, and review our troubleshooting tips: https://code.visualstudio.com/docs/remote/troubleshooting -->
<!-- Please attach logs to help us diagnose your issue. Learn more here: https://code.visualstudio.com/docs/remote/troubleshooting#_reporting-issues -->
<!-- Also please test using the latest insiders build to make sure your issue has not already been fixed: https://code.visualstudio.com/insiders/ -->
- VSCode Version: 1.66.2
- Local OS Version: Darwin arm64 21.4.0 (macOS Monterey 12.3.1)
- Remote OS Version: Arch Linux
- Remote Extension/Connection Type: SSH
Steps to Reproduce:
1. From a local MacOS VS Code installation: `F1` → `Remote SSH: Connect to Host...`
2. From the Remote SSH Linux environment: `F1` → `Terminal: Create New Integrated Terminal (Local)`
Result: `The terminal process failed to launch: Path to shell executable "/usr/bin/bash" does not exist.`
(Which, obviously, is true: `/usr/bin/bash/` doesn't exist in macOS: the default shell in macOS has been zsh for a few years or so; which happens to not mesh at all with my workflow, so my local `PATH` includes `/opt/homebrew/bin/bash` instead.)
Anyway, [just looking at the source code for `CreateNewLocalTerminalAction`](https://github.com/microsoft/vscode/blob/main/src/vs/workbench/contrib/terminal/electron-sandbox/terminalRemote.ts#L53) it seems clear that differences between executable locations between remote and local hosts currently isn't taken into account at all is, is it?
Not passing a `shellPath` argument to `const instance = await this._terminalService.createTerminal({ cwd });` looks dubious to me. I suspect that causes the default shell path to be derived from the remote environment rather than the local one, or am I wrong about that? | process | create new integrated terminal local does not apply the correct path to the shell executable vscode version local os version darwin macos monterey remote os version arch linux remote extension connection type ssh steps to reproduce from a local macos vs code installation → remote ssh connect to host from the remote ssh linux environment → terminal create new integrated terminal local result the terminal process failed to launch path to shell executable usr bin bash does not exist which obviously is true usr bin bash doesn t exist in macos the default shell in macos has been zsh for a few years or so which happens to not mesh at all with my workflow so my local path includes opt homebrew bin bash instead anyway it seems clear that differences between executable locations between remote and local hosts currently isn t taken into account at all is is it not passing a shellpath argument to const instance await this terminalservice createterminal cwd looks dubious to me i suspect that causes the default shell path to be derived from the remote environment rather than the local one or am i wrong about that | 1 |
348,233 | 31,475,178,674 | IssuesEvent | 2023-08-30 10:11:05 | risingwavelabs/risingwave | https://api.github.com/repos/risingwavelabs/risingwave | opened | Integration test with cloud-hosed system | type/feature component/test | ### Is your feature request related to a problem? Please describe.
Right now we only have the integration test with upstream and downstream system deployed via docker compose. But we also support source and sink of cloud-hosted system, e.g. AWS MSK, AWS RDS, etc. But we haven't setup integration test environment for those cloud-hosted systems.
### Describe the solution you'd like
@neverchanje is investigating the solution right now, and we can use this thread to update the status.
I think there could be three options:
- I heard that cloud team has integration test with RisingWave kernel, maybe the cloud team can provision those cloud-hosted system in their pipeline and then we can run the integration test.
- Setup the pipeline environment in the kernel repo
- Use [LocalStack](https://github.com/localstack) to mimic the cloud environment, but I am not sure whether it can simulate an exact environment as the real cloud
Feel free to leave your comments. cc @lmatz @fuyufjh @hzxa21 @huangjw806
### Describe alternatives you've considered
_No response_
### Additional context
_No response_ | 1.0 | Integration test with cloud-hosed system - ### Is your feature request related to a problem? Please describe.
Right now we only have the integration test with upstream and downstream system deployed via docker compose. But we also support source and sink of cloud-hosted system, e.g. AWS MSK, AWS RDS, etc. But we haven't setup integration test environment for those cloud-hosted systems.
### Describe the solution you'd like
@neverchanje is investigating the solution right now, and we can use this thread to update the status.
I think there could be three options:
- I heard that cloud team has integration test with RisingWave kernel, maybe the cloud team can provision those cloud-hosted system in their pipeline and then we can run the integration test.
- Setup the pipeline environment in the kernel repo
- Use [LocalStack](https://github.com/localstack) to mimic the cloud environment, but I am not sure whether it can simulate an exact environment as the real cloud
Feel free to leave your comments. cc @lmatz @fuyufjh @hzxa21 @huangjw806
### Describe alternatives you've considered
_No response_
### Additional context
_No response_ | non_process | integration test with cloud hosed system is your feature request related to a problem please describe right now we only have the integration test with upstream and downstream system deployed via docker compose but we also support source and sink of cloud hosted system e g aws msk aws rds etc but we haven t setup integration test environment for those cloud hosted systems describe the solution you d like neverchanje is investigating the solution right now and we can use this thread to update the status i think there could be three options i heard that cloud team has integration test with risingwave kernel maybe the cloud team can provision those cloud hosted system in their pipeline and then we can run the integration test setup the pipeline environment in the kernel repo use to mimic the cloud environment but i am not sure whether it can simulate an exact environment as the real cloud feel free to leave your comments cc lmatz fuyufjh describe alternatives you ve considered no response additional context no response | 0 |
85,215 | 24,543,682,410 | IssuesEvent | 2022-10-12 07:03:00 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [SB] Questionnaires / Active tasks > Screen is loading continuously when clicked on Questionnaires / Active tasks section | Bug Blocker P0 Study builder Process: Fixed Process: Tested dev | Questionnaires / Active tasks > Screen is loading continuously when clicked on Questionnaires / Active tasks section in the study builder

| 1.0 | [SB] Questionnaires / Active tasks > Screen is loading continuously when clicked on Questionnaires / Active tasks section - Questionnaires / Active tasks > Screen is loading continuously when clicked on Questionnaires / Active tasks section in the study builder

| non_process | questionnaires active tasks screen is loading continuously when clicked on questionnaires active tasks section questionnaires active tasks screen is loading continuously when clicked on questionnaires active tasks section in the study builder | 0 |
67,312 | 14,861,358,285 | IssuesEvent | 2021-01-18 22:38:56 | metnew-gr/dvna | https://api.github.com/repos/metnew-gr/dvna | opened | CVE-2020-7699 (High) detected in express-fileupload-0.4.0.tgz | security vulnerability | ## CVE-2020-7699 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>express-fileupload-0.4.0.tgz</b></p></summary>
<p>Simple express file upload middleware that wraps around Busboy</p>
<p>Library home page: <a href="https://registry.npmjs.org/express-fileupload/-/express-fileupload-0.4.0.tgz">https://registry.npmjs.org/express-fileupload/-/express-fileupload-0.4.0.tgz</a></p>
<p>Path to dependency file: dvna/package.json</p>
<p>Path to vulnerable library: dvna/node_modules/express-fileupload/package.json</p>
<p>
Dependency Hierarchy:
- :x: **express-fileupload-0.4.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/metnew-gr/dvna/commit/54829e5a5aa13989cfe679c0a3a6b0e5375f6148">54829e5a5aa13989cfe679c0a3a6b0e5375f6148</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package express-fileupload before 1.1.8. If the parseNested option is enabled, sending a corrupt HTTP request can lead to denial of service or arbitrary code execution.
<p>Publish Date: 2020-07-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7699>CVE-2020-7699</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/richardgirges/express-fileupload/issues/236">https://github.com/richardgirges/express-fileupload/issues/236</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: 1.1.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-7699 (High) detected in express-fileupload-0.4.0.tgz - ## CVE-2020-7699 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>express-fileupload-0.4.0.tgz</b></p></summary>
<p>Simple express file upload middleware that wraps around Busboy</p>
<p>Library home page: <a href="https://registry.npmjs.org/express-fileupload/-/express-fileupload-0.4.0.tgz">https://registry.npmjs.org/express-fileupload/-/express-fileupload-0.4.0.tgz</a></p>
<p>Path to dependency file: dvna/package.json</p>
<p>Path to vulnerable library: dvna/node_modules/express-fileupload/package.json</p>
<p>
Dependency Hierarchy:
- :x: **express-fileupload-0.4.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/metnew-gr/dvna/commit/54829e5a5aa13989cfe679c0a3a6b0e5375f6148">54829e5a5aa13989cfe679c0a3a6b0e5375f6148</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package express-fileupload before 1.1.8. If the parseNested option is enabled, sending a corrupt HTTP request can lead to denial of service or arbitrary code execution.
<p>Publish Date: 2020-07-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7699>CVE-2020-7699</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/richardgirges/express-fileupload/issues/236">https://github.com/richardgirges/express-fileupload/issues/236</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: 1.1.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve high detected in express fileupload tgz cve high severity vulnerability vulnerable library express fileupload tgz simple express file upload middleware that wraps around busboy library home page a href path to dependency file dvna package json path to vulnerable library dvna node modules express fileupload package json dependency hierarchy x express fileupload tgz vulnerable library found in head commit a href found in base branch master vulnerability details this affects the package express fileupload before if the parsenested option is enabled sending a corrupt http request can lead to denial of service or arbitrary code execution publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
8,384 | 11,544,898,871 | IssuesEvent | 2020-02-18 12:24:58 | ESMValGroup/ESMValCore | https://api.github.com/repos/ESMValGroup/ESMValCore | closed | extract_levels preprocessor only support 1D coordinates | preprocessor | Currently, `extract_levels` assume that levels are identical at every point and timestep. This is not true always, i.e. when using model levels. The preprocessor function must be enhanced to allow processing this kind of variables | 1.0 | extract_levels preprocessor only support 1D coordinates - Currently, `extract_levels` assume that levels are identical at every point and timestep. This is not true always, i.e. when using model levels. The preprocessor function must be enhanced to allow processing this kind of variables | process | extract levels preprocessor only support coordinates currently extract levels assume that levels are identical at every point and timestep this is not true always i e when using model levels the preprocessor function must be enhanced to allow processing this kind of variables | 1 |
2,531 | 5,289,878,727 | IssuesEvent | 2017-02-08 18:28:25 | MikePopoloski/slang | https://api.github.com/repos/MikePopoloski/slang | closed | Check whether keywords are actually eligible given current PP state | area-preprocessor easy | SystemVerilog has a preprocessor feature that enables and disables various sets of keywords. We should take into account that state and downgrade keyword tokens back into simple identifier tokens. | 1.0 | Check whether keywords are actually eligible given current PP state - SystemVerilog has a preprocessor feature that enables and disables various sets of keywords. We should take into account that state and downgrade keyword tokens back into simple identifier tokens. | process | check whether keywords are actually eligible given current pp state systemverilog has a preprocessor feature that enables and disables various sets of keywords we should take into account that state and downgrade keyword tokens back into simple identifier tokens | 1 |
186,988 | 6,744,123,483 | IssuesEvent | 2017-10-20 14:38:04 | MoveOnOrg/Spoke | https://api.github.com/repos/MoveOnOrg/Spoke | closed | Access data warehouse | priority: 2 project | - [ ] As an admin, I want to have direct access to my own internal data warehouse from the application without uploading a CSV to create texting lists
- [ ] As an admin, I want to sync information gathered from a campaign to the data warehouse. Currently, an admin is able to export data to an S3 bucket with proper credentials.
| 1.0 | Access data warehouse - - [ ] As an admin, I want to have direct access to my own internal data warehouse from the application without uploading a CSV to create texting lists
- [ ] As an admin, I want to sync information gathered from a campaign to the data warehouse. Currently, an admin is able to export data to an S3 bucket with proper credentials.
| non_process | access data warehouse as an admin i want to have direct access to my own internal data warehouse from the application without uploading a csv to create texting lists as an admin i want to sync information gathered from a campaign to the data warehouse currently an admin is able to export data to an bucket with proper credentials | 0 |
297 | 2,732,611,760 | IssuesEvent | 2015-04-17 07:55:40 | tomchristie/django-rest-framework | https://api.github.com/repos/tomchristie/django-rest-framework | opened | Standard response to usage questions. | Process | I think we should probably have a standard response to usage questions.
@jpadilla's wording here seems suitably brief and appropriate...
> The [discussion group](https://groups.google.com/forum/#!forum/django-rest-framework) is the best place to take this discussion and other usage questions. Thanks!
/cc @carltongibson @xordoquy @kevin-brown @jpadilla
| 1.0 | Standard response to usage questions. - I think we should probably have a standard response to usage questions.
@jpadilla's wording here seems suitably brief and appropriate...
> The [discussion group](https://groups.google.com/forum/#!forum/django-rest-framework) is the best place to take this discussion and other usage questions. Thanks!
/cc @carltongibson @xordoquy @kevin-brown @jpadilla
| process | standard response to usage questions i think we should probably have a standard response to usage questions jpadilla s wording here seems suitably brief and appropriate the is the best place to take this discussion and other usage questions thanks cc carltongibson xordoquy kevin brown jpadilla | 1 |
2,600 | 5,356,208,512 | IssuesEvent | 2017-02-20 15:05:45 | AllenFang/react-bootstrap-table | https://api.github.com/repos/AllenFang/react-bootstrap-table | closed | Uncaught TypeError: Cannot read property 'slice' of undefined at new BootstrapTable | enhancement inprocess | Im getting this error when passing my data in the prop "data={}" for the BootstrapTable.
My data structure is just an array of objects passed from an Apollo Provider. Anyone else have this issue?
`Uncaught TypeError: Cannot read property 'slice' of undefined
at new BootstrapTable` | 1.0 | Uncaught TypeError: Cannot read property 'slice' of undefined at new BootstrapTable - Im getting this error when passing my data in the prop "data={}" for the BootstrapTable.
My data structure is just an array of objects passed from an Apollo Provider. Anyone else have this issue?
`Uncaught TypeError: Cannot read property 'slice' of undefined
at new BootstrapTable` | process | uncaught typeerror cannot read property slice of undefined at new bootstraptable im getting this error when passing my data in the prop data for the bootstraptable my data structure is just an array of objects passed from an apollo provider anyone else have this issue uncaught typeerror cannot read property slice of undefined at new bootstraptable | 1 |
11,126 | 13,957,687,407 | IssuesEvent | 2020-10-24 08:09:14 | alexanderkotsev/geoportal | https://api.github.com/repos/alexanderkotsev/geoportal | opened | PT: registered Discovery service not reachable | Geoportal Harvesting process PT - Portugal | Dear Marta,
since yesterday we are currently experiencing difficulties to contact your registered endpoint.
https://catalogosnig.dgterritorio.gov.pt/geoportal/csw/discovery?service=CSW&request=GetCapabilities
At the moment, it returns "Página não encontrada" (HTTP 404).
Could you please investigate the issue and let us know?
For the time being, we have temporarily disabled the harvest.
Kind regards,
JRC INSPIRE Support team
Davide Artasensi | 1.0 | PT: registered Discovery service not reachable - Dear Marta,
since yesterday we are currently experiencing difficulties to contact your registered endpoint.
https://catalogosnig.dgterritorio.gov.pt/geoportal/csw/discovery?service=CSW&request=GetCapabilities
At the moment, it returns "Página não encontrada" (HTTP 404).
Could you please investigate the issue and let us know?
For the time being, we have temporarily disabled the harvest.
Kind regards,
JRC INSPIRE Support team
Davide Artasensi | process | pt registered discovery service not reachable dear marta since yesterday we are currently experiencing difficulties to contact your registered endpoint at the moment it returns quot p aacute gina n atilde o encontrada quot http could you please investigate the issue and let us know for the time being we have temporarily disabled the harvest kind regards jrc inspire support team davide artasensi | 1 |
189,744 | 14,520,598,490 | IssuesEvent | 2020-12-14 05:48:33 | kalexmills/github-vet-tests-dec2020 | https://api.github.com/repos/kalexmills/github-vet-tests-dec2020 | closed | mindreframer/golang-devops-stuff: src/github.com/hashicorp/terraform/helper/ssh/communicator_test.go; 12 LoC | fresh small test |
Found a possible issue in [mindreframer/golang-devops-stuff](https://www.github.com/mindreframer/golang-devops-stuff) at [src/github.com/hashicorp/terraform/helper/ssh/communicator_test.go](https://github.com/mindreframer/golang-devops-stuff/blob/bb6c6c16ff0ae7892cd0ed0715b8a6b769052f9f/src/github.com/hashicorp/terraform/helper/ssh/communicator_test.go#L78-L89)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> range-loop variable newChannel used in defer or goroutine at line 87
[Click here to see the code in its original context.](https://github.com/mindreframer/golang-devops-stuff/blob/bb6c6c16ff0ae7892cd0ed0715b8a6b769052f9f/src/github.com/hashicorp/terraform/helper/ssh/communicator_test.go#L78-L89)
<details>
<summary>Click here to show the 12 line(s) of Go which triggered the analyzer.</summary>
```go
for newChannel := range chans {
channel, _, err := newChannel.Accept()
if err != nil {
t.Errorf("Unable to accept channel.")
}
t.Log("Accepted channel")
go func() {
defer channel.Close()
conn.OpenChannel(newChannel.ChannelType(), nil)
}()
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: bb6c6c16ff0ae7892cd0ed0715b8a6b769052f9f
| 1.0 | mindreframer/golang-devops-stuff: src/github.com/hashicorp/terraform/helper/ssh/communicator_test.go; 12 LoC -
Found a possible issue in [mindreframer/golang-devops-stuff](https://www.github.com/mindreframer/golang-devops-stuff) at [src/github.com/hashicorp/terraform/helper/ssh/communicator_test.go](https://github.com/mindreframer/golang-devops-stuff/blob/bb6c6c16ff0ae7892cd0ed0715b8a6b769052f9f/src/github.com/hashicorp/terraform/helper/ssh/communicator_test.go#L78-L89)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> range-loop variable newChannel used in defer or goroutine at line 87
[Click here to see the code in its original context.](https://github.com/mindreframer/golang-devops-stuff/blob/bb6c6c16ff0ae7892cd0ed0715b8a6b769052f9f/src/github.com/hashicorp/terraform/helper/ssh/communicator_test.go#L78-L89)
<details>
<summary>Click here to show the 12 line(s) of Go which triggered the analyzer.</summary>
```go
for newChannel := range chans {
channel, _, err := newChannel.Accept()
if err != nil {
t.Errorf("Unable to accept channel.")
}
t.Log("Accepted channel")
go func() {
defer channel.Close()
conn.OpenChannel(newChannel.ChannelType(), nil)
}()
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: bb6c6c16ff0ae7892cd0ed0715b8a6b769052f9f
| non_process | mindreframer golang devops stuff src github com hashicorp terraform helper ssh communicator test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message range loop variable newchannel used in defer or goroutine at line click here to show the line s of go which triggered the analyzer go for newchannel range chans channel err newchannel accept if err nil t errorf unable to accept channel t log accepted channel go func defer channel close conn openchannel newchannel channeltype nil leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id | 0 |
14,729 | 17,946,801,706 | IssuesEvent | 2021-09-12 00:19:21 | beecorrea/midas | https://api.github.com/repos/beecorrea/midas | closed | [PROCESS] Transfer money | processes | # Description
Draws money from an account and deposits it in another.
# Rules involved
- Does the account exist?
- Is the requester the owner of this account?
- Is this account open?
- Can this amount be drawn from this account?
# Steps
1. Draw money from sender/origin account.
2. Deposit money into receiver/destination account. | 1.0 | [PROCESS] Transfer money - # Description
Draws money from an account and deposits it in another.
# Rules involved
- Does the account exist?
- Is the requester the owner of this account?
- Is this account open?
- Can this amount be drawn from this account?
# Steps
1. Draw money from sender/origin account.
2. Deposit money into receiver/destination account. | process | transfer money description draws money from an account and deposits it in another rules involved does the account exist is the requester the owner of this account is this account open can this amount be drawn from this account steps draw money from sender origin account deposit money into receiver destination account | 1 |
22,306 | 30,859,915,637 | IssuesEvent | 2023-08-03 01:29:19 | emily-writes-poems/emily-writes-poems-processing | https://api.github.com/repos/emily-writes-poems/emily-writes-poems-processing | closed | react: collection page | processing | have these at least functioning from the react page:
- [x] display all collections list
- [x] create new collection
- technically supported by #4, but to add/edit poems in collection is still TBD (see #9, #5) | 1.0 | react: collection page - have these at least functioning from the react page:
- [x] display all collections list
- [x] create new collection
- technically supported by #4, but to add/edit poems in collection is still TBD (see #9, #5) | process | react collection page have these at least functioning from the react page display all collections list create new collection technically supported by but to add edit poems in collection is still tbd see | 1 |
439,380 | 12,681,938,932 | IssuesEvent | 2020-06-19 16:20:08 | JamieMason/syncpack | https://api.github.com/repos/JamieMason/syncpack | closed | format command sorts "files" property and breaks packages | Priority: High Status: To Do Type: Fix | ## Description
syncpack is reordering the contents of the "files" property which changes the contents of the tarball when using "npm publish" or "npm pack".
example package.json:
```
{
files: [
"dist/",
"!*.test.*",
"!__mocks__"
]
}
```
command:
`npx syncpack format`
result:
```
{
files: [
"!*.test.*",
"!__mocks__",
"dist/"
]
}
```
expected result:
```
{
files: [
"dist/",
"!*.test.*",
"!__mocks__"
]
}
```
The only work-around is to use .npmignore and remove the files property, but this becomes unmanageable.
## Suggested Solution
Do not sort the "files" property.
| 1.0 | format command sorts "files" property and breaks packages - ## Description
syncpack is reordering the contents of the "files" property which changes the contents of the tarball when using "npm publish" or "npm pack".
example package.json:
```
{
files: [
"dist/",
"!*.test.*",
"!__mocks__"
]
}
```
command:
`npx syncpack format`
result:
```
{
files: [
"!*.test.*",
"!__mocks__",
"dist/"
]
}
```
expected result:
```
{
files: [
"dist/",
"!*.test.*",
"!__mocks__"
]
}
```
The only work-around is to use .npmignore and remove the files property, but this becomes unmanageable.
## Suggested Solution
Do not sort the "files" property.
| non_process | format command sorts files property and breaks packages description syncpack is reordering the contents of the files property which changes the contents of the tarball when using npm publish or npm pack example package json files dist test mocks command npx syncpack format result files test mocks dist expected result files dist test mocks the only work around is to use npmignore and remove the files property but this becomes unmanageable suggested solution do not sort the files property | 0 |
17,635 | 23,456,844,566 | IssuesEvent | 2022-08-16 09:36:49 | MicrosoftDocs/azure-docs | https://api.github.com/repos/MicrosoftDocs/azure-docs | closed | Missing Networking info | automation/svc triaged cxp doc-bug process-automation/subsvc Pri2 | This is missing information about the Networking tab after Advanced.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 9b4440e0-1ff5-0fd3-6983-d5f6ed86e818
* Version Independent ID: 8d6aecae-1a58-83aa-45f7-306fb6c92d38
* Content: [Create a standalone Azure Automation account](https://docs.microsoft.com/en-us/azure/automation/automation-create-standalone-account?tabs=azureportal)
* Content Source: [articles/automation/automation-create-standalone-account.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/automation-create-standalone-account.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @SnehaSudhirG
* Microsoft Alias: **sudhirsneha** | 1.0 | Missing Networking info - This is missing information about the Networking tab after Advanced.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 9b4440e0-1ff5-0fd3-6983-d5f6ed86e818
* Version Independent ID: 8d6aecae-1a58-83aa-45f7-306fb6c92d38
* Content: [Create a standalone Azure Automation account](https://docs.microsoft.com/en-us/azure/automation/automation-create-standalone-account?tabs=azureportal)
* Content Source: [articles/automation/automation-create-standalone-account.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/automation-create-standalone-account.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @SnehaSudhirG
* Microsoft Alias: **sudhirsneha** | process | missing networking info this is missing information about the networking tab after advanced document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login snehasudhirg microsoft alias sudhirsneha | 1 |
19,972 | 26,451,953,070 | IssuesEvent | 2023-01-16 11:55:43 | firebase/firebase-cpp-sdk | https://api.github.com/repos/firebase/firebase-cpp-sdk | reopened | [C++] Nightly Integration Testing Report for Firestore | type: process nightly-testing |
<hidden value="integration-test-status-comment"></hidden>
### ✅ [build against repo] Integration test succeeded!
Requested by @sunmou99 on commit 45f8e3268c2adbabca165ed0a937835f18930d2f
Last updated: Sun Jan 15 04:03 PST 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3922884794)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against SDK] Integration test succeeded!
Requested by @firebase-workflow-trigger[bot] on commit 45f8e3268c2adbabca165ed0a937835f18930d2f
Last updated: Sun Jan 15 05:50 PST 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3923379154)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against tip] Integration test succeeded!
Requested by @sunmou99 on commit 45f8e3268c2adbabca165ed0a937835f18930d2f
Last updated: Mon Jan 16 03:45 PST 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3929693264)**
| 1.0 | [C++] Nightly Integration Testing Report for Firestore -
<hidden value="integration-test-status-comment"></hidden>
### ✅ [build against repo] Integration test succeeded!
Requested by @sunmou99 on commit 45f8e3268c2adbabca165ed0a937835f18930d2f
Last updated: Sun Jan 15 04:03 PST 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3922884794)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against SDK] Integration test succeeded!
Requested by @firebase-workflow-trigger[bot] on commit 45f8e3268c2adbabca165ed0a937835f18930d2f
Last updated: Sun Jan 15 05:50 PST 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3923379154)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against tip] Integration test succeeded!
Requested by @sunmou99 on commit 45f8e3268c2adbabca165ed0a937835f18930d2f
Last updated: Mon Jan 16 03:45 PST 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3929693264)**
| process | nightly integration testing report for firestore ✅ nbsp integration test succeeded requested by on commit last updated sun jan pst ✅ nbsp integration test succeeded requested by firebase workflow trigger on commit last updated sun jan pst ✅ nbsp integration test succeeded requested by on commit last updated mon jan pst | 1 |
19,078 | 25,119,367,037 | IssuesEvent | 2022-11-09 06:33:11 | streamnative/flink | https://api.github.com/repos/streamnative/flink | opened | [release] November Release, | compute/data-processing | In this release we need to release the 1.16 as well, need to setup the release pipeline and test the code. | 1.0 | [release] November Release, - In this release we need to release the 1.16 as well, need to setup the release pipeline and test the code. | process | november release in this release we need to release the as well need to setup the release pipeline and test the code | 1 |
515 | 2,989,806,920 | IssuesEvent | 2015-07-21 03:20:36 | codefordenver/org | https://api.github.com/repos/codefordenver/org | opened | Add Project Kickoff worksheet | Process Writing | As a product owner for a new project in partnership with a nonprofit (or other group/individual), I want a clear process / form to identify what tasks for the project should be. | 1.0 | Add Project Kickoff worksheet - As a product owner for a new project in partnership with a nonprofit (or other group/individual), I want a clear process / form to identify what tasks for the project should be. | process | add project kickoff worksheet as a product owner for a new project in partnership with a nonprofit or other group individual i want a clear process form to identify what tasks for the project should be | 1 |
18,111 | 24,143,701,518 | IssuesEvent | 2022-09-21 16:47:26 | googleapis/java-pubsublite-kafka | https://api.github.com/repos/googleapis/java-pubsublite-kafka | closed | Your .repo-metadata.json file has a problem 🤒 | type: process api: pubsublite repo-metadata: lint | You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* api_shortname 'pubsublite-kafka' invalid in .repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions. | 1.0 | Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* api_shortname 'pubsublite-kafka' invalid in .repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions. | process | your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 api shortname pubsublite kafka invalid in repo metadata json ☝️ once you address these problems you can close this issue need help lists valid options for each field for grpc libraries api shortname should match the subdomain of an api s hostname reach out to go github automation if you have any questions | 1 |
11,021 | 13,806,951,302 | IssuesEvent | 2020-10-11 19:51:13 | km4ack/pi-build | https://api.github.com/repos/km4ack/pi-build | closed | Conky fails to start if dnsmasq.leases is not present | bug in process | per this [post](https://groups.io/g/KM4ACK-Pi/topic/77144861)
simple solution is to touch the dnsmasq.leases file after installing conky | 1.0 | Conky fails to start if dnsmasq.leases is not present - per this [post](https://groups.io/g/KM4ACK-Pi/topic/77144861)
simple solution is to touch the dnsmasq.leases file after installing conky | process | conky fails to start if dnsmasq leases is not present per this simple solution is to touch the dnsmasq leases file after installing conky | 1 |
11,217 | 13,998,587,993 | IssuesEvent | 2020-10-28 09:40:06 | prisma/quaint | https://api.github.com/repos/prisma/quaint | closed | Clean-up of our feature flags | kind/tech process/candidate | Our feature flags are a mess. We should clean them up and run more feature combinations on CI before merging anything. | 1.0 | Clean-up of our feature flags - Our feature flags are a mess. We should clean them up and run more feature combinations on CI before merging anything. | process | clean up of our feature flags our feature flags are a mess we should clean them up and run more feature combinations on ci before merging anything | 1 |
59,399 | 3,109,755,864 | IssuesEvent | 2015-09-02 00:02:29 | The-Stampede/Web-Page | https://api.github.com/repos/The-Stampede/Web-Page | opened | Site Load | difficulty: intermediate priority: important type: research value: performance | Determine that the site load is no longer than 3s average (May be shooting for too low) | 1.0 | Site Load - Determine that the site load is no longer than 3s average (May be shooting for too low) | non_process | site load determine that the site load is no longer than average may be shooting for too low | 0 |
15,749 | 27,820,863,966 | IssuesEvent | 2023-03-19 08:00:57 | renovatebot/renovate | https://api.github.com/repos/renovatebot/renovate | opened | Issue/Discussion template defaults can lead to incorrect submissions | priority-3-medium type:refactor status:requirements | ### Describe the proposed change(s).
I notice sometimes people submitting invalid information, I think this is often caused by them simply leaving the default value, such as:
<img width="511" alt="image" src="https://user-images.githubusercontent.com/6311784/226161755-0636d4eb-4a4c-406f-a5b3-f3891c74b62a.png">
Sometimes we could fix this by switching defaults, but at other times we really want to make sure that users have *selected* something and not just left it as default. Do we need a "dummy" default such as "--Please select a value--" in such cases? | 1.0 | Issue/Discussion template defaults can lead to incorrect submissions - ### Describe the proposed change(s).
I notice sometimes people submitting invalid information, I think this is often caused by them simply leaving the default value, such as:
<img width="511" alt="image" src="https://user-images.githubusercontent.com/6311784/226161755-0636d4eb-4a4c-406f-a5b3-f3891c74b62a.png">
Sometimes we could fix this by switching defaults, but at other times we really want to make sure that users have *selected* something and not just left it as default. Do we need a "dummy" default such as "--Please select a value--" in such cases? | non_process | issue discussion template defaults can lead to incorrect submissions describe the proposed change s i notice sometimes people submitting invalid information i think this is often caused by them simply leaving the default value such as img width alt image src sometimes we could fix this by switching defaults but at other times we really want to make sure that users have selected something and not just left it as default do we need a dummy default such as please select a value in such cases | 0 |
17,958 | 23,962,476,855 | IssuesEvent | 2022-09-12 20:29:19 | openxla/stablehlo | https://api.github.com/repos/openxla/stablehlo | closed | Do not run buildAndTest GitHub Action if only markdown changes | Process | ### Request description
We should only run build and test if there is a source change, the definition of _source change_ can become more precise over time, but "not markdown" should resolve this issue for the majority of PRs that have come through (spec changes).
I noticed that #109 triggered a build of the StableHLO source code, when it was a simple markdown fix. | 1.0 | Do not run buildAndTest GitHub Action if only markdown changes - ### Request description
We should only run build and test if there is a source change, the definition of _source change_ can become more precise over time, but "not markdown" should resolve this issue for the majority of PRs that have come through (spec changes).
I noticed that #109 triggered a build of the StableHLO source code, when it was a simple markdown fix. | process | do not run buildandtest github action if only markdown changes request description we should only run build and test if there is a source change the definition of source change can become more precise over time but not markdown should resolve this issue for the majority of prs that have come through spec changes i noticed that triggered a build of the stablehlo source code when it was a simple markdown fix | 1 |
17,345 | 23,171,040,956 | IssuesEvent | 2022-07-30 18:14:20 | open-ephys/GUI | https://api.github.com/repos/open-ephys/GUI | closed | FileReaderThread isn't compatible with the Open Ephys data format | Processors | At the moment, the FileReaderThread reads data in an arbitrary format: consecutive int16s representing samples from 16 channels. It also uses pauses to approximate the original sample rate, which is clearly not a viable long-term solution.
We need to create a dedicated "FileReader" processor (rather than a DataThread within a SourceNode) that can read files saved in the Open Ephys format in such a way that sample rate is preserved.
| 1.0 | FileReaderThread isn't compatible with the Open Ephys data format - At the moment, the FileReaderThread reads data in an arbitrary format: consecutive int16s representing samples from 16 channels. It also uses pauses to approximate the original sample rate, which is clearly not a viable long-term solution.
We need to create a dedicated "FileReader" processor (rather than a DataThread within a SourceNode) that can read files saved in the Open Ephys format in such a way that sample rate is preserved.
| process | filereaderthread isn t compatible with the open ephys data format at the moment the filereaderthread reads data in an arbitrary format consecutive representing samples from channels it also uses pauses to approximate the original sample rate which is clearly not a viable long term solution we need to create a dedicated filereader processor rather than a datathread within a sourcenode that can read files saved in the open ephys format in such a way that sample rate is preserved | 1 |
19,443 | 25,713,957,124 | IssuesEvent | 2022-12-07 09:01:24 | googleapis/google-cloud-php | https://api.github.com/repos/googleapis/google-cloud-php | closed | Your .repo-metadata.json files have a problem 🤒 | type: process repo-metadata: lint | You have a problem with your .repo-metadata.json files:
Result of scan 📈:
* client_documentation must match pattern "^https://.*" in AccessApproval/.repo-metadata.json
* release_level must be equal to one of the allowed values in AccessApproval/.repo-metadata.json
* api_shortname field missing from AccessApproval/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in AccessContextManager/.repo-metadata.json
* release_level must be equal to one of the allowed values in AccessContextManager/.repo-metadata.json
* api_shortname field missing from AccessContextManager/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in AiPlatform/.repo-metadata.json
* release_level must be equal to one of the allowed values in AiPlatform/.repo-metadata.json
* api_shortname field missing from AiPlatform/.repo-metadata.json
* release_level must be equal to one of the allowed values in AnalyticsAdmin/.repo-metadata.json
* api_shortname field missing from AnalyticsAdmin/.repo-metadata.json
* release_level must be equal to one of the allowed values in AnalyticsData/.repo-metadata.json
* api_shortname field missing from AnalyticsData/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ApiGateway/.repo-metadata.json
* release_level must be equal to one of the allowed values in ApiGateway/.repo-metadata.json
* api_shortname field missing from ApiGateway/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ApigeeConnect/.repo-metadata.json
* release_level must be equal to one of the allowed values in ApigeeConnect/.repo-metadata.json
* api_shortname field missing from ApigeeConnect/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in AppEngineAdmin/.repo-metadata.json
* release_level must be equal to one of the allowed values in AppEngineAdmin/.repo-metadata.json
* api_shortname field missing from AppEngineAdmin/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ArtifactRegistry/.repo-metadata.json
* release_level must be equal to one of the allowed values in ArtifactRegistry/.repo-metadata.json
* api_shortname field missing from ArtifactRegistry/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Asset/.repo-metadata.json
* release_level must be equal to one of the allowed values in Asset/.repo-metadata.json
* api_shortname field missing from Asset/.repo-metadata.json
* release_level must be equal to one of the allowed values in AssuredWorkloads/.repo-metadata.json
* api_shortname field missing from AssuredWorkloads/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in AutoMl/.repo-metadata.json
* release_level must be equal to one of the allowed values in AutoMl/.repo-metadata.json
* api_shortname field missing from AutoMl/.repo-metadata.json
* release_level must be equal to one of the allowed values in BigQuery/.repo-metadata.json
* api_shortname field missing from BigQuery/.repo-metadata.json
* release_level must be equal to one of the allowed values in BigQueryConnection/.repo-metadata.json
* api_shortname field missing from BigQueryConnection/.repo-metadata.json
* release_level must be equal to one of the allowed values in BigQueryReservation/.repo-metadata.json
* api_shortname field missing from BigQueryReservation/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in BigQueryStorage/.repo-metadata.json
* release_level must be equal to one of the allowed values in BigQueryStorage/.repo-metadata.json
* api_shortname field missing from BigQueryStorage/.repo-metadata.json
* release_level must be equal to one of the allowed values in Bigtable/.repo-metadata.json
* api_shortname field missing from Bigtable/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in BillingBudgets/.repo-metadata.json
* release_level must be equal to one of the allowed values in BillingBudgets/.repo-metadata.json
* api_shortname field missing from BillingBudgets/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in BinaryAuthorization/.repo-metadata.json
* release_level must be equal to one of the allowed values in BinaryAuthorization/.repo-metadata.json
* api_shortname field missing from BinaryAuthorization/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Build/.repo-metadata.json
* release_level must be equal to one of the allowed values in Build/.repo-metadata.json
* api_shortname field missing from Build/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in CertificateManager/.repo-metadata.json
* release_level must be equal to one of the allowed values in CertificateManager/.repo-metadata.json
* api_shortname field missing from CertificateManager/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Channel/.repo-metadata.json
* release_level must be equal to one of the allowed values in Channel/.repo-metadata.json
* api_shortname field missing from Channel/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in CommonProtos/.repo-metadata.json
* release_level must be equal to one of the allowed values in CommonProtos/.repo-metadata.json
* release_level must be equal to one of the allowed values in Compute/.repo-metadata.json
* api_shortname field missing from Compute/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ContactCenterInsights/.repo-metadata.json
* release_level must be equal to one of the allowed values in ContactCenterInsights/.repo-metadata.json
* api_shortname field missing from ContactCenterInsights/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ContainerAnalysis/.repo-metadata.json
* release_level must be equal to one of the allowed values in ContainerAnalysis/.repo-metadata.json
* api_shortname field missing from ContainerAnalysis/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Core/.repo-metadata.json
* release_level must be equal to one of the allowed values in Core/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in DataCatalog/.repo-metadata.json
* release_level must be equal to one of the allowed values in DataCatalog/.repo-metadata.json
* api_shortname field missing from DataCatalog/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in DataFusion/.repo-metadata.json
* release_level must be equal to one of the allowed values in DataFusion/.repo-metadata.json
* api_shortname field missing from DataFusion/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in DataLabeling/.repo-metadata.json
* release_level must be equal to one of the allowed values in DataLabeling/.repo-metadata.json
* api_shortname field missing from DataLabeling/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Dataflow/.repo-metadata.json
* release_level must be equal to one of the allowed values in Dataflow/.repo-metadata.json
* api_shortname field missing from Dataflow/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Dataplex/.repo-metadata.json
* release_level must be equal to one of the allowed values in Dataplex/.repo-metadata.json
* api_shortname field missing from Dataplex/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Dataproc/.repo-metadata.json
* release_level must be equal to one of the allowed values in Dataproc/.repo-metadata.json
* api_shortname field missing from Dataproc/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in DataprocMetastore/.repo-metadata.json
* release_level must be equal to one of the allowed values in DataprocMetastore/.repo-metadata.json
* api_shortname field missing from DataprocMetastore/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Datastore/.repo-metadata.json
* release_level must be equal to one of the allowed values in Datastore/.repo-metadata.json
* api_shortname field missing from Datastore/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in DatastoreAdmin/.repo-metadata.json
* release_level must be equal to one of the allowed values in DatastoreAdmin/.repo-metadata.json
* api_shortname field missing from DatastoreAdmin/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Datastream/.repo-metadata.json
* release_level must be equal to one of the allowed values in Datastream/.repo-metadata.json
* api_shortname field missing from Datastream/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Debugger/.repo-metadata.json
* release_level must be equal to one of the allowed values in Debugger/.repo-metadata.json
* api_shortname field missing from Debugger/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Deploy/.repo-metadata.json
* release_level must be equal to one of the allowed values in Deploy/.repo-metadata.json
* api_shortname field missing from Deploy/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Dialogflow/.repo-metadata.json
* release_level must be equal to one of the allowed values in Dialogflow/.repo-metadata.json
* api_shortname field missing from Dialogflow/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Dms/.repo-metadata.json
* release_level must be equal to one of the allowed values in Dms/.repo-metadata.json
* api_shortname field missing from Dms/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Domains/.repo-metadata.json
* release_level must be equal to one of the allowed values in Domains/.repo-metadata.json
* api_shortname field missing from Domains/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ErrorReporting/.repo-metadata.json
* release_level must be equal to one of the allowed values in ErrorReporting/.repo-metadata.json
* api_shortname field missing from ErrorReporting/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in EssentialContacts/.repo-metadata.json
* release_level must be equal to one of the allowed values in EssentialContacts/.repo-metadata.json
* api_shortname field missing from EssentialContacts/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Eventarc/.repo-metadata.json
* release_level must be equal to one of the allowed values in Eventarc/.repo-metadata.json
* api_shortname field missing from Eventarc/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in EventarcPublishing/.repo-metadata.json
* release_level must be equal to one of the allowed values in EventarcPublishing/.repo-metadata.json
* api_shortname field missing from EventarcPublishing/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Filestore/.repo-metadata.json
* release_level must be equal to one of the allowed values in Filestore/.repo-metadata.json
* api_shortname field missing from Filestore/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Firestore/.repo-metadata.json
* release_level must be equal to one of the allowed values in Firestore/.repo-metadata.json
* api_shortname field missing from Firestore/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Functions/.repo-metadata.json
* release_level must be equal to one of the allowed values in Functions/.repo-metadata.json
* api_shortname field missing from Functions/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Gaming/.repo-metadata.json
* release_level must be equal to one of the allowed values in Gaming/.repo-metadata.json
* api_shortname field missing from Gaming/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in GkeConnectGateway/.repo-metadata.json
* release_level must be equal to one of the allowed values in GkeConnectGateway/.repo-metadata.json
* api_shortname field missing from GkeConnectGateway/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in GkeHub/.repo-metadata.json
* release_level must be equal to one of the allowed values in GkeHub/.repo-metadata.json
* api_shortname field missing from GkeHub/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Grafeas/.repo-metadata.json
* release_level must be equal to one of the allowed values in Grafeas/.repo-metadata.json
* api_shortname field missing from Grafeas/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in IamCredentials/.repo-metadata.json
* release_level must be equal to one of the allowed values in IamCredentials/.repo-metadata.json
* api_shortname field missing from IamCredentials/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Iap/.repo-metadata.json
* release_level must be equal to one of the allowed values in Iap/.repo-metadata.json
* api_shortname field missing from Iap/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Ids/.repo-metadata.json
* release_level must be equal to one of the allowed values in Ids/.repo-metadata.json
* api_shortname field missing from Ids/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Iot/.repo-metadata.json
* release_level must be equal to one of the allowed values in Iot/.repo-metadata.json
* api_shortname field missing from Iot/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Language/.repo-metadata.json
* release_level must be equal to one of the allowed values in Language/.repo-metadata.json
* api_shortname field missing from Language/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Logging/.repo-metadata.json
* release_level must be equal to one of the allowed values in Logging/.repo-metadata.json
* api_shortname field missing from Logging/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ManagedIdentities/.repo-metadata.json
* release_level must be equal to one of the allowed values in ManagedIdentities/.repo-metadata.json
* api_shortname field missing from ManagedIdentities/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in MediaTranslation/.repo-metadata.json
* release_level must be equal to one of the allowed values in MediaTranslation/.repo-metadata.json
* api_shortname field missing from MediaTranslation/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Monitoring/.repo-metadata.json
* release_level must be equal to one of the allowed values in Monitoring/.repo-metadata.json
* api_shortname field missing from Monitoring/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in NetworkConnectivity/.repo-metadata.json
* release_level must be equal to one of the allowed values in NetworkConnectivity/.repo-metadata.json
* api_shortname field missing from NetworkConnectivity/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in NetworkManagement/.repo-metadata.json
* release_level must be equal to one of the allowed values in NetworkManagement/.repo-metadata.json
* api_shortname field missing from NetworkManagement/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in NetworkSecurity/.repo-metadata.json
* release_level must be equal to one of the allowed values in NetworkSecurity/.repo-metadata.json
* api_shortname field missing from NetworkSecurity/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Optimization/.repo-metadata.json
* release_level must be equal to one of the allowed values in Optimization/.repo-metadata.json
* api_shortname field missing from Optimization/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in OrchestrationAirflow/.repo-metadata.json
* release_level must be equal to one of the allowed values in OrchestrationAirflow/.repo-metadata.json
* api_shortname field missing from OrchestrationAirflow/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in OrgPolicy/.repo-metadata.json
* release_level must be equal to one of the allowed values in OrgPolicy/.repo-metadata.json
* api_shortname field missing from OrgPolicy/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in OsConfig/.repo-metadata.json
* release_level must be equal to one of the allowed values in OsConfig/.repo-metadata.json
* api_shortname field missing from OsConfig/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in OsLogin/.repo-metadata.json
* release_level must be equal to one of the allowed values in OsLogin/.repo-metadata.json
* api_shortname field missing from OsLogin/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in PolicyTroubleshooter/.repo-metadata.json
* release_level must be equal to one of the allowed values in PolicyTroubleshooter/.repo-metadata.json
* api_shortname field missing from PolicyTroubleshooter/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in PrivateCatalog/.repo-metadata.json
* release_level must be equal to one of the allowed values in PrivateCatalog/.repo-metadata.json
* api_shortname field missing from PrivateCatalog/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Profiler/.repo-metadata.json
* release_level must be equal to one of the allowed values in Profiler/.repo-metadata.json
* api_shortname field missing from Profiler/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in PubSub/.repo-metadata.json
* release_level must be equal to one of the allowed values in PubSub/.repo-metadata.json
* api_shortname field missing from PubSub/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in RecaptchaEnterprise/.repo-metadata.json
* release_level must be equal to one of the allowed values in RecaptchaEnterprise/.repo-metadata.json
* api_shortname field missing from RecaptchaEnterprise/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in RecommendationEngine/.repo-metadata.json
* release_level must be equal to one of the allowed values in RecommendationEngine/.repo-metadata.json
* api_shortname field missing from RecommendationEngine/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Recommender/.repo-metadata.json
* release_level must be equal to one of the allowed values in Recommender/.repo-metadata.json
* api_shortname field missing from Recommender/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Redis/.repo-metadata.json
* release_level must be equal to one of the allowed values in Redis/.repo-metadata.json
* api_shortname field missing from Redis/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ResourceManager/.repo-metadata.json
* release_level must be equal to one of the allowed values in ResourceManager/.repo-metadata.json
* api_shortname field missing from ResourceManager/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ResourceSettings/.repo-metadata.json
* release_level must be equal to one of the allowed values in ResourceSettings/.repo-metadata.json
* api_shortname field missing from ResourceSettings/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Retail/.repo-metadata.json
* release_level must be equal to one of the allowed values in Retail/.repo-metadata.json
* api_shortname field missing from Retail/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Scheduler/.repo-metadata.json
* release_level must be equal to one of the allowed values in Scheduler/.repo-metadata.json
* api_shortname field missing from Scheduler/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in SecretManager/.repo-metadata.json
* release_level must be equal to one of the allowed values in SecretManager/.repo-metadata.json
* api_shortname field missing from SecretManager/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in SecurityPrivateCa/.repo-metadata.json
* release_level must be equal to one of the allowed values in SecurityPrivateCa/.repo-metadata.json
* api_shortname field missing from SecurityPrivateCa/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ServiceControl/.repo-metadata.json
* release_level must be equal to one of the allowed values in ServiceControl/.repo-metadata.json
* api_shortname field missing from ServiceControl/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ServiceDirectory/.repo-metadata.json
* release_level must be equal to one of the allowed values in ServiceDirectory/.repo-metadata.json
* api_shortname field missing from ServiceDirectory/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ServiceManagement/.repo-metadata.json
* release_level must be equal to one of the allowed values in ServiceManagement/.repo-metadata.json
* api_shortname field missing from ServiceManagement/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ServiceUsage/.repo-metadata.json
* release_level must be equal to one of the allowed values in ServiceUsage/.repo-metadata.json
* api_shortname field missing from ServiceUsage/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Spanner/.repo-metadata.json
* release_level must be equal to one of the allowed values in Spanner/.repo-metadata.json
* api_shortname field missing from Spanner/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Speech/.repo-metadata.json
* release_level must be equal to one of the allowed values in Speech/.repo-metadata.json
* api_shortname field missing from Speech/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in SqlAdmin/.repo-metadata.json
* release_level must be equal to one of the allowed values in SqlAdmin/.repo-metadata.json
* api_shortname field missing from SqlAdmin/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Storage/.repo-metadata.json
* release_level must be equal to one of the allowed values in Storage/.repo-metadata.json
* api_shortname field missing from Storage/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in StorageTransfer/.repo-metadata.json
* release_level must be equal to one of the allowed values in StorageTransfer/.repo-metadata.json
* api_shortname field missing from StorageTransfer/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Talent/.repo-metadata.json
* release_level must be equal to one of the allowed values in Talent/.repo-metadata.json
* api_shortname field missing from Talent/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Tasks/.repo-metadata.json
* release_level must be equal to one of the allowed values in Tasks/.repo-metadata.json
* api_shortname field missing from Tasks/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in TextToSpeech/.repo-metadata.json
* release_level must be equal to one of the allowed values in TextToSpeech/.repo-metadata.json
* api_shortname field missing from TextToSpeech/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Tpu/.repo-metadata.json
* release_level must be equal to one of the allowed values in Tpu/.repo-metadata.json
* api_shortname field missing from Tpu/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Trace/.repo-metadata.json
* release_level must be equal to one of the allowed values in Trace/.repo-metadata.json
* api_shortname field missing from Trace/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Translate/.repo-metadata.json
* release_level must be equal to one of the allowed values in Translate/.repo-metadata.json
* api_shortname field missing from Translate/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in VideoIntelligence/.repo-metadata.json
* release_level must be equal to one of the allowed values in VideoIntelligence/.repo-metadata.json
* api_shortname field missing from VideoIntelligence/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in VideoLiveStream/.repo-metadata.json
* release_level must be equal to one of the allowed values in VideoLiveStream/.repo-metadata.json
* api_shortname field missing from VideoLiveStream/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in VideoStitcher/.repo-metadata.json
* release_level must be equal to one of the allowed values in VideoStitcher/.repo-metadata.json
* api_shortname field missing from VideoStitcher/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in VmMigration/.repo-metadata.json
* release_level must be equal to one of the allowed values in VmMigration/.repo-metadata.json
* api_shortname field missing from VmMigration/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in VpcAccess/.repo-metadata.json
* release_level must be equal to one of the allowed values in VpcAccess/.repo-metadata.json
* api_shortname field missing from VpcAccess/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in WebRisk/.repo-metadata.json
* release_level must be equal to one of the allowed values in WebRisk/.repo-metadata.json
* api_shortname field missing from WebRisk/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in WebSecurityScanner/.repo-metadata.json
* release_level must be equal to one of the allowed values in WebSecurityScanner/.repo-metadata.json
* api_shortname field missing from WebSecurityScanner/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Workflows/.repo-metadata.json
* release_level must be equal to one of the allowed values in Workflows/.repo-metadata.json
* api_shortname field missing from Workflows/.repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions. | 1.0 | Your .repo-metadata.json files have a problem 🤒 - You have a problem with your .repo-metadata.json files:
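The three checks flagged throughout the scan can be sketched as a small validator. This is an illustration only, not the bot's actual implementation (the real linter is [repo-metadata-lint](https://github.com/googleapis/repo-automation-bots/tree/main/packages/repo-metadata-lint)); the allowed `release_level` values and the sample metadata values below are assumptions.

```python
import re

# Assumed allowed values for release_level; check the schema definition
# linked above for the authoritative list.
ALLOWED_RELEASE_LEVELS = {"stable", "preview"}

def lint_repo_metadata(metadata: dict) -> list:
    """Return error strings in the same style the scan reports."""
    errors = []
    # client_documentation must be an https:// URL.
    doc = metadata.get("client_documentation", "")
    if not re.match(r"^https://.*", doc):
        errors.append('client_documentation must match pattern "^https://.*"')
    # release_level must be one of the allowed values.
    if metadata.get("release_level") not in ALLOWED_RELEASE_LEVELS:
        errors.append("release_level must be equal to one of the allowed values")
    # api_shortname is a required field.
    if "api_shortname" not in metadata:
        errors.append("api_shortname field missing")
    return errors

# A metadata blob fixed per the guidance above passes all three checks.
# These field values are hypothetical examples.
fixed = {
    "client_documentation": "https://cloud.google.com/php/docs/reference/example/latest",
    "release_level": "stable",
    "api_shortname": "example",
}
print(lint_repo_metadata(fixed))  # []
print(len(lint_repo_metadata({})))  # 3
```

An empty metadata dict trips all three rules at once, which matches the pattern in the scan output where most packages report the same trio of errors.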
Result of scan 📈:
* client_documentation must match pattern "^https://.*" in AccessApproval/.repo-metadata.json
* release_level must be equal to one of the allowed values in AccessApproval/.repo-metadata.json
* api_shortname field missing from AccessApproval/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in AccessContextManager/.repo-metadata.json
* release_level must be equal to one of the allowed values in AccessContextManager/.repo-metadata.json
* api_shortname field missing from AccessContextManager/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in AiPlatform/.repo-metadata.json
* release_level must be equal to one of the allowed values in AiPlatform/.repo-metadata.json
* api_shortname field missing from AiPlatform/.repo-metadata.json
* release_level must be equal to one of the allowed values in AnalyticsAdmin/.repo-metadata.json
* api_shortname field missing from AnalyticsAdmin/.repo-metadata.json
* release_level must be equal to one of the allowed values in AnalyticsData/.repo-metadata.json
* api_shortname field missing from AnalyticsData/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ApiGateway/.repo-metadata.json
* release_level must be equal to one of the allowed values in ApiGateway/.repo-metadata.json
* api_shortname field missing from ApiGateway/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ApigeeConnect/.repo-metadata.json
* release_level must be equal to one of the allowed values in ApigeeConnect/.repo-metadata.json
* api_shortname field missing from ApigeeConnect/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in AppEngineAdmin/.repo-metadata.json
* release_level must be equal to one of the allowed values in AppEngineAdmin/.repo-metadata.json
* api_shortname field missing from AppEngineAdmin/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ArtifactRegistry/.repo-metadata.json
* release_level must be equal to one of the allowed values in ArtifactRegistry/.repo-metadata.json
* api_shortname field missing from ArtifactRegistry/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Asset/.repo-metadata.json
* release_level must be equal to one of the allowed values in Asset/.repo-metadata.json
* api_shortname field missing from Asset/.repo-metadata.json
* release_level must be equal to one of the allowed values in AssuredWorkloads/.repo-metadata.json
* api_shortname field missing from AssuredWorkloads/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in AutoMl/.repo-metadata.json
* release_level must be equal to one of the allowed values in AutoMl/.repo-metadata.json
* api_shortname field missing from AutoMl/.repo-metadata.json
* release_level must be equal to one of the allowed values in BigQuery/.repo-metadata.json
* api_shortname field missing from BigQuery/.repo-metadata.json
* release_level must be equal to one of the allowed values in BigQueryConnection/.repo-metadata.json
* api_shortname field missing from BigQueryConnection/.repo-metadata.json
* release_level must be equal to one of the allowed values in BigQueryReservation/.repo-metadata.json
* api_shortname field missing from BigQueryReservation/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in BigQueryStorage/.repo-metadata.json
* release_level must be equal to one of the allowed values in BigQueryStorage/.repo-metadata.json
* api_shortname field missing from BigQueryStorage/.repo-metadata.json
* release_level must be equal to one of the allowed values in Bigtable/.repo-metadata.json
* api_shortname field missing from Bigtable/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in BillingBudgets/.repo-metadata.json
* release_level must be equal to one of the allowed values in BillingBudgets/.repo-metadata.json
* api_shortname field missing from BillingBudgets/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in BinaryAuthorization/.repo-metadata.json
* release_level must be equal to one of the allowed values in BinaryAuthorization/.repo-metadata.json
* api_shortname field missing from BinaryAuthorization/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Build/.repo-metadata.json
* release_level must be equal to one of the allowed values in Build/.repo-metadata.json
* api_shortname field missing from Build/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in CertificateManager/.repo-metadata.json
* release_level must be equal to one of the allowed values in CertificateManager/.repo-metadata.json
* api_shortname field missing from CertificateManager/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Channel/.repo-metadata.json
* release_level must be equal to one of the allowed values in Channel/.repo-metadata.json
* api_shortname field missing from Channel/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in CommonProtos/.repo-metadata.json
* release_level must be equal to one of the allowed values in CommonProtos/.repo-metadata.json
* release_level must be equal to one of the allowed values in Compute/.repo-metadata.json
* api_shortname field missing from Compute/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ContactCenterInsights/.repo-metadata.json
* release_level must be equal to one of the allowed values in ContactCenterInsights/.repo-metadata.json
* api_shortname field missing from ContactCenterInsights/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ContainerAnalysis/.repo-metadata.json
* release_level must be equal to one of the allowed values in ContainerAnalysis/.repo-metadata.json
* api_shortname field missing from ContainerAnalysis/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Core/.repo-metadata.json
* release_level must be equal to one of the allowed values in Core/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in DataCatalog/.repo-metadata.json
* release_level must be equal to one of the allowed values in DataCatalog/.repo-metadata.json
* api_shortname field missing from DataCatalog/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in DataFusion/.repo-metadata.json
* release_level must be equal to one of the allowed values in DataFusion/.repo-metadata.json
* api_shortname field missing from DataFusion/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in DataLabeling/.repo-metadata.json
* release_level must be equal to one of the allowed values in DataLabeling/.repo-metadata.json
* api_shortname field missing from DataLabeling/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Dataflow/.repo-metadata.json
* release_level must be equal to one of the allowed values in Dataflow/.repo-metadata.json
* api_shortname field missing from Dataflow/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Dataplex/.repo-metadata.json
* release_level must be equal to one of the allowed values in Dataplex/.repo-metadata.json
* api_shortname field missing from Dataplex/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Dataproc/.repo-metadata.json
* release_level must be equal to one of the allowed values in Dataproc/.repo-metadata.json
* api_shortname field missing from Dataproc/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in DataprocMetastore/.repo-metadata.json
* release_level must be equal to one of the allowed values in DataprocMetastore/.repo-metadata.json
* api_shortname field missing from DataprocMetastore/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Datastore/.repo-metadata.json
* release_level must be equal to one of the allowed values in Datastore/.repo-metadata.json
* api_shortname field missing from Datastore/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in DatastoreAdmin/.repo-metadata.json
* release_level must be equal to one of the allowed values in DatastoreAdmin/.repo-metadata.json
* api_shortname field missing from DatastoreAdmin/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Datastream/.repo-metadata.json
* release_level must be equal to one of the allowed values in Datastream/.repo-metadata.json
* api_shortname field missing from Datastream/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Debugger/.repo-metadata.json
* release_level must be equal to one of the allowed values in Debugger/.repo-metadata.json
* api_shortname field missing from Debugger/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Deploy/.repo-metadata.json
* release_level must be equal to one of the allowed values in Deploy/.repo-metadata.json
* api_shortname field missing from Deploy/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Dialogflow/.repo-metadata.json
* release_level must be equal to one of the allowed values in Dialogflow/.repo-metadata.json
* api_shortname field missing from Dialogflow/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Dms/.repo-metadata.json
* release_level must be equal to one of the allowed values in Dms/.repo-metadata.json
* api_shortname field missing from Dms/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Domains/.repo-metadata.json
* release_level must be equal to one of the allowed values in Domains/.repo-metadata.json
* api_shortname field missing from Domains/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ErrorReporting/.repo-metadata.json
* release_level must be equal to one of the allowed values in ErrorReporting/.repo-metadata.json
* api_shortname field missing from ErrorReporting/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in EssentialContacts/.repo-metadata.json
* release_level must be equal to one of the allowed values in EssentialContacts/.repo-metadata.json
* api_shortname field missing from EssentialContacts/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Eventarc/.repo-metadata.json
* release_level must be equal to one of the allowed values in Eventarc/.repo-metadata.json
* api_shortname field missing from Eventarc/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in EventarcPublishing/.repo-metadata.json
* release_level must be equal to one of the allowed values in EventarcPublishing/.repo-metadata.json
* api_shortname field missing from EventarcPublishing/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Filestore/.repo-metadata.json
* release_level must be equal to one of the allowed values in Filestore/.repo-metadata.json
* api_shortname field missing from Filestore/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Firestore/.repo-metadata.json
* release_level must be equal to one of the allowed values in Firestore/.repo-metadata.json
* api_shortname field missing from Firestore/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Functions/.repo-metadata.json
* release_level must be equal to one of the allowed values in Functions/.repo-metadata.json
* api_shortname field missing from Functions/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Gaming/.repo-metadata.json
* release_level must be equal to one of the allowed values in Gaming/.repo-metadata.json
* api_shortname field missing from Gaming/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in GkeConnectGateway/.repo-metadata.json
* release_level must be equal to one of the allowed values in GkeConnectGateway/.repo-metadata.json
* api_shortname field missing from GkeConnectGateway/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in GkeHub/.repo-metadata.json
* release_level must be equal to one of the allowed values in GkeHub/.repo-metadata.json
* api_shortname field missing from GkeHub/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Grafeas/.repo-metadata.json
* release_level must be equal to one of the allowed values in Grafeas/.repo-metadata.json
* api_shortname field missing from Grafeas/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in IamCredentials/.repo-metadata.json
* release_level must be equal to one of the allowed values in IamCredentials/.repo-metadata.json
* api_shortname field missing from IamCredentials/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Iap/.repo-metadata.json
* release_level must be equal to one of the allowed values in Iap/.repo-metadata.json
* api_shortname field missing from Iap/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Ids/.repo-metadata.json
* release_level must be equal to one of the allowed values in Ids/.repo-metadata.json
* api_shortname field missing from Ids/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Iot/.repo-metadata.json
* release_level must be equal to one of the allowed values in Iot/.repo-metadata.json
* api_shortname field missing from Iot/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Language/.repo-metadata.json
* release_level must be equal to one of the allowed values in Language/.repo-metadata.json
* api_shortname field missing from Language/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Logging/.repo-metadata.json
* release_level must be equal to one of the allowed values in Logging/.repo-metadata.json
* api_shortname field missing from Logging/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ManagedIdentities/.repo-metadata.json
* release_level must be equal to one of the allowed values in ManagedIdentities/.repo-metadata.json
* api_shortname field missing from ManagedIdentities/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in MediaTranslation/.repo-metadata.json
* release_level must be equal to one of the allowed values in MediaTranslation/.repo-metadata.json
* api_shortname field missing from MediaTranslation/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Monitoring/.repo-metadata.json
* release_level must be equal to one of the allowed values in Monitoring/.repo-metadata.json
* api_shortname field missing from Monitoring/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in NetworkConnectivity/.repo-metadata.json
* release_level must be equal to one of the allowed values in NetworkConnectivity/.repo-metadata.json
* api_shortname field missing from NetworkConnectivity/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in NetworkManagement/.repo-metadata.json
* release_level must be equal to one of the allowed values in NetworkManagement/.repo-metadata.json
* api_shortname field missing from NetworkManagement/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in NetworkSecurity/.repo-metadata.json
* release_level must be equal to one of the allowed values in NetworkSecurity/.repo-metadata.json
* api_shortname field missing from NetworkSecurity/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Optimization/.repo-metadata.json
* release_level must be equal to one of the allowed values in Optimization/.repo-metadata.json
* api_shortname field missing from Optimization/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in OrchestrationAirflow/.repo-metadata.json
* release_level must be equal to one of the allowed values in OrchestrationAirflow/.repo-metadata.json
* api_shortname field missing from OrchestrationAirflow/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in OrgPolicy/.repo-metadata.json
* release_level must be equal to one of the allowed values in OrgPolicy/.repo-metadata.json
* api_shortname field missing from OrgPolicy/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in OsConfig/.repo-metadata.json
* release_level must be equal to one of the allowed values in OsConfig/.repo-metadata.json
* api_shortname field missing from OsConfig/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in OsLogin/.repo-metadata.json
* release_level must be equal to one of the allowed values in OsLogin/.repo-metadata.json
* api_shortname field missing from OsLogin/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in PolicyTroubleshooter/.repo-metadata.json
* release_level must be equal to one of the allowed values in PolicyTroubleshooter/.repo-metadata.json
* api_shortname field missing from PolicyTroubleshooter/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in PrivateCatalog/.repo-metadata.json
* release_level must be equal to one of the allowed values in PrivateCatalog/.repo-metadata.json
* api_shortname field missing from PrivateCatalog/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Profiler/.repo-metadata.json
* release_level must be equal to one of the allowed values in Profiler/.repo-metadata.json
* api_shortname field missing from Profiler/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in PubSub/.repo-metadata.json
* release_level must be equal to one of the allowed values in PubSub/.repo-metadata.json
* api_shortname field missing from PubSub/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in RecaptchaEnterprise/.repo-metadata.json
* release_level must be equal to one of the allowed values in RecaptchaEnterprise/.repo-metadata.json
* api_shortname field missing from RecaptchaEnterprise/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in RecommendationEngine/.repo-metadata.json
* release_level must be equal to one of the allowed values in RecommendationEngine/.repo-metadata.json
* api_shortname field missing from RecommendationEngine/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Recommender/.repo-metadata.json
* release_level must be equal to one of the allowed values in Recommender/.repo-metadata.json
* api_shortname field missing from Recommender/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Redis/.repo-metadata.json
* release_level must be equal to one of the allowed values in Redis/.repo-metadata.json
* api_shortname field missing from Redis/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ResourceManager/.repo-metadata.json
* release_level must be equal to one of the allowed values in ResourceManager/.repo-metadata.json
* api_shortname field missing from ResourceManager/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ResourceSettings/.repo-metadata.json
* release_level must be equal to one of the allowed values in ResourceSettings/.repo-metadata.json
* api_shortname field missing from ResourceSettings/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Retail/.repo-metadata.json
* release_level must be equal to one of the allowed values in Retail/.repo-metadata.json
* api_shortname field missing from Retail/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Scheduler/.repo-metadata.json
* release_level must be equal to one of the allowed values in Scheduler/.repo-metadata.json
* api_shortname field missing from Scheduler/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in SecretManager/.repo-metadata.json
* release_level must be equal to one of the allowed values in SecretManager/.repo-metadata.json
* api_shortname field missing from SecretManager/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in SecurityPrivateCa/.repo-metadata.json
* release_level must be equal to one of the allowed values in SecurityPrivateCa/.repo-metadata.json
* api_shortname field missing from SecurityPrivateCa/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ServiceControl/.repo-metadata.json
* release_level must be equal to one of the allowed values in ServiceControl/.repo-metadata.json
* api_shortname field missing from ServiceControl/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ServiceDirectory/.repo-metadata.json
* release_level must be equal to one of the allowed values in ServiceDirectory/.repo-metadata.json
* api_shortname field missing from ServiceDirectory/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ServiceManagement/.repo-metadata.json
* release_level must be equal to one of the allowed values in ServiceManagement/.repo-metadata.json
* api_shortname field missing from ServiceManagement/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in ServiceUsage/.repo-metadata.json
* release_level must be equal to one of the allowed values in ServiceUsage/.repo-metadata.json
* api_shortname field missing from ServiceUsage/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Spanner/.repo-metadata.json
* release_level must be equal to one of the allowed values in Spanner/.repo-metadata.json
* api_shortname field missing from Spanner/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Speech/.repo-metadata.json
* release_level must be equal to one of the allowed values in Speech/.repo-metadata.json
* api_shortname field missing from Speech/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in SqlAdmin/.repo-metadata.json
* release_level must be equal to one of the allowed values in SqlAdmin/.repo-metadata.json
* api_shortname field missing from SqlAdmin/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Storage/.repo-metadata.json
* release_level must be equal to one of the allowed values in Storage/.repo-metadata.json
* api_shortname field missing from Storage/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in StorageTransfer/.repo-metadata.json
* release_level must be equal to one of the allowed values in StorageTransfer/.repo-metadata.json
* api_shortname field missing from StorageTransfer/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Talent/.repo-metadata.json
* release_level must be equal to one of the allowed values in Talent/.repo-metadata.json
* api_shortname field missing from Talent/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Tasks/.repo-metadata.json
* release_level must be equal to one of the allowed values in Tasks/.repo-metadata.json
* api_shortname field missing from Tasks/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in TextToSpeech/.repo-metadata.json
* release_level must be equal to one of the allowed values in TextToSpeech/.repo-metadata.json
* api_shortname field missing from TextToSpeech/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Tpu/.repo-metadata.json
* release_level must be equal to one of the allowed values in Tpu/.repo-metadata.json
* api_shortname field missing from Tpu/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Trace/.repo-metadata.json
* release_level must be equal to one of the allowed values in Trace/.repo-metadata.json
* api_shortname field missing from Trace/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Translate/.repo-metadata.json
* release_level must be equal to one of the allowed values in Translate/.repo-metadata.json
* api_shortname field missing from Translate/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in VideoIntelligence/.repo-metadata.json
* release_level must be equal to one of the allowed values in VideoIntelligence/.repo-metadata.json
* api_shortname field missing from VideoIntelligence/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in VideoLiveStream/.repo-metadata.json
* release_level must be equal to one of the allowed values in VideoLiveStream/.repo-metadata.json
* api_shortname field missing from VideoLiveStream/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in VideoStitcher/.repo-metadata.json
* release_level must be equal to one of the allowed values in VideoStitcher/.repo-metadata.json
* api_shortname field missing from VideoStitcher/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in VmMigration/.repo-metadata.json
* release_level must be equal to one of the allowed values in VmMigration/.repo-metadata.json
* api_shortname field missing from VmMigration/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in VpcAccess/.repo-metadata.json
* release_level must be equal to one of the allowed values in VpcAccess/.repo-metadata.json
* api_shortname field missing from VpcAccess/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in WebRisk/.repo-metadata.json
* release_level must be equal to one of the allowed values in WebRisk/.repo-metadata.json
* api_shortname field missing from WebRisk/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in WebSecurityScanner/.repo-metadata.json
* release_level must be equal to one of the allowed values in WebSecurityScanner/.repo-metadata.json
* api_shortname field missing from WebSecurityScanner/.repo-metadata.json
* client_documentation must match pattern "^https://.*" in Workflows/.repo-metadata.json
* release_level must be equal to one of the allowed values in Workflows/.repo-metadata.json
* api_shortname field missing from Workflows/.repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions.
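
For reference, a minimal `.repo-metadata.json` fragment that satisfies the three failing checks above might look like the sketch below. The field values are illustrative only; in particular, the allowed `release_level` values (`stable` / `preview` are assumed here) and the expected `api_shortname` come from the schema definition and API index linked above, so check those before copying.

```json
{
    "client_documentation": "https://cloud.google.com/php/docs/reference/cloud-speech/latest",
    "release_level": "stable",
    "api_shortname": "speech"
}
```

Each of the three error types maps to one field: `client_documentation` must start with `https://`, `release_level` must be one of the schema's enumerated values, and `api_shortname` must be present (for gRPC libraries, matching the subdomain of the API's `hostName`).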
be equal to one of the allowed values in workflows repo metadata json api shortname field missing from workflows repo metadata json ☝️ once you address these problems you can close this issue need help lists valid options for each field for grpc libraries api shortname should match the subdomain of an api s hostname reach out to go github automation if you have any questions | 1 |
21,818 | 30,316,653,495 | IssuesEvent | 2023-07-10 16:00:57 | tdwg/dwc | https://api.github.com/repos/tdwg/dwc | closed | Change term - coordinateUncertaintyInMeters | Term - change Class - Location non-normative Process - complete | ## Term change
* Submitter: John Wieczorek
* Efficacy Justification (why is this change necessary?): Outright error, the date Selective Availability was turned off is wrong.
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): Truth and Justice!
* Stability Justification (what concerns are there that this might affect existing implementations?): None
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: None
Current Term definition: https://dwc.tdwg.org/terms/#dwc:coordinateUncertaintyInMeters
Proposed attributes of the new term version (Please put actual changes to be implemented in **bold** and ~strikethrough~):
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): coordinateUncertaintyInMeters
* Organized in Class (e.g., Occurrence, Event, Location, Taxon): Location
* Definition of the term (normative): "The horizontal distance (in meters) from the given decimalLatitude and decimalLongitude describing the smallest circle containing the whole of the Location. Leave the value empty if the uncertainty is unknown, cannot be estimated, or is not applicable (because there are no coordinates). Zero is not a valid value for this term."
* Usage comments (recommendations regarding content, etc., not normative): ""
* Examples (not normative): "`30` (reasonable lower limit on or after 20~2~**0**0-05-01 of a GPS reading under good conditions if the actual precision was not recorded at the time). 100 (reasonable lower limit before 20~2~**0**0-05-01 of a GPS reading under good conditions if the actual precision was not recorded at the time). 71 (uncertainty for a UTM coordinate having 100 meter precision and a known spatial reference system)."
* Refines (identifier of the broader term this term refines; normative): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): http://rs.tdwg.org/dwc/terms/version/coordinateUncertaintyInMeters-2021-07-15
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): DataSets/DataSet/Units/Unit/Gathering/SiteCoordinateSets/SiteCoordinates/CoordinatesLatLon/CoordinateErrorDistanceInMeters
| 1.0 | Change term - coordinateUncertaintyInMeters - ## Term change
* Submitter: John Wieczorek
* Efficacy Justification (why is this change necessary?): Outright error, the date Selective Availability was turned off is wrong.
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): Truth and Justice!
* Stability Justification (what concerns are there that this might affect existing implementations?): None
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: None
Current Term definition: https://dwc.tdwg.org/terms/#dwc:coordinateUncertaintyInMeters
Proposed attributes of the new term version (Please put actual changes to be implemented in **bold** and ~strikethrough~):
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): coordinateUncertaintyInMeters
* Organized in Class (e.g., Occurrence, Event, Location, Taxon): Location
* Definition of the term (normative): "The horizontal distance (in meters) from the given decimalLatitude and decimalLongitude describing the smallest circle containing the whole of the Location. Leave the value empty if the uncertainty is unknown, cannot be estimated, or is not applicable (because there are no coordinates). Zero is not a valid value for this term."
* Usage comments (recommendations regarding content, etc., not normative): ""
* Examples (not normative): "`30` (reasonable lower limit on or after 20~2~**0**0-05-01 of a GPS reading under good conditions if the actual precision was not recorded at the time). 100 (reasonable lower limit before 20~2~**0**0-05-01 of a GPS reading under good conditions if the actual precision was not recorded at the time). 71 (uncertainty for a UTM coordinate having 100 meter precision and a known spatial reference system)."
* Refines (identifier of the broader term this term refines; normative): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): http://rs.tdwg.org/dwc/terms/version/coordinateUncertaintyInMeters-2021-07-15
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): DataSets/DataSet/Units/Unit/Gathering/SiteCoordinateSets/SiteCoordinates/CoordinatesLatLon/CoordinateErrorDistanceInMeters
| process | change term coordinateuncertaintyinmeters term change submitter john wieczorek efficacy justification why is this change necessary outright error the date selective availability was turned off is wrong demand justification if the change is semantic in nature name at least two organizations that independently need this term truth and justice stability justification what concerns are there that this might affect existing implementations none implications for dwciri namespace does this change affect a dwciri term version none current term definition proposed attributes of the new term version please put actual changes to be implemented in bold and strikethrough term name in lowercamelcase for properties uppercamelcase for classes coordinateuncertaintyinmeters organized in class e g occurrence event location taxon location definition of the term normative the horizontal distance in meters from the given decimallatitude and decimallongitude describing the smallest circle containing the whole of the location leave the value empty if the uncertainty is unknown cannot be estimated or is not applicable because there are no coordinates zero is not a valid value for this term usage comments recommendations regarding content etc not normative examples not normative reasonable lower limit on or after of a gps reading under good conditions if the actual precision was not recorded at the time reasonable lower limit before of a gps reading under good conditions if the actual precision was not recorded at the time uncertainty for a utm coordinate having meter precision and a known spatial reference system refines identifier of the broader term this term refines normative none replaces identifier of the existing term that would be deprecated and replaced by this term normative abcd xpath of the equivalent term in abcd or efg not normative datasets dataset units unit gathering sitecoordinatesets sitecoordinates coordinateslatlon coordinateerrordistanceinmeters | 1 |
17,718 | 23,619,080,787 | IssuesEvent | 2022-08-24 18:39:59 | Jigsaw-Code/outline-client | https://api.github.com/repos/Jigsaw-Code/outline-client | closed | Push current Linux client to S3 | os/linux release process | It seems the Linux client is not updated on S3, based on what I saw at https://github.com/Jigsaw-Code/outline-client/issues/1089.
We need to:
- [x] Push the current client to S3
- [x] Make sure the release process updates S3
/cc @jyyi1 @daniellacosse | 1.0 | Push current Linux client to S3 - It seems the Linux client is not updated on S3, based on what I saw at https://github.com/Jigsaw-Code/outline-client/issues/1089.
We need to:
- [x] Push the current client to S3
- [x] Make sure the release process updates S3
/cc @jyyi1 @daniellacosse | process | push current linux client to it seems the linux client is not updated on based on what i saw at we need to push the current client to make sure the release process updates cc daniellacosse | 1 |
27,045 | 4,272,490,772 | IssuesEvent | 2016-07-13 14:39:51 | NishantUpadhyay-BTC/BLISS-Issue-Tracking | https://api.github.com/repos/NishantUpadhyay-BTC/BLISS-Issue-Tracking | reopened | #1378 - Guest UI: Availability Table: Graphics Changes | Change Request Deployed to Test | From my recent email to Sherry, Amit, and Nishant.
First:
On the Availability Table - the first one with the red/green blocks - can we have additional rows that separate out the months? So, for instance, as I scroll through June and then get to July, the first row of July would be a different colored row that spans all rows (no cell divisions) with the word “July” centered. Possible? I’m thinking the row color might be the dark green that we’re using for buttons - or maybe the brown that we’re using in the table header row. Sherry - can you mock that up, please? We’ll also need to see how this looks in Mobile - when the table switches columns and rows. I’m thinking the month division would switch to a thin column division with the text orientation flipped sideways (i.e. turned counter-clockwise so read from the right)
Second:
Again in the Availability Table, our current solution for displaying Gender Only and Adult Only times is working OK (Capitol Letter corresponds to the special time type appearing in the Date Column), but some people found this non-intuitive or too subtle. We’re interested in exploring other ways we might indicate special times on the Availability Table. Here are some ideas: 1. Some kind of colored hash through all the boxes on that date- - 2. Have a phrase that spans across all rows on that date, e.g. “Adults Only", “Women Only”, “Men Only”. Other ideas from any of you? Again, Sherry - could you put together some visuals? We’ll need mobile views, too of course.
Third:
Again on the Availability Table, We’re running into confusion (which we all knew would happen) when users try to book a reservation for time periods that look like a particular lodging is available, but it turns out that actually only shared lodgings were available for one or more of the days of their reservation. We’ve programmed responses for some of these cases, but we’re coming up with more use cases that would be too cumbersome to work out. Our strong feeling here is that we need a way for the Availability Table to display differently when only shared lodgings are available for a particular type of lodging on a particular day. After much debate and brainstorming, the idea we’ve settled on is to use a third color: Yellow - which we’d like to present for these cases. This would also need to be added to the color key above. Again, Sherry - can you pick a yellow (or something like yellow) that you think looks good and mock this up.
And, now I think of it, Sherry - it would probably save you time and effort to just mock up one graphic of the Availability Table in desktop/mobile/iPad views that shows all the changes and options discussed above.

| 1.0 | #1378 - Guest UI: Availability Table: Graphics Changes - From my recent email to Sherry, Amit, and Nishant.
First:
On the Availability Table - the first one with the red/green blocks - can we have additional rows that separate out the months? So, for instance, as I scroll through June and then get to July, the first row of July would be a different colored row that spans all rows (no cell divisions) with the word “July” centered. Possible? I’m thinking the row color might be the dark green that we’re using for buttons - or maybe the brown that we’re using in the table header row. Sherry - can you mock that up, please? We’ll also need to see how this looks in Mobile - when the table switches columns and rows. I’m thinking the month division would switch to a thin column division with the text orientation flipped sideways (i.e. turned counter-clockwise so read from the right)
Second:
Again in the Availability Table, our current solution for displaying Gender Only and Adult Only times is working OK (Capitol Letter corresponds to the special time type appearing in the Date Column), but some people found this non-intuitive or too subtle. We’re interested in exploring other ways we might indicate special times on the Availability Table. Here are some ideas: 1. Some kind of colored hash through all the boxes on that date- - 2. Have a phrase that spans across all rows on that date, e.g. “Adults Only", “Women Only”, “Men Only”. Other ideas from any of you? Again, Sherry - could you put together some visuals? We’ll need mobile views, too of course.
Third:
Again on the Availability Table, We’re running into confusion (which we all knew would happen) when users try to book a reservation for time periods that look like a particular lodging is available, but it turns out that actually only shared lodgings were available for one or more of the days of their reservation. We’ve programmed responses for some of these cases, but we’re coming up with more use cases that would be too cumbersome to work out. Our strong feeling here is that we need a way for the Availability Table to display differently when only shared lodgings are available for a particular type of lodging on a particular day. After much debate and brainstorming, the idea we’ve settled on is to use a third color: Yellow - which we’d like to present for these cases. This would also need to be added to the color key above. Again, Sherry - can you pick a yellow (or something like yellow) that you think looks good and mock this up.
And, now I think of it, Sherry - it would probably save you time and effort to just mock up one graphic of the Availability Table in desktop/mobile/iPad views that shows all the changes and options discussed above.

| non_process | guest ui availability table graphics changes from my recent email to sherry amit and nishant first on the availability table the first one with the red green blocks can we have additional rows that separate out the months so for instance as i scroll through june and then get to july the first row of july would be a different colored row that spans all rows no cell divisions with the word “july” centered possible i’m thinking the row color might be the dark green that we’re using for buttons or maybe the brown that we’re using in the table header row sherry can you mock that up please we’ll also need to see how this looks in mobile when the table switches columns and rows i’m thinking the month division would switch to a thin column division with the text orientation flipped sideways i e turned counter clockwise so read from the right second again in the availability table our current solution for displaying gender only and adult only times is working ok capitol letter corresponds to the special time type appearing in the date column but some people found this non intuitive or too subtle we’re interested in exploring other ways we might indicate special times on the availability table here are some ideas some kind of colored hash through all the boxes on that date have a phrase that spans across all rows on that date e g “adults only “women only” “men only” other ideas from any of you again sherry could you put together some visuals we’ll need mobile views too of course third again on the availability table we’re running into confusion which we all knew would happen when users try to book a reservation for time periods that look like a particular lodging is available but it turns out that actually only shared lodgings were available for one or more of the days of their reservation we’ve programmed responses for some of these cases but we’re coming up with more use cases that would be too cumbersome to work out our strong feeling here is that we need 
a way for the availability table to display differently when only shared lodgings are available for a particular type of lodging on a particular day after much debate and brainstorming the idea we’ve settled on is to use a third color yellow which we’d like to present for these cases this would also need to be added to the color key above again sherry can you pick a yellow or something like yellow that you think looks good and mock this up and now i think of it sherry it would probably save you time and effort to just mock up one graphic of the availability table in desktop mobile ipad views that shows all the changes and options discussed above | 0 |
17,064 | 22,501,423,960 | IssuesEvent | 2022-06-23 12:11:45 | camunda/zeebe | https://api.github.com/repos/camunda/zeebe | closed | Print out start instructions in compact logger | team/process-automation | **Description**
Log information about start instructions in record logging.
Also add information about start instructions to record interface to unblock https://github.com/camunda/zeebe-process-test/issues/411 | 1.0 | Print out start instructions in compact logger - **Description**
Log information about start instructions in record logging.
Also add information about start instructions to record interface to unblock https://github.com/camunda/zeebe-process-test/issues/411 | process | print out start instructions in compact logger description log information about start instructions in record logging also add information about start instructions to record interface to unblock | 1 |
16,899 | 22,203,055,426 | IssuesEvent | 2022-06-07 12:54:45 | pycaret/pycaret | https://api.github.com/repos/pycaret/pycaret | closed | 3.0.0.rc2 - Hourly data missing imputation error on time_series module | time_series preprocessing impute | ### pycaret version checks
- [X] I have checked that this issue has not already been reported [here](https://github.com/pycaret/pycaret/issues).
- [X] I have confirmed this bug exists on the [latest version](https://github.com/pycaret/pycaret/releases) of pycaret.
- [ ] I have confirmed this bug exists on the develop branch of pycaret (pip install -U git+https://github.com/pycaret/pycaret.git@develop).
### Issue Description
I have missing values in the data. However, when I set 'numeric_imputation_target' in the setup step, it doesn't work.
### Reproducible Example
```python
s = setup(data=real_time_consumption, verbose=True, fh=3, session_id=42)
```
### Expected Behavior
.
### Actual Results
```python-traceback
ValueError:
Time Series modeling automation relies on running statistical tests, plots, etc.
Many of these can not be run when data has missing values.
Your target has 7 missing values and `numeric_imputation_target` is set to `None`.
Please enable imputation to proceed.
```
### Installed Versions
<details>
Replace this line with the output of the version code above.
</details>
| 1.0 | 3.0.0.rc2 - Hourly data missing imputation error on time_series module - ### pycaret version checks
- [X] I have checked that this issue has not already been reported [here](https://github.com/pycaret/pycaret/issues).
- [X] I have confirmed this bug exists on the [latest version](https://github.com/pycaret/pycaret/releases) of pycaret.
- [ ] I have confirmed this bug exists on the develop branch of pycaret (pip install -U git+https://github.com/pycaret/pycaret.git@develop).
### Issue Description
I have missing values in the data. However, when I set 'numeric_imputation_target' in the setup step, it doesn't work.
### Reproducible Example
```python
s = setup(data=real_time_consumption, verbose=True, fh=3, session_id=42)
```
### Expected Behavior
.
### Actual Results
```python-traceback
ValueError:
Time Series modeling automation relies on running statistical tests, plots, etc.
Many of these can not be run when data has missing values.
Your target has 7 missing values and `numeric_imputation_target` is set to `None`.
Please enable imputation to proceed.
```
### Installed Versions
<details>
Replace this line with the output of the version code above.
</details>
| process | hourly data missing imputation error on time series module pycaret version checks i have checked that this issue has not already been reported i have confirmed this bug exists on the of pycaret i have confirmed this bug exists on the develop branch of pycaret pip install u git issue description i havemissing values in data however when i set the numeric imputation target on setup part it doesnt work reproducible example python s setup data real time consumption verbose true fh session id expected behavior actual results python traceback valueerror time series modeling automation relies on running statistical tests plots etc many of these can not be run when data has missing values your target has missing values and numeric imputation target is set to none please enable imputation to proceed installed versions replace this line with the output of the version code above | 1 |
439,392 | 30,694,988,280 | IssuesEvent | 2023-07-26 17:51:50 | pwa-builder/PWABuilder | https://api.github.com/repos/pwa-builder/PWABuilder | opened | [DOCS] Docs about token flow and promotion requirements | documentation | Full documentation for new free Msoft dev promotion:
Checking if your PWA qualifies ➡️ getting your token ➡️ where to use that token.
Also going to experiment with an entry field to take you straight to the validation page. | 1.0 | [DOCS] Docs about token flow and promotion requirements - Full documentation for new free Msoft dev promotion:
Checking if your PWA qualifies ➡️ getting your token ➡️ where to use that token.
Also going to experiment with an entry field to take you straight to the validation page. | non_process | docs about token flow and promotion requirements full documentation for new free msoft dev promotion checking if your pwa qualifies ➡️ getting your token ➡️ where to use that token also going to experiment with an entry field to take you straight to the validation page | 0 |
32,613 | 7,552,531,868 | IssuesEvent | 2018-04-19 00:51:37 | dickschoeller/gedbrowser | https://api.github.com/repos/dickschoeller/gedbrowser | closed | Tests of API controllers, crud classes and helpers | code smell in progress | Right now the tests are:
* all go through the controllers
* don't check behaviors well
Fix to:
* Test the helpers directly
* Test the CRUDs directly
* Really check the results
* Don't check the JSON from the controllers
Coverage issues:
* ApiFamily
* SaveController
* ApiSource
* GedWriter
* ApiSubmitter | 1.0 | Tests of API controllers, crud classes and helpers - Right now the tests are:
* all through the controllers
* don't check behaviors well
Fix to:
* Test the helpers directly
* Test the CRUDs directly
* Really check the results
* Don't check the JSON from the controllers
Coverage issues:
* ApiFamily
* SaveController
* ApiSource
* GedWriter
* ApiSubmitter | non_process | tests of api controllers crud classes and helpers right now the tests are all through the controllers don t check behaviors well fix to test the helpers directly test the cruds directly really check the results don t check the json from the controllers coverage issues apifamily savecontroller apisource gedwriter apisubmitter | 0 |
40,627 | 10,075,724,597 | IssuesEvent | 2019-07-24 14:48:20 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | closed | jcache onheap structure at non default partition counts data missing | Module: ICache Team: Core Type: Critical Type: Defect | a simple test loading 4 structures (mapBak1HD, mapBak1, cacheBak1HD, cacheBak1)
with 10000 key values each
then testing the size of the structures to be 10000
http://jenkins.hazelcast.com/view/stable/job/stable-partition-data/1/console
http://54.147.27.51/~jenkins/workspace/stable-partition-data/3.12.1/2019_07_24-05_24_53/partition-count-data
at some non-default prime partition counts we see that the "cacheBak1" on-heap JCache structure
fails the test.
```
[jenkins@ip-10-72-134-107 partition-count-data]$ pwd
/disk1/jenkins/workspace/stable-partition-data/3.12.1/2019_07_24-05_24_53/partition-count-data
```
```
$ find . -name out.txt | xargs grep fail | grep Member1
./partition_count1049/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9996 != expected 10000
./partition_count389/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9992 != expected 10000
./partition_count887/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9992 != expected 10000
./partition_count1259/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9992 != expected 10000
./partition_count1163/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9997 != expected 10000
./partition_count677/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9995 != expected 10000
./partition_count953/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9996 != expected 10000
./partition_count1031/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9994 != expected 10000
./partition_count1279/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9995 != expected 10000
./partition_count1193/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9996 != expected 10000
./partition_count937/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9989 != expected 10000
./partition_count1217/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9992 != expected 10000
./partition_count733/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9998 != expected 10000
./partition_count859/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9995 != expected 10000
./partition_count1013/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9996 != expected 10000
./partition_count1021/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9991 != expected 10000
./partition_count647/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9998 != expected 10000
./partition_count641/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9993 != expected 10000
./partition_count1123/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9984 != expected 10000
./partition_count709/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9997 != expected 10000
./partition_count1297/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9994 != expected 10000
./partition_count463/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9996 != expected 10000
./partition_count431/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9997 != expected 10000
./partition_count1319/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9995 != expected 10000
./partition_count631/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9996 != expected 10000
./partition_count863/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9981 != expected 10000
./partition_count811/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9996 != expected 10000
./partition_count1213/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9991 != expected 10000
./partition_count683/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9995 != expected 10000
./partition_count613/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9992 != expected 10000
./partition_count1063/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9985 != expected 10000
./partition_count911/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9990 != expected 10000
./partition_count727/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9997 != expected 10000
./partition_count643/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9994 != expected 10000
./partition_count701/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9988 != expected 10000
./partition_count947/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9986 != expected 10000
./partition_count1171/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9990 != expected 10000
./partition_count283/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9997 != expected 10000
./partition_count757/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9995 != expected 10000
./partition_count857/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9988 != expected 10000
./partition_count653/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9990 != expected 10000
./partition_count1301/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9994 != expected 10000
./partition_count331/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9996 != expected 10000
./partition_count823/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9999 != expected 10000
./partition_count919/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9991 != expected 10000
./partition_count373/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9994 != expected 10000
./partition_count587/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9994 != expected 10000
./partition_count1103/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9996 != expected 10000
./partition_count1231/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9993 != expected 10000
./partition_count421/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9995 != expected 10000
./partition_count1117/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9989 != expected 10000
./partition_count467/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9998 != expected 10000
./partition_count787/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9995 != expected 10000
./partition_count1277/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9995 != expected 10000
./partition_count739/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9999 != expected 10000
./partition_count359/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9999 != expected 10000
./partition_count853/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9983 != expected 10000
./partition_count401/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9998 != expected 10000
./partition_count991/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9997 != expected 10000
./partition_count929/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9992 != expected 10000
./partition_count883/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9985 != expected 10000
./partition_count1051/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9995 != expected 10000
./partition_count1009/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9998 != expected 10000
./partition_count449/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9999 != expected 10000
./partition_count967/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9986 != expected 10000
./partition_count599/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9995 != expected 10000
./partition_count443/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9988 != expected 10000
./partition_count1069/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9995 != expected 10000
./partition_count1223/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9989 != expected 10000
./partition_count829/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9999 != expected 10000
./partition_count1151/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9991 != expected 10000
./partition_count1129/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9995 != expected 10000
./partition_count487/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9997 != expected 10000
./partition_count1181/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9998 != expected 10000
./partition_count1153/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9981 != expected 10000
./partition_count419/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9998 != expected 10000
./partition_count379/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9998 != expected 10000
./partition_count907/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9996 != expected 10000
./partition_count281/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9999 != expected 10000
./partition_count1321/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9993 != expected 10000
./partition_count1091/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9991 != expected 10000
./partition_count971/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9998 != expected 10000
./partition_count659/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9992 != expected 10000
./partition_count293/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9999 != expected 10000
./partition_count541/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9999 != expected 10000
./partition_count479/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9998 != expected 10000
./partition_count1033/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9999 != expected 10000
./partition_count619/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9996 != expected 10000
./partition_count773/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9994 != expected 10000
./partition_count797/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9995 != expected 10000
./partition_count743/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9998 != expected 10000
./partition_count673/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9999 != expected 10000
./partition_count661/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9997 != expected 10000
./partition_count691/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9997 != expected 10000
./partition_count1097/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9996 != expected 10000
./partition_count1291/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9998 != expected 10000
./partition_count1249/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9985 != expected 10000
./partition_count491/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9998 != expected 10000
./partition_count1019/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9997 != expected 10000
./partition_count1187/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9993 != expected 10000
./partition_count367/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9995 != expected 10000
./partition_count1061/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9994 != expected 10000
./partition_count877/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9994 != expected 10000
./partition_count1237/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9993 != expected 10000
./partition_count457/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9995 != expected 10000
./partition_count601/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9995 != expected 10000
./partition_count503/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9997 != expected 10000
./partition_count593/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9999 != expected 10000
./partition_count1201/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9991 != expected 10000
./partition_count1087/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9984 != expected 10000
./partition_count1289/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9997 != expected 10000
./partition_count577/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9992 != expected 10000
./partition_count1307/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9994 != expected 10000
./partition_count827/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9998 != expected 10000
./partition_count523/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9999 != expected 10000
./partition_count1283/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9996 != expected 10000
./partition_count761/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9996 != expected 10000
./partition_count1303/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9985 != expected 10000
./partition_count1109/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9993 != expected 10000
./partition_count821/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9998 != expected 10000
./partition_count1229/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9989 != expected 10000
./partition_count353/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9994 != expected 10000
./partition_count997/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9987 != expected 10000
./partition_count547/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9989 != expected 10000
./partition_count977/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9994 != expected 10000
./partition_count1039/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9996 != expected 10000
./partition_count607/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9997 != expected 10000
./partition_count1093/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9987 != expected 10000
./partition_count439/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9996 != expected 10000
./partition_count769/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9991 != expected 10000
./partition_count809/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9993 != expected 10000
./partition_count941/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9992 != expected 10000
./partition_count571/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9995 != expected 10000
./partition_count569/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9998 != expected 10000
```
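For reference, a non-default partition count such as the failing primes above can be set on the member side via the standard `hazelcast.partition.count` property; a minimal declarative sketch (the value 1013 is just one of the failing counts from the list, not special):

```xml
<hazelcast xmlns="http://www.hazelcast.com/schema/config">
    <properties>
        <!-- default is 271; the size mismatches above occur at various non-default primes -->
        <property name="hazelcast.partition.count">1013</property>
    </properties>
</hazelcast>
```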
```
./partition_count1249/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9985 != expected 10000
./partition_count491/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9998 != expected 10000
./partition_count1019/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9997 != expected 10000
./partition_count1187/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9993 != expected 10000
./partition_count367/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9995 != expected 10000
./partition_count1061/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9994 != expected 10000
./partition_count877/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9994 != expected 10000
./partition_count1237/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9993 != expected 10000
./partition_count457/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9995 != expected 10000
./partition_count601/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9995 != expected 10000
./partition_count503/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9997 != expected 10000
./partition_count593/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9999 != expected 10000
./partition_count1201/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9991 != expected 10000
./partition_count1087/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9984 != expected 10000
./partition_count1289/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9997 != expected 10000
./partition_count577/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9992 != expected 10000
./partition_count1307/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9994 != expected 10000
./partition_count827/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9998 != expected 10000
./partition_count523/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9999 != expected 10000
./partition_count1283/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9996 != expected 10000
./partition_count761/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9996 != expected 10000
./partition_count1303/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9985 != expected 10000
./partition_count1109/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9993 != expected 10000
./partition_count821/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9998 != expected 10000
./partition_count1229/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9989 != expected 10000
./partition_count353/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9994 != expected 10000
./partition_count997/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9987 != expected 10000
./partition_count547/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9989 != expected 10000
./partition_count977/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9994 != expected 10000
./partition_count1039/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9996 != expected 10000
./partition_count607/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9997 != expected 10000
./partition_count1093/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9987 != expected 10000
./partition_count439/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9996 != expected 10000
./partition_count769/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9991 != expected 10000
./partition_count809/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9993 != expected 10000
./partition_count941/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9992 != expected 10000
./partition_count571/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9995 != expected 10000
./partition_count569/out.txt:fail HzMember1HZ size_cacheBak1 hzcmd.cache.validate.Size threadId=0 global.AssertionException: cacheBak1 size 9998 != expected 10000
``` | non_process | jcache onheap structure at non default partition counts data missing a simple test loading structures with key values each then testing the size of the structures to be at some non default prime partition counts we see on heap jcach structure fails the test pwd jenkins workspace stable partition data partition count data find name out txt xargs grep fail grep partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global 
assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition 
out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate 
size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception 
size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size 
hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global 
assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition 
out txt fail size hzcmd cache validate size threadid global assertionexception size expected partition out txt fail size hzcmd cache validate size threadid global assertionexception size expected | 0 |
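Every failing line above is the same check with a different observed size; as a rough model (plain Python with a dict standing in for the cache — `validate_size` is an illustrative name, not the actual hzcmd code):

```python
def validate_size(cache, expected):
    # Mirrors the reported failure text: "size <actual> != expected <expected>"
    actual = len(cache)
    assert actual == expected, f"size {actual} != expected {expected}"

# Load 10,000 key/value pairs, then validate, as the test description says.
cache = {key: key for key in range(10_000)}
validate_size(cache, 10_000)
```

The reports show the backup copy coming up a few entries short (e.g. 9991 of 10000), which is exactly the condition that makes this assertion fire.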
69,257 | 3,296,408,608 | IssuesEvent | 2015-11-01 20:41:23 | dwyl/dwyl.github.io | https://api.github.com/repos/dwyl/dwyl.github.io | closed | Incorrect CSS overriding styles for the rest of the page | bug priority-2 | Incorrect CSS here:
<img width="385" alt="screen shot 2015-10-29 at 23 08 01" src="https://cloud.githubusercontent.com/assets/4185328/10834528/f9c1b7d6-7e91-11e5-9450-be2b4c949c48.png">
Is causing all of the paragraph text on the page to be grey (`color: #616161;`). | 1.0 | Incorrect CSS overriding styles for the rest of the page - Incorrect CSS here:
<img width="385" alt="screen shot 2015-10-29 at 23 08 01" src="https://cloud.githubusercontent.com/assets/4185328/10834528/f9c1b7d6-7e91-11e5-9450-be2b4c949c48.png">
Is causing all of the paragraph text on the page to be grey (`color: #616161;`). | non_process | incorrect css overriding styles for the rest of the page incorrect css here img width alt screen shot at src is causing all of the paragraph text on the page to be grey color | 0 |
873 | 3,332,222,412 | IssuesEvent | 2015-11-11 19:11:48 | pwittchen/ReactiveBeacons | https://api.github.com/repos/pwittchen/ReactiveBeacons | closed | Release 0.3.2 | release process | **Initial release notes**:
- bug fix: wrapped `BluetoothManager` inside `isBleSupported()` to avoid `NoClassDefFound` error occurring while instantiating `ReactiveBeacons` object on devices running API < 18 - fixed in PR #30.
**Things to do**:
- [x] bump library version :point_right: PR #32
- [x] upload archives to Maven Central
- [x] close and release artifact on Maven Central
- [x] update `CHANGELOG.md` after Maven Sync
- [x] bump library version in `README.md`
- [x] create new GitHub release | 1.0 | Release 0.3.2 - **Initial release notes**:
- bug fix: wrapped `BluetoothManager` inside `isBleSupported()` to avoid `NoClassDefFound` error occurring while instantiating `ReactiveBeacons` object on devices running API < 18 - fixed in PR #30.
**Things to do**:
- [x] bump library version :point_right: PR #32
- [x] upload archives to Maven Central
- [x] close and release artifact on Maven Central
- [x] update `CHANGELOG.md` after Maven Sync
- [x] bump library version in `README.md`
- [x] create new GitHub release | process | release initial release notes bug fix wrapped bluetoothmanager inside isblesupported to avoid noclassdeffound error occurring while instantiating reactivebeacons object on devices running api fixed in pr things to do bump library version point right pr upload archives to maven central close and release artifact on maven central update changelog md after maven sync bump library version in readme md create new github release | 1 |
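The two "bump library version" items in the checklist above reduce to incrementing the patch component of a semver string (0.3.1 → 0.3.2); a minimal sketch — the helper name and plain string handling are assumptions, not the project's actual build setup:

```python
def bump_patch(version: str) -> str:
    # "0.3.1" -> "0.3.2": increment only the last (patch) component.
    major, minor, patch = version.split(".")
    return f"{major}.{minor}.{int(patch) + 1}"

assert bump_patch("0.3.1") == "0.3.2"
```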
103,766 | 11,372,210,720 | IssuesEvent | 2020-01-28 01:01:03 | SETI/pds-opus | https://api.github.com/repos/SETI/pds-opus | closed | Need new API Guide | A-Enhancement B-Documentation Effort 2 Medium Priority 3 Important | The current API Guide is based on a horrible YAML infrastructure that limits the kind of formatting we can do. For the February goal we need a new, pretty, and complete API Guide.
| 1.0 | Need new API Guide - The current API Guide is based on a horrible YAML infrastructure that limits the kind of formatting we can do. For the February goal we need a new, pretty, and complete API Guide.
| non_process | need new api guide the current api guide is based on a horrible yaml infrastructure that limits the kind of formatting we can do for the february goal we need a new pretty and complete api guide | 0 |
18,502 | 24,551,247,947 | IssuesEvent | 2022-10-12 12:46:50 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [iOS] Study activities screen > 'No activities found' error message getting displayed for few seconds in the following screen | Bug P1 iOS Process: Fixed Process: Tested dev | Steps:
1. Sign up or sign in to the mobile app
2. Enroll to the study
3. Navigate to study activities screen and observe
AR: 'No activities found' error message getting displayed for few seconds
ER: 'No activities found' error message should not get displayed
[Issue observed when user enters the study activities screen for the first time after enrolling in the study]
https://user-images.githubusercontent.com/71445210/182616320-5f56a357-7011-4c49-b0b0-b6ebdae1ff9a.mp4
| 2.0 | [iOS] Study activities screen > 'No activities found' error message getting displayed for few seconds in the following screen - Steps:
1. Sign up or sign in to the mobile app
2. Enroll to the study
3. Navigate to study activities screen and observe
AR: 'No activities found' error message getting displayed for few seconds
ER: 'No activities found' error message should not get displayed
[Issue observed when user enters the study activities screen for the first time after enrolling in the study]
https://user-images.githubusercontent.com/71445210/182616320-5f56a357-7011-4c49-b0b0-b6ebdae1ff9a.mp4
| process | study activities screen no activities found error message getting displayed for few seconds in the following screen steps sign up or sign in to the mobile app enroll to the study navigate to study activities screen and observe ar no activities found error message getting displayed for few seconds er no activities found error message should not get displayed | 1 |
775,402 | 27,233,188,768 | IssuesEvent | 2023-02-21 14:37:41 | k3s-io/k3s | https://api.github.com/repos/k3s-io/k3s | closed | Install script improvements - preflight checks | kind/enhancement priority/important-longterm | Perhaps leverage check-config script or pass a `--preflight-checks` flag to check for any potential issues prior to an install.
For example, check if NetworkManager is utilized, if so warn the user this could cause issues (related to https://github.com/rancher/rke2/issues/786) | 1.0 | Install script improvements - preflight checks - Perhaps leverage check-config script or pass a `--preflight-checks` flag to check for any potential issues prior to an install.
For example, check if NetworkManager is utilized, if so warn the user this could cause issues (related to https://github.com/rancher/rke2/issues/786) | non_process | install script improvements preflight checks perhaps leverage check config script or pass a preflight checks flag to check for any potential issues prior to an install for example check if networkmanager is utilized if so warn the user this could cause issues related to | 0 |
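A preflight check like the one proposed could be shaped as a pure function over the list of active services — a sketch only; `preflight_warnings` is hypothetical, and a real install script would query the init system rather than take a list:

```python
def preflight_warnings(active_services):
    # Return human-readable warnings for host services known to conflict.
    warnings = []
    if "NetworkManager" in active_services:
        warnings.append(
            "NetworkManager is active; it may interfere with CNI-managed "
            "network interfaces unless configured to ignore them."
        )
    return warnings

for warning in preflight_warnings(["NetworkManager", "sshd"]):
    print("WARN:", warning)
```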
17,103 | 22,624,462,527 | IssuesEvent | 2022-06-30 09:25:03 | pyanodon/pybugreports | https://api.github.com/repos/pyanodon/pybugreports | closed | Deadlock stacks compatibility | confirmed WIP postprocess-fail crash compatibility | ### Mod source
Github
### Which mod are you having an issue with?
- [ ] pyalienlife
- [ ] pyalternativeenergy
- [ ] pycoalprocessing
- [ ] pyfusionenergy
- [ ] pyhightech
- [ ] pyindustry
- [ ] pypetroleumhandling
- [X] pypostprocessing
- [ ] pyrawores
### Operating system
< Windows 10
### What kind of issue is this?
- [X] Compatibility
- [ ] Locale (names, descriptions, unknown keys)
- [ ] Graphical
- [ ] Crash
- [ ] Progression
- [ ] Balance
- [X] Pypostprocessing failure
- [ ] Other
### What is the problem?
Deadlock stacking needs to be compatible
### Steps to reproduce
1. Install Py Beta Preview Suite
2. Install Deadlock's stacking for py
3. Install Deadlock's stacking
4. Run Factorio
### Additional context

### Log file
_No response_ | 1.0 | Deadlock stacks compatibility - ### Mod source
Github
### Which mod are you having an issue with?
- [ ] pyalienlife
- [ ] pyalternativeenergy
- [ ] pycoalprocessing
- [ ] pyfusionenergy
- [ ] pyhightech
- [ ] pyindustry
- [ ] pypetroleumhandling
- [X] pypostprocessing
- [ ] pyrawores
### Operating system
< Windows 10
### What kind of issue is this?
- [X] Compatibility
- [ ] Locale (names, descriptions, unknown keys)
- [ ] Graphical
- [ ] Crash
- [ ] Progression
- [ ] Balance
- [X] Pypostprocessing failure
- [ ] Other
### What is the problem?
Deadlock stacking needs to be compatible
### Steps to reproduce
1. Install Py Beta Preview Suite
2. Install Deadlock's stacking for py
3. Install Deadlock's stacking
4. Run Factorio
### Additional context

### Log file
_No response_ | process | deadlock stacks compatibilty mod source github which mod are you having an issue with pyalienlife pyalternativeenergy pycoalprocessing pyfusionenergy pyhightech pyindustry pypetroleumhandling pypostprocessing pyrawores operating system windows what kind of issue is this compatibility locale names descriptions unknown keys graphical crash progression balance pypostprocessing failure other what is the problem deadlock stacking need to be compatable steps to reproduce install py beta preview suite install dedlocks s stacking for py install dedlocks s stacking run factorio additional context log file no response | 1 |
8,207 | 11,402,600,084 | IssuesEvent | 2020-01-31 03:55:29 | scala/community-build | https://api.github.com/repos/scala/community-build | closed | dependencies.txt should be in deterministic order | process | because it's annoying for it constantly to show up in `git status` (and not only that, it also leads to me being lazy about checking in the changes when something actually _has_ changed) | 1.0 | dependencies.txt should be in deterministic order - because it's annoying for it constantly to show up in `git status` (and not only that, it also leads to me being lazy about checking in the changes when something actually _has_ changed) | process | dependencies txt should be in deterministic order because it s annoying for it constantly to show up in git status and not only that it also leads to me being lazy about checking in the changes when something actually has changed | 1 |
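Making a generated file like `dependencies.txt` deterministic usually just means writing its entries in sorted order; a sketch (assuming one dependency per line, which the issue doesn't spell out):

```python
def render_deps(deps):
    # Sorted, de-duplicated output is byte-identical for the same set of
    # dependencies, so `git status` only flags real changes.
    return "\n".join(sorted(set(deps))) + "\n"

# Same set, different input order -> identical file contents.
assert render_deps(["b", "a"]) == render_deps(["a", "b"]) == "a\nb\n"
```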
12,560 | 14,979,675,511 | IssuesEvent | 2021-01-28 12:36:02 | parcel-bundler/parcel | https://api.github.com/repos/parcel-bundler/parcel | closed | JS imported css files are bundled in the wrong order | :bug: Bug :clock1: Waiting CSS Preprocessing ✨ Parcel 2 | # 🐛 bug report
I'm importing these two CSS and SCSS files in my app's index.js:
```js
import 'normalize.css';
import './styles/main.scss';
```
Although `normalize.css` is imported before `main.scss`, it ends up at the bottom of the final bundled .css file and, due to how CSS works, overrides values set in `main.scss`.
## 🎛 Configuration (.babelrc, package.json, cli command)
```js
{
"name": "test",
"version": "1.0.0",
"description": "desc",
"main": "index.js",
"scripts": {
"start": "parcel serve ./src/index.html",
"build": "parcel build ./src/index.html"
},
"devDependencies": {
"parcel": "^2.0.0-beta.1",
"sass": "^1.30.0"
},
"dependencies": {
"normalize.css": "^8.0.1"
}
}
```
## 🤔 Expected Behavior
The final bundled .css should look like this:
```
main.css (compiled)
normalize.css
```
## 😯 Current Behavior
But instead currently looks like this:
```
normalize.css
main.css (compiled)
```
## 💁 Possible Solution
This might be a false lead, but from what I tested, Parcel seems to currently sort .css files by file size(?). The .css file with the least code seemed to be at the top of the bundled .css file and the one with the most code at the bottom. Converting the .scss file to plain .css didn't seem to help either; the order stayed the same.
## 🔦 Context
I was trying to use normalize.css with my app and build my own css(scss) on top, but due to css file import being out of order I couldn't do that.
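As a stopgap (an untested sketch, not a confirmed fix), the ordering could be taken out of the bundler's hands by importing the vendor stylesheet from Sass instead of from JS, so that `main.scss` alone defines the cascade order. The `~` node_modules specifier is an assumption about the Sass/Parcel setup:

```scss
// main.scss — hypothetical workaround sketch.
// Pulling normalize.css in here keeps its rules above ours,
// whatever order the bundler emits JS-imported stylesheets in.
@import '~normalize.css/normalize.css';

body {
  margin: 1rem; // our overrides now reliably win over normalize.css
}
```

With this, the `import 'normalize.css';` line in index.js would be dropped.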
## 💻 Code Sample
Have 2 or more css/scss imports inside index.js and try to build.
## 🌍 Your Environment
| Software | Version(s) |
| ---------------- | ---------- |
| Parcel | 2.0.0-beta.1
| Node | 15.3.0
| npm/Yarn | Yarn 1.22.10
| Operating System | MacOS Big Sur (11.0.1)
| process | 1 |
5,190 | 7,970,406,036 | IssuesEvent | 2018-07-16 12:41:19 | rubberduck-vba/Rubberduck | https://api.github.com/repos/rubberduck-vba/Rubberduck | opened | COM Collector needs to handle indexed TLBs and referenced, but unregistered type libraries | feature-reference-explorer library-specific parse-tree-processing | Using a command like `ThisWorkbook.VBProject.References.AddFromFile "C:\Program Files (x86)\Common Files\microsoft shared\VBA\VBA7.1\VBE7.dll\3"` (note the index numeral, at the tail of the path), it's possible to add a `Reference` that isn't the default TLB for a file (in this case, the `VBInternal` TLB is referenced), and that reference need not be registered. VBA happily allows this, but Rubberduck is unable to successfully parse the type library.
The reasons are potentially twofold:
- `VBInternal` is not registered, so a call to `References("VBInternal").FullPath` results in an error.
> Run-time error '-2147319779 (8002801d)':
> Automation error
> Library not registered.
- The COM Collector is unaware of the indexed path syntax for targeting a particular TLB within a file
Interestingly, the References dialog does reveal the reference's complete path and index, as shown below, so it might be that VBIDE won't reveal the path, but the ITypeLib might.

The Reference object also has a `Guid` property, and `VBInternal` does have a Guid, so it might be possible to discover the path to the file via the registry.
Once the COM collector has the path, we'll need to experiment with how to reach the indexed Type Library.
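A minimal sketch of that experiment (C++, untested here; it assumes `LoadTypeLibEx` honors the same trailing resource-index syntax as VBA's path string, which the OLE Automation documentation suggests):

```cpp
// Sketch only: try to load the Nth type library resource from a file,
// e.g. L"C:\\...\\VBE7.dll\\3", without touching the registry.
#include <windows.h>
#include <oleauto.h>

HRESULT LoadIndexedTypeLib(const wchar_t* indexedPath, ITypeLib** typeLib)
{
    // REGKIND_NONE loads the library for inspection only,
    // so unregistered TLBs like VBInternal stay unregistered.
    return LoadTypeLibEx(indexedPath, REGKIND_NONE, typeLib);
}
```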
Once the COM Collector can find the TLB, it should be able to parse the TLB as if it was in a normal path. | process | 1 |
449,124 | 31,828,881,623 | IssuesEvent | 2023-09-14 09:17:58 | integrations/terraform-provider-github | https://api.github.com/repos/integrations/terraform-provider-github | opened | [DOCS]: Document how github_repository_ruleset.rules.required_status_checks.required_check.integration_id can be found | Type: Documentation Status: Triage | ### Describe the need
It's not clear from the documentation how the value of the following property can be found so that it may be set correctly:
```
github_repository_ruleset.rules.required_status_checks.required_check.integration_id
```
It's also unclear from the GH API documentation; all they say is:
> integration_id integer
> The optional integration ID that this status check must originate from.
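For illustration, a hedged sketch of where the value lands once found (hypothetical check name; 15368 is often cited as the GitHub Actions app ID, but it should be verified, e.g. via `GET /apps/{app_slug}`, before use):

```hcl
# Sketch only — rules block of a github_repository_ruleset resource.
# integration_id pins the required check to the app that must report it.
rules {
  required_status_checks {
    required_check {
      context        = "ci/build" # hypothetical check name
      integration_id = 15368      # GitHub Actions app ID (verify!)
    }
  }
}
```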
### SDK Version
_No response_
### API Version
_No response_
### Relevant log output
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct | non_process | 0 |
18,371 | 24,498,294,642 | IssuesEvent | 2022-10-10 10:34:53 | Blazebit/blaze-persistence | https://api.github.com/repos/Blazebit/blaze-persistence | closed | Concurrency issues with the annotation processor | kind: bug worth: medium component: entity-view-annotation-processor | We are using Gradle 7.5.1 with Java 17; it also happened with Gradle 6 and Java 11, and with every BP version since 1.5.x.
It does happen in CI as well (less often, due to the build cache).
When we compile our codebase from scratch, it is about 90% likely that we hit this exception. It seems to be caused by a concurrent attempt to write the same file. If the build cache is present and fewer views have to be generated, the issue is less likely.
```
core | > java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.AssertionError: Filling jar:file:///opt/gradlehome/caches/modules-2/files-2.1/com.blazebit/blaze-persistence-entity-view-api/9.42-SNAPSHOT/c5a2f256c00642483c692647c7ffc0e18709373a/blaze-persistence-entity-view-api-9.42-SNAPSHOT.jar!/com/blazebit/persistence/view/AttributeFilterProvider.class during DirectoryFileObject[/app/platform-projects/platform-borl/build/classes/java/main:de/kontextwork/dw/platform/borl/role/model/view/RoleIdView.class]
```
```
core | org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':core-bowl:compileJava'.
core | at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.lambda$executeIfValid$1(ExecuteActionsTaskExecuter.java:142)
core | at org.gradle.internal.Try$Failure.ifSuccessfulOrElse(Try.java:282)
core | at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeIfValid(ExecuteActionsTaskExecuter.java:140)
core | at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:128)
core | at org.gradle.api.internal.tasks.execution.CleanupStaleOutputsExecuter.execute(CleanupStaleOutputsExecuter.java:77)
core | at org.gradle.api.internal.tasks.execution.FinalizePropertiesTaskExecuter.execute(FinalizePropertiesTaskExecuter.java:46)
core | at org.gradle.api.internal.tasks.execution.ResolveTaskExecutionModeExecuter.execute(ResolveTaskExecutionModeExecuter.java:51)
core | at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:57)
core | at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:56)
core | at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:36)
core | at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:77)
core | at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:55)
core | at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:52)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:199)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
core | at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:73)
core | at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter.execute(EventFiringTaskExecuter.java:52)
core | at org.gradle.execution.plan.LocalTaskNodeExecutor.execute(LocalTaskNodeExecutor.java:69)
core | at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:327)
core | at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:314)
core | at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:307)
core | at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:293)
core | at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.execute(DefaultPlanExecutor.java:417)
core | at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.run(DefaultPlanExecutor.java:339)
core | at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
core | at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
core | Caused by: java.lang.RuntimeException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.AssertionError: Filling jar:file:///opt/gradlehome/caches/modules-2/files-2.1/com.blazebit/blaze-persistence-entity-view-api/9.42-SNAPSHOT/c5a2f256c00642483c692647c7ffc0e18709373a/blaze-persistence-entity-view-api-9.42-SNAPSHOT.jar!/com/blazebit/persistence/view/AttributeFilterProvider.class during DirectoryFileObject[/app/platform-projects/platform-borl/build/classes/java/main:de/kontextwork/dw/platform/borl/role/model/view/RoleIdView.class]
core | at jdk.compiler/com.sun.tools.javac.api.JavacTaskImpl.invocationHelper(JavacTaskImpl.java:168)
core | at jdk.compiler/com.sun.tools.javac.api.JavacTaskImpl.doCall(JavacTaskImpl.java:100)
core | at jdk.compiler/com.sun.tools.javac.api.JavacTaskImpl.call(JavacTaskImpl.java:94)
core | at org.gradle.internal.compiler.java.IncrementalCompileTask.call(IncrementalCompileTask.java:89)
core | at org.gradle.api.internal.tasks.compile.AnnotationProcessingCompileTask.call(AnnotationProcessingCompileTask.java:94)
core | at org.gradle.api.internal.tasks.compile.ResourceCleaningCompilationTask.call(ResourceCleaningCompilationTask.java:57)
core | at org.gradle.api.internal.tasks.compile.JdkJavaCompiler.execute(JdkJavaCompiler.java:54)
core | at org.gradle.api.internal.tasks.compile.JdkJavaCompiler.execute(JdkJavaCompiler.java:39)
core | at org.gradle.api.internal.tasks.compile.NormalizingJavaCompiler.delegateAndHandleErrors(NormalizingJavaCompiler.java:97)
core | at org.gradle.api.internal.tasks.compile.NormalizingJavaCompiler.execute(NormalizingJavaCompiler.java:51)
core | at org.gradle.api.internal.tasks.compile.NormalizingJavaCompiler.execute(NormalizingJavaCompiler.java:37)
core | at org.gradle.api.internal.tasks.compile.AnnotationProcessorDiscoveringCompiler.execute(AnnotationProcessorDiscoveringCompiler.java:51)
core | at org.gradle.api.internal.tasks.compile.AnnotationProcessorDiscoveringCompiler.execute(AnnotationProcessorDiscoveringCompiler.java:37)
core | at org.gradle.api.internal.tasks.compile.ModuleApplicationNameWritingCompiler.execute(ModuleApplicationNameWritingCompiler.java:46)
core | at org.gradle.api.internal.tasks.compile.ModuleApplicationNameWritingCompiler.execute(ModuleApplicationNameWritingCompiler.java:36)
core | at org.gradle.jvm.toolchain.internal.DefaultToolchainJavaCompiler.execute(DefaultToolchainJavaCompiler.java:57)
core | at org.gradle.api.tasks.compile.JavaCompile.lambda$createToolchainCompiler$1(JavaCompile.java:232)
core | at org.gradle.api.internal.tasks.compile.CleaningJavaCompiler.execute(CleaningJavaCompiler.java:53)
core | at org.gradle.api.internal.tasks.compile.incremental.IncrementalCompilerFactory.lambda$createRebuildAllCompiler$0(IncrementalCompilerFactory.java:52)
core | at org.gradle.api.internal.tasks.compile.incremental.SelectiveCompiler.execute(SelectiveCompiler.java:67)
core | at org.gradle.api.internal.tasks.compile.incremental.SelectiveCompiler.execute(SelectiveCompiler.java:41)
core | at org.gradle.api.internal.tasks.compile.incremental.IncrementalResultStoringCompiler.execute(IncrementalResultStoringCompiler.java:66)
core | at org.gradle.api.internal.tasks.compile.incremental.IncrementalResultStoringCompiler.execute(IncrementalResultStoringCompiler.java:52)
core | at org.gradle.api.internal.tasks.compile.CompileJavaBuildOperationReportingCompiler$2.call(CompileJavaBuildOperationReportingCompiler.java:59)
core | at org.gradle.api.internal.tasks.compile.CompileJavaBuildOperationReportingCompiler$2.call(CompileJavaBuildOperationReportingCompiler.java:51)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:199)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
core | at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:73)
core | at org.gradle.api.internal.tasks.compile.CompileJavaBuildOperationReportingCompiler.execute(CompileJavaBuildOperationReportingCompiler.java:51)
core | at org.gradle.api.tasks.compile.JavaCompile.performCompilation(JavaCompile.java:279)
core | at org.gradle.api.tasks.compile.JavaCompile.performIncrementalCompilation(JavaCompile.java:165)
core | at org.gradle.api.tasks.compile.JavaCompile.compile(JavaCompile.java:146)
core | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
core | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
core | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
core | at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:125)
core | at org.gradle.api.internal.project.taskfactory.IncrementalInputsTaskAction.doExecute(IncrementalInputsTaskAction.java:32)
core | at org.gradle.api.internal.project.taskfactory.StandardTaskAction.execute(StandardTaskAction.java:51)
core | at org.gradle.api.internal.project.taskfactory.AbstractIncrementalTaskAction.execute(AbstractIncrementalTaskAction.java:25)
core | at org.gradle.api.internal.project.taskfactory.StandardTaskAction.execute(StandardTaskAction.java:29)
core | at org.gradle.api.internal.tasks.execution.TaskExecution$3.run(TaskExecution.java:236)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:29)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:26)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner.run(DefaultBuildOperationRunner.java:47)
core | at org.gradle.internal.operations.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:68)
core | at org.gradle.api.internal.tasks.execution.TaskExecution.executeAction(TaskExecution.java:221)
core | at org.gradle.api.internal.tasks.execution.TaskExecution.executeActions(TaskExecution.java:204)
core | at org.gradle.api.internal.tasks.execution.TaskExecution.executeWithPreviousOutputFiles(TaskExecution.java:187)
core | at org.gradle.api.internal.tasks.execution.TaskExecution.execute(TaskExecution.java:165)
core | at org.gradle.internal.execution.steps.ExecuteStep.executeInternal(ExecuteStep.java:89)
core | at org.gradle.internal.execution.steps.ExecuteStep.access$000(ExecuteStep.java:40)
core | at org.gradle.internal.execution.steps.ExecuteStep$1.call(ExecuteStep.java:53)
core | at org.gradle.internal.execution.steps.ExecuteStep$1.call(ExecuteStep.java:50)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:199)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
core | at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:73)
core | at org.gradle.internal.execution.steps.ExecuteStep.execute(ExecuteStep.java:50)
core | at org.gradle.internal.execution.steps.ExecuteStep.execute(ExecuteStep.java:40)
core | at org.gradle.internal.execution.steps.RemovePreviousOutputsStep.execute(RemovePreviousOutputsStep.java:68)
core | at org.gradle.internal.execution.steps.RemovePreviousOutputsStep.execute(RemovePreviousOutputsStep.java:38)
core | at org.gradle.internal.execution.steps.CancelExecutionStep.execute(CancelExecutionStep.java:41)
core | at org.gradle.internal.execution.steps.TimeoutStep.executeWithoutTimeout(TimeoutStep.java:74)
core | at org.gradle.internal.execution.steps.TimeoutStep.execute(TimeoutStep.java:55)
core | at org.gradle.internal.execution.steps.CreateOutputsStep.execute(CreateOutputsStep.java:51)
core | at org.gradle.internal.execution.steps.CreateOutputsStep.execute(CreateOutputsStep.java:29)
core | at org.gradle.internal.execution.steps.CaptureStateAfterExecutionStep.executeDelegateBroadcastingChanges(CaptureStateAfterExecutionStep.java:124)
core | at org.gradle.internal.execution.steps.CaptureStateAfterExecutionStep.execute(CaptureStateAfterExecutionStep.java:80)
core | at org.gradle.internal.execution.steps.CaptureStateAfterExecutionStep.execute(CaptureStateAfterExecutionStep.java:58)
core | at org.gradle.internal.execution.steps.ResolveInputChangesStep.execute(ResolveInputChangesStep.java:48)
core | at org.gradle.internal.execution.steps.ResolveInputChangesStep.execute(ResolveInputChangesStep.java:36)
core | at org.gradle.internal.execution.steps.BuildCacheStep.executeWithoutCache(BuildCacheStep.java:181)
core | at org.gradle.internal.execution.steps.BuildCacheStep.executeAndStoreInCache(BuildCacheStep.java:154)
core | at org.gradle.internal.execution.steps.BuildCacheStep.lambda$executeWithCache$4(BuildCacheStep.java:121)
core | at org.gradle.internal.execution.steps.BuildCacheStep.lambda$executeWithCache$5(BuildCacheStep.java:121)
core | at org.gradle.internal.Try$Success.map(Try.java:164)
core | at org.gradle.internal.execution.steps.BuildCacheStep.executeWithCache(BuildCacheStep.java:81)
core | at org.gradle.internal.execution.steps.BuildCacheStep.lambda$execute$0(BuildCacheStep.java:70)
core | at org.gradle.internal.Either$Left.fold(Either.java:115)
core | at org.gradle.internal.execution.caching.CachingState.fold(CachingState.java:59)
core | at org.gradle.internal.execution.steps.BuildCacheStep.execute(BuildCacheStep.java:69)
core | at org.gradle.internal.execution.steps.BuildCacheStep.execute(BuildCacheStep.java:47)
core | at org.gradle.internal.execution.steps.StoreExecutionStateStep.execute(StoreExecutionStateStep.java:36)
core | at org.gradle.internal.execution.steps.StoreExecutionStateStep.execute(StoreExecutionStateStep.java:25)
core | at org.gradle.internal.execution.steps.RecordOutputsStep.execute(RecordOutputsStep.java:36)
core | at org.gradle.internal.execution.steps.RecordOutputsStep.execute(RecordOutputsStep.java:22)
core | at org.gradle.internal.execution.steps.SkipUpToDateStep.executeBecause(SkipUpToDateStep.java:110)
core | at org.gradle.internal.execution.steps.SkipUpToDateStep.lambda$execute$2(SkipUpToDateStep.java:56)
core | at org.gradle.internal.execution.steps.SkipUpToDateStep.execute(SkipUpToDateStep.java:56)
core | at org.gradle.internal.execution.steps.SkipUpToDateStep.execute(SkipUpToDateStep.java:38)
core | at org.gradle.internal.execution.steps.ResolveChangesStep.execute(ResolveChangesStep.java:73)
core | at org.gradle.internal.execution.steps.ResolveChangesStep.execute(ResolveChangesStep.java:44)
core | at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsFinishedStep.execute(MarkSnapshottingInputsFinishedStep.java:37)
core | at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsFinishedStep.execute(MarkSnapshottingInputsFinishedStep.java:27)
core | at org.gradle.internal.execution.steps.ResolveCachingStateStep.execute(ResolveCachingStateStep.java:89)
core | at org.gradle.internal.execution.steps.ResolveCachingStateStep.execute(ResolveCachingStateStep.java:50)
core | at org.gradle.internal.execution.steps.ValidateStep.execute(ValidateStep.java:114)
core | at org.gradle.internal.execution.steps.ValidateStep.execute(ValidateStep.java:57)
core | at org.gradle.internal.execution.steps.CaptureStateBeforeExecutionStep.execute(CaptureStateBeforeExecutionStep.java:76)
core | at org.gradle.internal.execution.steps.CaptureStateBeforeExecutionStep.execute(CaptureStateBeforeExecutionStep.java:50)
core | at org.gradle.internal.execution.steps.SkipEmptyWorkStep.executeWithNoEmptySources(SkipEmptyWorkStep.java:254)
core | at org.gradle.internal.execution.steps.SkipEmptyWorkStep.executeWithNoEmptySources(SkipEmptyWorkStep.java:209)
core | at org.gradle.internal.execution.steps.SkipEmptyWorkStep.execute(SkipEmptyWorkStep.java:88)
core | at org.gradle.internal.execution.steps.SkipEmptyWorkStep.execute(SkipEmptyWorkStep.java:56)
core | at org.gradle.internal.execution.steps.RemoveUntrackedExecutionStateStep.execute(RemoveUntrackedExecutionStateStep.java:32)
core | at org.gradle.internal.execution.steps.RemoveUntrackedExecutionStateStep.execute(RemoveUntrackedExecutionStateStep.java:21)
core | at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsStartedStep.execute(MarkSnapshottingInputsStartedStep.java:38)
core | at org.gradle.internal.execution.steps.LoadPreviousExecutionStateStep.execute(LoadPreviousExecutionStateStep.java:43)
core | at org.gradle.internal.execution.steps.LoadPreviousExecutionStateStep.execute(LoadPreviousExecutionStateStep.java:31)
core | at org.gradle.internal.execution.steps.AssignWorkspaceStep.lambda$execute$0(AssignWorkspaceStep.java:40)
core | at org.gradle.api.internal.tasks.execution.TaskExecution$4.withWorkspace(TaskExecution.java:281)
core | at org.gradle.internal.execution.steps.AssignWorkspaceStep.execute(AssignWorkspaceStep.java:40)
core | at org.gradle.internal.execution.steps.AssignWorkspaceStep.execute(AssignWorkspaceStep.java:30)
core | at org.gradle.internal.execution.steps.IdentityCacheStep.execute(IdentityCacheStep.java:37)
core | at org.gradle.internal.execution.steps.IdentityCacheStep.execute(IdentityCacheStep.java:27)
core | at org.gradle.internal.execution.steps.IdentifyStep.execute(IdentifyStep.java:44)
core | at org.gradle.internal.execution.steps.IdentifyStep.execute(IdentifyStep.java:33)
core | at org.gradle.internal.execution.impl.DefaultExecutionEngine$1.execute(DefaultExecutionEngine.java:76)
core | at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeIfValid(ExecuteActionsTaskExecuter.java:139)
core | at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:128)
core | at org.gradle.api.internal.tasks.execution.CleanupStaleOutputsExecuter.execute(CleanupStaleOutputsExecuter.java:77)
core | at org.gradle.api.internal.tasks.execution.FinalizePropertiesTaskExecuter.execute(FinalizePropertiesTaskExecuter.java:46)
core | at org.gradle.api.internal.tasks.execution.ResolveTaskExecutionModeExecuter.execute(ResolveTaskExecutionModeExecuter.java:51)
core | at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:57)
core | at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:56)
core | at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:36)
core | at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:77)
core | at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:55)
core | at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:52)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:199)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
core | at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:73)
core | at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter.execute(EventFiringTaskExecuter.java:52)
core | at org.gradle.execution.plan.LocalTaskNodeExecutor.execute(LocalTaskNodeExecutor.java:69)
core | at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:327)
core | at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:314)
core | at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:307)
core | at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:293)
core | at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.execute(DefaultPlanExecutor.java:417)
core | at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.run(DefaultPlanExecutor.java:339)
core | at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
core | at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
core | Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.AssertionError: Filling jar:file:///opt/gradlehome/caches/modules-2/files-2.1/com.blazebit/blaze-persistence-entity-view-api/9.42-SNAPSHOT/c5a2f256c00642483c692647c7ffc0e18709373a/blaze-persistence-entity-view-api-9.42-SNAPSHOT.jar!/com/blazebit/persistence/view/AttributeFilterProvider.class during DirectoryFileObject[/app/platform-projects/platform-borl/build/classes/java/main:de/kontextwork/dw/platform/borl/role/model/view/RoleIdView.class]
core | at com.blazebit.persistence.view.processor.EntityViewAnnotationProcessor.await(EntityViewAnnotationProcessor.java:147)
core | at com.blazebit.persistence.view.processor.EntityViewAnnotationProcessor.execute(EntityViewAnnotationProcessor.java:122)
core | at com.blazebit.persistence.view.processor.EntityViewAnnotationProcessor.process(EntityViewAnnotationProcessor.java:99)
core | at org.gradle.api.internal.tasks.compile.processing.DelegatingProcessor.process(DelegatingProcessor.java:62)
core | at org.gradle.api.internal.tasks.compile.processing.IsolatingProcessor.process(IsolatingProcessor.java:50)
core | at org.gradle.api.internal.tasks.compile.processing.DelegatingProcessor.process(DelegatingProcessor.java:62)
core | at org.gradle.api.internal.tasks.compile.processing.TimeTrackingProcessor.access$401(TimeTrackingProcessor.java:37)
core | at org.gradle.api.internal.tasks.compile.processing.TimeTrackingProcessor$5.create(TimeTrackingProcessor.java:99)
core | at org.gradle.api.internal.tasks.compile.processing.TimeTrackingProcessor$5.create(TimeTrackingProcessor.java:96)
core | at org.gradle.api.internal.tasks.compile.processing.TimeTrackingProcessor.track(TimeTrackingProcessor.java:117)
core | at org.gradle.api.internal.tasks.compile.processing.TimeTrackingProcessor.process(TimeTrackingProcessor.java:96)
core | at jdk.compiler/com.sun.tools.javac.processing.JavacProcessingEnvironment.callProcessor(JavacProcessingEnvironment.java:1023)
core | at jdk.compiler/com.sun.tools.javac.processing.JavacProcessingEnvironment.discoverAndRunProcs(JavacProcessingEnvironment.java:939)
core | at jdk.compiler/com.sun.tools.javac.processing.JavacProcessingEnvironment$Round.run(JavacProcessingEnvironment.java:1267)
core | at jdk.compiler/com.sun.tools.javac.processing.JavacProcessingEnvironment.doProcessing(JavacProcessingEnvironment.java:1382)
core | at jdk.compiler/com.sun.tools.javac.main.JavaCompiler.processAnnotations(JavaCompiler.java:1234)
core | at jdk.compiler/com.sun.tools.javac.main.JavaCompiler.compile(JavaCompiler.java:916)
core | at jdk.compiler/com.sun.tools.javac.api.JavacTaskImpl.lambda$doCall$0(JavacTaskImpl.java:104)
core | at jdk.compiler/com.sun.tools.javac.api.JavacTaskImpl.invocationHelper(JavacTaskImpl.java:152)
core | ... 159 more
core | Caused by: java.util.concurrent.ExecutionException: java.lang.AssertionError: Filling jar:file:///opt/gradlehome/caches/modules-2/files-2.1/com.blazebit/blaze-persistence-entity-view-api/9.42-SNAPSHOT/c5a2f256c00642483c692647c7ffc0e18709373a/blaze-persistence-entity-view-api-9.42-SNAPSHOT.jar!/com/blazebit/persistence/view/AttributeFilterProvider.class during DirectoryFileObject[/app/platform-projects/platform-borl/build/classes/java/main:de/kontextwork/dw/platform/borl/role/model/view/RoleIdView.class]
core | at com.blazebit.persistence.view.processor.EntityViewAnnotationProcessor.await(EntityViewAnnotationProcessor.java:145)
core | ... 177 more
core | Caused by: java.lang.AssertionError: Filling jar:file:///opt/gradlehome/caches/modules-2/files-2.1/com.blazebit/blaze-persistence-entity-view-api/9.42-SNAPSHOT/c5a2f256c00642483c692647c7ffc0e18709373a/blaze-persistence-entity-view-api-9.42-SNAPSHOT.jar!/com/blazebit/persistence/view/AttributeFilterProvider.class during DirectoryFileObject[/app/platform-projects/platform-borl/build/classes/java/main:de/kontextwork/dw/platform/borl/role/model/view/RoleIdView.class]
core | at jdk.compiler/com.sun.tools.javac.util.Assert.error(Assert.java:162)
core | at jdk.compiler/com.sun.tools.javac.code.ClassFinder.fillIn(ClassFinder.java:354)
core | at jdk.compiler/com.sun.tools.javac.code.ClassFinder.complete(ClassFinder.java:291)
core | at jdk.compiler/com.sun.tools.javac.code.Symbol.complete(Symbol.java:682)
core | at jdk.compiler/com.sun.tools.javac.code.Symbol$ClassSymbol.complete(Symbol.java:1410)
core | at jdk.compiler/com.sun.tools.javac.model.JavacElements.nameToSymbol(JavacElements.java:263)
core | at jdk.compiler/com.sun.tools.javac.model.JavacElements.lambda$unboundNameToSymbol$2(JavacElements.java:208)
core | at jdk.compiler/com.sun.tools.javac.model.JavacElements.unboundNameToSymbol(JavacElements.java:200)
core | at jdk.compiler/com.sun.tools.javac.model.JavacElements.doGetElement(JavacElements.java:183)
core |
core | Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.
core |
core | You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.
core |
core | See https://docs.gradle.org/7.5/userguide/command_line_interface.html#sec:command_line_warnings
core | 19 actionable tasks: 1 executed, 18 up-to-date
core | at jdk.compiler/com.sun.tools.javac.model.JavacElements.doGetTypeElement(JavacElements.java:173)
core | at jdk.compiler/com.sun.tools.javac.model.JavacElements.getTypeElement(JavacElements.java:161)
core | at jdk.compiler/com.sun.tools.javac.model.JavacElements.getTypeElement(JavacElements.java:88)
core | at com.blazebit.persistence.view.processor.Context.getTypeElement(Context.java:154)
core | at com.blazebit.persistence.view.processor.AttributeFilter.<init>(AttributeFilter.java:36)
core | at com.blazebit.persistence.view.processor.annotation.AnnotationMetaAttribute.addAttributeFilter(AnnotationMetaAttribute.java:369)
core | at com.blazebit.persistence.view.processor.annotation.AnnotationMetaAttribute.<init>(AnnotationMetaAttribute.java:206)
core | at com.blazebit.persistence.view.processor.annotation.AnnotationMetaSingularAttribute.<init>(AnnotationMetaSingularAttribute.java:31)
core | at com.blazebit.persistence.view.processor.annotation.MetaAttributeGenerationVisitor.visitDeclared(MetaAttributeGenerationVisitor.java:140)
core | at com.blazebit.persistence.view.processor.annotation.MetaAttributeGenerationVisitor.visitDeclared(MetaAttributeGenerationVisitor.java:44)
core | at jdk.compiler/com.sun.tools.javac.code.Type$ClassType.accept(Type.java:1169)
core | at com.blazebit.persistence.view.processor.annotation.MetaAttributeGenerationVisitor.visitExecutable(MetaAttributeGenerationVisitor.java:157)
core | at com.blazebit.persistence.view.processor.annotation.MetaAttributeGenerationVisitor.visitExecutable(MetaAttributeGenerationVisitor.java:44)
core | at jdk.compiler/com.sun.tools.javac.code.Type$MethodType.accept(Type.java:1535)
core | at com.blazebit.persistence.view.processor.annotation.AnnotationMetaEntityView.<init>(AnnotationMetaEntityView.java:190)
core | at com.blazebit.persistence.view.processor.EntityViewAnnotationProcessor$1.run(EntityViewAnnotationProcessor.java:135)
core | 	... 9 more
```

Concurrency issues with the annotation processor

We are using Gradle 7.5.1 with Java 17; the problem also occurred with Gradle 6 and Java 11, and with every Blaze-Persistence version from 1.5.x onward.
It also happens in CI, though less often there thanks to the build cache.
When we compile our codebase from scratch, we hit this error about 90% of the time. It appears to be caused by concurrent attempts to write the same file; when the build cache is warm and fewer views have to be generated, the issue is less likely.
```
core | > java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.AssertionError: Filling jar:file:///opt/gradlehome/caches/modules-2/files-2.1/com.blazebit/blaze-persistence-entity-view-api/9.42-SNAPSHOT/c5a2f256c00642483c692647c7ffc0e18709373a/blaze-persistence-entity-view-api-9.42-SNAPSHOT.jar!/com/blazebit/persistence/view/AttributeFilterProvider.class during DirectoryFileObject[/app/platform-projects/platform-borl/build/classes/java/main:de/kontextwork/dw/platform/borl/role/model/view/RoleIdView.class]
```
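For illustration, the race described above ("concurrent attempts to write the same file") has a standard fix: make each output's generation go through a per-key atomic gate so only one thread ever produces it. This is not Blaze-Persistence's actual code — just a minimal sketch with hypothetical names (`SingleWriterDemo`, the `"RoleIdView"` key is borrowed from the trace) using `ConcurrentHashMap.computeIfAbsent`, which guarantees the mapping function runs at most once per key:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicInteger;

public class SingleWriterDemo {
    // One entry per generated output; computeIfAbsent makes generation per-key atomic.
    static final ConcurrentHashMap<String, String> generated = new ConcurrentHashMap<>();
    // Counts how many times the (stand-in) generator actually ran.
    static final AtomicInteger generations = new AtomicInteger();

    static String generate(String key) {
        return generated.computeIfAbsent(key, k -> {
            generations.incrementAndGet();   // real generation happened
            return "class " + k + " {}";     // stand-in for the written file contents
        });
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        List<Future<String>> futures = new ArrayList<>();
        // 100 concurrent requests for the same output must yield exactly one generation.
        for (int i = 0; i < 100; i++) {
            futures.add(pool.submit(() -> generate("RoleIdView")));
        }
        for (Future<String> f : futures) f.get();
        pool.shutdown();
        System.out.println("generations=" + generations.get()); // prints generations=1
    }
}
```

Without such a gate, two worker threads can both decide the class is missing and both open it for writing, which is consistent with the `Filling jar:... during DirectoryFileObject[...]` assertion below.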
```
core | org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':core-bowl:compileJava'.
core | at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.lambda$executeIfValid$1(ExecuteActionsTaskExecuter.java:142)
core | at org.gradle.internal.Try$Failure.ifSuccessfulOrElse(Try.java:282)
core | at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeIfValid(ExecuteActionsTaskExecuter.java:140)
core | at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:128)
core | at org.gradle.api.internal.tasks.execution.CleanupStaleOutputsExecuter.execute(CleanupStaleOutputsExecuter.java:77)
core | at org.gradle.api.internal.tasks.execution.FinalizePropertiesTaskExecuter.execute(FinalizePropertiesTaskExecuter.java:46)
core | at org.gradle.api.internal.tasks.execution.ResolveTaskExecutionModeExecuter.execute(ResolveTaskExecutionModeExecuter.java:51)
core | at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:57)
core | at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:56)
core | at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:36)
core | at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:77)
core | at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:55)
core | at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:52)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:199)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
core | at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:73)
core | at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter.execute(EventFiringTaskExecuter.java:52)
core | at org.gradle.execution.plan.LocalTaskNodeExecutor.execute(LocalTaskNodeExecutor.java:69)
core | at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:327)
core | at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:314)
core | at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:307)
core | at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:293)
core | at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.execute(DefaultPlanExecutor.java:417)
core | at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.run(DefaultPlanExecutor.java:339)
core | at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
core | at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
core | Caused by: java.lang.RuntimeException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.AssertionError: Filling jar:file:///opt/gradlehome/caches/modules-2/files-2.1/com.blazebit/blaze-persistence-entity-view-api/9.42-SNAPSHOT/c5a2f256c00642483c692647c7ffc0e18709373a/blaze-persistence-entity-view-api-9.42-SNAPSHOT.jar!/com/blazebit/persistence/view/AttributeFilterProvider.class during DirectoryFileObject[/app/platform-projects/platform-borl/build/classes/java/main:de/kontextwork/dw/platform/borl/role/model/view/RoleIdView.class]
core | at jdk.compiler/com.sun.tools.javac.api.JavacTaskImpl.invocationHelper(JavacTaskImpl.java:168)
core | at jdk.compiler/com.sun.tools.javac.api.JavacTaskImpl.doCall(JavacTaskImpl.java:100)
core | at jdk.compiler/com.sun.tools.javac.api.JavacTaskImpl.call(JavacTaskImpl.java:94)
core | at org.gradle.internal.compiler.java.IncrementalCompileTask.call(IncrementalCompileTask.java:89)
core | at org.gradle.api.internal.tasks.compile.AnnotationProcessingCompileTask.call(AnnotationProcessingCompileTask.java:94)
core | at org.gradle.api.internal.tasks.compile.ResourceCleaningCompilationTask.call(ResourceCleaningCompilationTask.java:57)
core | at org.gradle.api.internal.tasks.compile.JdkJavaCompiler.execute(JdkJavaCompiler.java:54)
core | at org.gradle.api.internal.tasks.compile.JdkJavaCompiler.execute(JdkJavaCompiler.java:39)
core | at org.gradle.api.internal.tasks.compile.NormalizingJavaCompiler.delegateAndHandleErrors(NormalizingJavaCompiler.java:97)
core | at org.gradle.api.internal.tasks.compile.NormalizingJavaCompiler.execute(NormalizingJavaCompiler.java:51)
core | at org.gradle.api.internal.tasks.compile.NormalizingJavaCompiler.execute(NormalizingJavaCompiler.java:37)
core | at org.gradle.api.internal.tasks.compile.AnnotationProcessorDiscoveringCompiler.execute(AnnotationProcessorDiscoveringCompiler.java:51)
core | at org.gradle.api.internal.tasks.compile.AnnotationProcessorDiscoveringCompiler.execute(AnnotationProcessorDiscoveringCompiler.java:37)
core | at org.gradle.api.internal.tasks.compile.ModuleApplicationNameWritingCompiler.execute(ModuleApplicationNameWritingCompiler.java:46)
core | at org.gradle.api.internal.tasks.compile.ModuleApplicationNameWritingCompiler.execute(ModuleApplicationNameWritingCompiler.java:36)
core | at org.gradle.jvm.toolchain.internal.DefaultToolchainJavaCompiler.execute(DefaultToolchainJavaCompiler.java:57)
core | at org.gradle.api.tasks.compile.JavaCompile.lambda$createToolchainCompiler$1(JavaCompile.java:232)
core | at org.gradle.api.internal.tasks.compile.CleaningJavaCompiler.execute(CleaningJavaCompiler.java:53)
core | at org.gradle.api.internal.tasks.compile.incremental.IncrementalCompilerFactory.lambda$createRebuildAllCompiler$0(IncrementalCompilerFactory.java:52)
core | at org.gradle.api.internal.tasks.compile.incremental.SelectiveCompiler.execute(SelectiveCompiler.java:67)
core | at org.gradle.api.internal.tasks.compile.incremental.SelectiveCompiler.execute(SelectiveCompiler.java:41)
core | at org.gradle.api.internal.tasks.compile.incremental.IncrementalResultStoringCompiler.execute(IncrementalResultStoringCompiler.java:66)
core | at org.gradle.api.internal.tasks.compile.incremental.IncrementalResultStoringCompiler.execute(IncrementalResultStoringCompiler.java:52)
core | at org.gradle.api.internal.tasks.compile.CompileJavaBuildOperationReportingCompiler$2.call(CompileJavaBuildOperationReportingCompiler.java:59)
core | at org.gradle.api.internal.tasks.compile.CompileJavaBuildOperationReportingCompiler$2.call(CompileJavaBuildOperationReportingCompiler.java:51)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:199)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
core | at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:73)
core | at org.gradle.api.internal.tasks.compile.CompileJavaBuildOperationReportingCompiler.execute(CompileJavaBuildOperationReportingCompiler.java:51)
core | at org.gradle.api.tasks.compile.JavaCompile.performCompilation(JavaCompile.java:279)
core | at org.gradle.api.tasks.compile.JavaCompile.performIncrementalCompilation(JavaCompile.java:165)
core | at org.gradle.api.tasks.compile.JavaCompile.compile(JavaCompile.java:146)
core | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
core | at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
core | at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
core | at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:125)
core | at org.gradle.api.internal.project.taskfactory.IncrementalInputsTaskAction.doExecute(IncrementalInputsTaskAction.java:32)
core | at org.gradle.api.internal.project.taskfactory.StandardTaskAction.execute(StandardTaskAction.java:51)
core | at org.gradle.api.internal.project.taskfactory.AbstractIncrementalTaskAction.execute(AbstractIncrementalTaskAction.java:25)
core | at org.gradle.api.internal.project.taskfactory.StandardTaskAction.execute(StandardTaskAction.java:29)
core | at org.gradle.api.internal.tasks.execution.TaskExecution$3.run(TaskExecution.java:236)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:29)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:26)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner.run(DefaultBuildOperationRunner.java:47)
core | at org.gradle.internal.operations.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:68)
core | at org.gradle.api.internal.tasks.execution.TaskExecution.executeAction(TaskExecution.java:221)
core | at org.gradle.api.internal.tasks.execution.TaskExecution.executeActions(TaskExecution.java:204)
core | at org.gradle.api.internal.tasks.execution.TaskExecution.executeWithPreviousOutputFiles(TaskExecution.java:187)
core | at org.gradle.api.internal.tasks.execution.TaskExecution.execute(TaskExecution.java:165)
core | at org.gradle.internal.execution.steps.ExecuteStep.executeInternal(ExecuteStep.java:89)
core | at org.gradle.internal.execution.steps.ExecuteStep.access$000(ExecuteStep.java:40)
core | at org.gradle.internal.execution.steps.ExecuteStep$1.call(ExecuteStep.java:53)
core | at org.gradle.internal.execution.steps.ExecuteStep$1.call(ExecuteStep.java:50)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:199)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
core | at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:73)
core | at org.gradle.internal.execution.steps.ExecuteStep.execute(ExecuteStep.java:50)
core | at org.gradle.internal.execution.steps.ExecuteStep.execute(ExecuteStep.java:40)
core | at org.gradle.internal.execution.steps.RemovePreviousOutputsStep.execute(RemovePreviousOutputsStep.java:68)
core | at org.gradle.internal.execution.steps.RemovePreviousOutputsStep.execute(RemovePreviousOutputsStep.java:38)
core | at org.gradle.internal.execution.steps.CancelExecutionStep.execute(CancelExecutionStep.java:41)
core | at org.gradle.internal.execution.steps.TimeoutStep.executeWithoutTimeout(TimeoutStep.java:74)
core | at org.gradle.internal.execution.steps.TimeoutStep.execute(TimeoutStep.java:55)
core | at org.gradle.internal.execution.steps.CreateOutputsStep.execute(CreateOutputsStep.java:51)
core | at org.gradle.internal.execution.steps.CreateOutputsStep.execute(CreateOutputsStep.java:29)
core | at org.gradle.internal.execution.steps.CaptureStateAfterExecutionStep.executeDelegateBroadcastingChanges(CaptureStateAfterExecutionStep.java:124)
core | at org.gradle.internal.execution.steps.CaptureStateAfterExecutionStep.execute(CaptureStateAfterExecutionStep.java:80)
core | at org.gradle.internal.execution.steps.CaptureStateAfterExecutionStep.execute(CaptureStateAfterExecutionStep.java:58)
core | at org.gradle.internal.execution.steps.ResolveInputChangesStep.execute(ResolveInputChangesStep.java:48)
core | at org.gradle.internal.execution.steps.ResolveInputChangesStep.execute(ResolveInputChangesStep.java:36)
core | at org.gradle.internal.execution.steps.BuildCacheStep.executeWithoutCache(BuildCacheStep.java:181)
core | at org.gradle.internal.execution.steps.BuildCacheStep.executeAndStoreInCache(BuildCacheStep.java:154)
core | at org.gradle.internal.execution.steps.BuildCacheStep.lambda$executeWithCache$4(BuildCacheStep.java:121)
core | at org.gradle.internal.execution.steps.BuildCacheStep.lambda$executeWithCache$5(BuildCacheStep.java:121)
core | at org.gradle.internal.Try$Success.map(Try.java:164)
core | at org.gradle.internal.execution.steps.BuildCacheStep.executeWithCache(BuildCacheStep.java:81)
core | at org.gradle.internal.execution.steps.BuildCacheStep.lambda$execute$0(BuildCacheStep.java:70)
core | at org.gradle.internal.Either$Left.fold(Either.java:115)
core | at org.gradle.internal.execution.caching.CachingState.fold(CachingState.java:59)
core | at org.gradle.internal.execution.steps.BuildCacheStep.execute(BuildCacheStep.java:69)
core | at org.gradle.internal.execution.steps.BuildCacheStep.execute(BuildCacheStep.java:47)
core | at org.gradle.internal.execution.steps.StoreExecutionStateStep.execute(StoreExecutionStateStep.java:36)
core | at org.gradle.internal.execution.steps.StoreExecutionStateStep.execute(StoreExecutionStateStep.java:25)
core | at org.gradle.internal.execution.steps.RecordOutputsStep.execute(RecordOutputsStep.java:36)
core | at org.gradle.internal.execution.steps.RecordOutputsStep.execute(RecordOutputsStep.java:22)
core | at org.gradle.internal.execution.steps.SkipUpToDateStep.executeBecause(SkipUpToDateStep.java:110)
core | at org.gradle.internal.execution.steps.SkipUpToDateStep.lambda$execute$2(SkipUpToDateStep.java:56)
core | at org.gradle.internal.execution.steps.SkipUpToDateStep.execute(SkipUpToDateStep.java:56)
core | at org.gradle.internal.execution.steps.SkipUpToDateStep.execute(SkipUpToDateStep.java:38)
core | at org.gradle.internal.execution.steps.ResolveChangesStep.execute(ResolveChangesStep.java:73)
core | at org.gradle.internal.execution.steps.ResolveChangesStep.execute(ResolveChangesStep.java:44)
core | at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsFinishedStep.execute(MarkSnapshottingInputsFinishedStep.java:37)
core | at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsFinishedStep.execute(MarkSnapshottingInputsFinishedStep.java:27)
core | at org.gradle.internal.execution.steps.ResolveCachingStateStep.execute(ResolveCachingStateStep.java:89)
core | at org.gradle.internal.execution.steps.ResolveCachingStateStep.execute(ResolveCachingStateStep.java:50)
core | at org.gradle.internal.execution.steps.ValidateStep.execute(ValidateStep.java:114)
core | at org.gradle.internal.execution.steps.ValidateStep.execute(ValidateStep.java:57)
core | at org.gradle.internal.execution.steps.CaptureStateBeforeExecutionStep.execute(CaptureStateBeforeExecutionStep.java:76)
core | at org.gradle.internal.execution.steps.CaptureStateBeforeExecutionStep.execute(CaptureStateBeforeExecutionStep.java:50)
core | at org.gradle.internal.execution.steps.SkipEmptyWorkStep.executeWithNoEmptySources(SkipEmptyWorkStep.java:254)
core | at org.gradle.internal.execution.steps.SkipEmptyWorkStep.executeWithNoEmptySources(SkipEmptyWorkStep.java:209)
core | at org.gradle.internal.execution.steps.SkipEmptyWorkStep.execute(SkipEmptyWorkStep.java:88)
core | at org.gradle.internal.execution.steps.SkipEmptyWorkStep.execute(SkipEmptyWorkStep.java:56)
core | at org.gradle.internal.execution.steps.RemoveUntrackedExecutionStateStep.execute(RemoveUntrackedExecutionStateStep.java:32)
core | at org.gradle.internal.execution.steps.RemoveUntrackedExecutionStateStep.execute(RemoveUntrackedExecutionStateStep.java:21)
core | at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsStartedStep.execute(MarkSnapshottingInputsStartedStep.java:38)
core | at org.gradle.internal.execution.steps.LoadPreviousExecutionStateStep.execute(LoadPreviousExecutionStateStep.java:43)
core | at org.gradle.internal.execution.steps.LoadPreviousExecutionStateStep.execute(LoadPreviousExecutionStateStep.java:31)
core | at org.gradle.internal.execution.steps.AssignWorkspaceStep.lambda$execute$0(AssignWorkspaceStep.java:40)
core | at org.gradle.api.internal.tasks.execution.TaskExecution$4.withWorkspace(TaskExecution.java:281)
core | at org.gradle.internal.execution.steps.AssignWorkspaceStep.execute(AssignWorkspaceStep.java:40)
core | at org.gradle.internal.execution.steps.AssignWorkspaceStep.execute(AssignWorkspaceStep.java:30)
core | at org.gradle.internal.execution.steps.IdentityCacheStep.execute(IdentityCacheStep.java:37)
core | at org.gradle.internal.execution.steps.IdentityCacheStep.execute(IdentityCacheStep.java:27)
core | at org.gradle.internal.execution.steps.IdentifyStep.execute(IdentifyStep.java:44)
core | at org.gradle.internal.execution.steps.IdentifyStep.execute(IdentifyStep.java:33)
core | at org.gradle.internal.execution.impl.DefaultExecutionEngine$1.execute(DefaultExecutionEngine.java:76)
core | at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeIfValid(ExecuteActionsTaskExecuter.java:139)
core | at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:128)
core | at org.gradle.api.internal.tasks.execution.CleanupStaleOutputsExecuter.execute(CleanupStaleOutputsExecuter.java:77)
core | at org.gradle.api.internal.tasks.execution.FinalizePropertiesTaskExecuter.execute(FinalizePropertiesTaskExecuter.java:46)
core | at org.gradle.api.internal.tasks.execution.ResolveTaskExecutionModeExecuter.execute(ResolveTaskExecutionModeExecuter.java:51)
core | at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:57)
core | at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:56)
core | at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:36)
core | at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:77)
core | at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:55)
core | at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:52)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:199)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
core | at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
core | at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:73)
core | at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter.execute(EventFiringTaskExecuter.java:52)
core | at org.gradle.execution.plan.LocalTaskNodeExecutor.execute(LocalTaskNodeExecutor.java:69)
core | at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:327)
core | at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:314)
core | at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:307)
core | at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:293)
core | at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.execute(DefaultPlanExecutor.java:417)
core | at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.run(DefaultPlanExecutor.java:339)
core | at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
core | at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
core | Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.AssertionError: Filling jar:file:///opt/gradlehome/caches/modules-2/files-2.1/com.blazebit/blaze-persistence-entity-view-api/9.42-SNAPSHOT/c5a2f256c00642483c692647c7ffc0e18709373a/blaze-persistence-entity-view-api-9.42-SNAPSHOT.jar!/com/blazebit/persistence/view/AttributeFilterProvider.class during DirectoryFileObject[/app/platform-projects/platform-borl/build/classes/java/main:de/kontextwork/dw/platform/borl/role/model/view/RoleIdView.class]
core | at com.blazebit.persistence.view.processor.EntityViewAnnotationProcessor.await(EntityViewAnnotationProcessor.java:147)
core | at com.blazebit.persistence.view.processor.EntityViewAnnotationProcessor.execute(EntityViewAnnotationProcessor.java:122)
core | at com.blazebit.persistence.view.processor.EntityViewAnnotationProcessor.process(EntityViewAnnotationProcessor.java:99)
core | at org.gradle.api.internal.tasks.compile.processing.DelegatingProcessor.process(DelegatingProcessor.java:62)
core | at org.gradle.api.internal.tasks.compile.processing.IsolatingProcessor.process(IsolatingProcessor.java:50)
core | at org.gradle.api.internal.tasks.compile.processing.DelegatingProcessor.process(DelegatingProcessor.java:62)
core | at org.gradle.api.internal.tasks.compile.processing.TimeTrackingProcessor.access$401(TimeTrackingProcessor.java:37)
core | at org.gradle.api.internal.tasks.compile.processing.TimeTrackingProcessor$5.create(TimeTrackingProcessor.java:99)
core | at org.gradle.api.internal.tasks.compile.processing.TimeTrackingProcessor$5.create(TimeTrackingProcessor.java:96)
core | at org.gradle.api.internal.tasks.compile.processing.TimeTrackingProcessor.track(TimeTrackingProcessor.java:117)
core | at org.gradle.api.internal.tasks.compile.processing.TimeTrackingProcessor.process(TimeTrackingProcessor.java:96)
core | at jdk.compiler/com.sun.tools.javac.processing.JavacProcessingEnvironment.callProcessor(JavacProcessingEnvironment.java:1023)
core | at jdk.compiler/com.sun.tools.javac.processing.JavacProcessingEnvironment.discoverAndRunProcs(JavacProcessingEnvironment.java:939)
core | at jdk.compiler/com.sun.tools.javac.processing.JavacProcessingEnvironment$Round.run(JavacProcessingEnvironment.java:1267)
core | at jdk.compiler/com.sun.tools.javac.processing.JavacProcessingEnvironment.doProcessing(JavacProcessingEnvironment.java:1382)
core | at jdk.compiler/com.sun.tools.javac.main.JavaCompiler.processAnnotations(JavaCompiler.java:1234)
core | at jdk.compiler/com.sun.tools.javac.main.JavaCompiler.compile(JavaCompiler.java:916)
core | at jdk.compiler/com.sun.tools.javac.api.JavacTaskImpl.lambda$doCall$0(JavacTaskImpl.java:104)
core | at jdk.compiler/com.sun.tools.javac.api.JavacTaskImpl.invocationHelper(JavacTaskImpl.java:152)
core | ... 159 more
core | Caused by: java.util.concurrent.ExecutionException: java.lang.AssertionError: Filling jar:file:///opt/gradlehome/caches/modules-2/files-2.1/com.blazebit/blaze-persistence-entity-view-api/9.42-SNAPSHOT/c5a2f256c00642483c692647c7ffc0e18709373a/blaze-persistence-entity-view-api-9.42-SNAPSHOT.jar!/com/blazebit/persistence/view/AttributeFilterProvider.class during DirectoryFileObject[/app/platform-projects/platform-borl/build/classes/java/main:de/kontextwork/dw/platform/borl/role/model/view/RoleIdView.class]
core | at com.blazebit.persistence.view.processor.EntityViewAnnotationProcessor.await(EntityViewAnnotationProcessor.java:145)
core | ... 177 more
core | Caused by: java.lang.AssertionError: Filling jar:file:///opt/gradlehome/caches/modules-2/files-2.1/com.blazebit/blaze-persistence-entity-view-api/9.42-SNAPSHOT/c5a2f256c00642483c692647c7ffc0e18709373a/blaze-persistence-entity-view-api-9.42-SNAPSHOT.jar!/com/blazebit/persistence/view/AttributeFilterProvider.class during DirectoryFileObject[/app/platform-projects/platform-borl/build/classes/java/main:de/kontextwork/dw/platform/borl/role/model/view/RoleIdView.class]
core | at jdk.compiler/com.sun.tools.javac.util.Assert.error(Assert.java:162)
core | at jdk.compiler/com.sun.tools.javac.code.ClassFinder.fillIn(ClassFinder.java:354)
core | at jdk.compiler/com.sun.tools.javac.code.ClassFinder.complete(ClassFinder.java:291)
core | at jdk.compiler/com.sun.tools.javac.code.Symbol.complete(Symbol.java:682)
core | at jdk.compiler/com.sun.tools.javac.code.Symbol$ClassSymbol.complete(Symbol.java:1410)
core | at jdk.compiler/com.sun.tools.javac.model.JavacElements.nameToSymbol(JavacElements.java:263)
core | at jdk.compiler/com.sun.tools.javac.model.JavacElements.lambda$unboundNameToSymbol$2(JavacElements.java:208)
core | at jdk.compiler/com.sun.tools.javac.model.JavacElements.unboundNameToSymbol(JavacElements.java:200)
core | at jdk.compiler/com.sun.tools.javac.model.JavacElements.doGetElement(JavacElements.java:183)
core |
core | Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.
core |
core | You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.
core |
core | See https://docs.gradle.org/7.5/userguide/command_line_interface.html#sec:command_line_warnings
core | 19 actionable tasks: 1 executed, 18 up-to-date
core | at jdk.compiler/com.sun.tools.javac.model.JavacElements.doGetTypeElement(JavacElements.java:173)
core | at jdk.compiler/com.sun.tools.javac.model.JavacElements.getTypeElement(JavacElements.java:161)
core | at jdk.compiler/com.sun.tools.javac.model.JavacElements.getTypeElement(JavacElements.java:88)
core | at com.blazebit.persistence.view.processor.Context.getTypeElement(Context.java:154)
core | at com.blazebit.persistence.view.processor.AttributeFilter.<init>(AttributeFilter.java:36)
core | at com.blazebit.persistence.view.processor.annotation.AnnotationMetaAttribute.addAttributeFilter(AnnotationMetaAttribute.java:369)
core | at com.blazebit.persistence.view.processor.annotation.AnnotationMetaAttribute.<init>(AnnotationMetaAttribute.java:206)
core | at com.blazebit.persistence.view.processor.annotation.AnnotationMetaSingularAttribute.<init>(AnnotationMetaSingularAttribute.java:31)
core | at com.blazebit.persistence.view.processor.annotation.MetaAttributeGenerationVisitor.visitDeclared(MetaAttributeGenerationVisitor.java:140)
core | at com.blazebit.persistence.view.processor.annotation.MetaAttributeGenerationVisitor.visitDeclared(MetaAttributeGenerationVisitor.java:44)
core | at jdk.compiler/com.sun.tools.javac.code.Type$ClassType.accept(Type.java:1169)
core | at com.blazebit.persistence.view.processor.annotation.MetaAttributeGenerationVisitor.visitExecutable(MetaAttributeGenerationVisitor.java:157)
core | at com.blazebit.persistence.view.processor.annotation.MetaAttributeGenerationVisitor.visitExecutable(MetaAttributeGenerationVisitor.java:44)
core | at jdk.compiler/com.sun.tools.javac.code.Type$MethodType.accept(Type.java:1535)
core | at com.blazebit.persistence.view.processor.annotation.AnnotationMetaEntityView.<init>(AnnotationMetaEntityView.java:190)
core | at com.blazebit.persistence.view.processor.EntityViewAnnotationProcessor$1.run(EntityViewAnnotationProcessor.java:135)
```

Concurrency issues with the annotation processor. We are using Gradle with Java; it also happened with other Gradle and Java versions, and with every BP version since x and above. It happens less often in the CI, due to the build cache. When we compile our codebase from scratch, it is about … likely that we get this exception. It seems to happen due to some concurrent attempt to write the same file: if the build cache is present and fewer views have to be generated, the issue is less likely.
core core deprecated gradle features were used in this build making it incompatible with gradle core core you can use warning mode all to show the individual deprecation warnings and determine if they come from your own scripts or plugins core core see core actionable tasks executed up to date core at jdk compiler com sun tools javac model javacelements dogettypeelement javacelements java core at jdk compiler com sun tools javac model javacelements gettypeelement javacelements java core at jdk compiler com sun tools javac model javacelements gettypeelement javacelements java core at com blazebit persistence view processor context gettypeelement context java core at com blazebit persistence view processor attributefilter attributefilter java core at com blazebit persistence view processor annotation annotationmetaattribute addattributefilter annotationmetaattribute java core at com blazebit persistence view processor annotation annotationmetaattribute annotationmetaattribute java core at com blazebit persistence view processor annotation annotationmetasingularattribute annotationmetasingularattribute java core at com blazebit persistence view processor annotation metaattributegenerationvisitor visitdeclared metaattributegenerationvisitor java core at com blazebit persistence view processor annotation metaattributegenerationvisitor visitdeclared metaattributegenerationvisitor java core at jdk compiler com sun tools javac code type classtype accept type java core at com blazebit persistence view processor annotation metaattributegenerationvisitor visitexecutable metaattributegenerationvisitor java core at com blazebit persistence view processor annotation metaattributegenerationvisitor visitexecutable metaattributegenerationvisitor java core at jdk compiler com sun tools javac code type methodtype accept type java core at com blazebit persistence view processor annotation annotationmetaentityview annotationmetaentityview java core at com blazebit persistence view 
processor entityviewannotationprocessor run entityviewannotationprocessor java | 1 |
11,772 | 14,601,077,098 | IssuesEvent | 2020-12-21 08:04:16 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [PM] [Dev] Apps > App_GCPMS001 record is displayed even though there are no studies created with this app ID in SB | Bug P1 Participant manager Process: Dev Process: Tested dev | App_GCPMS001 record is displayed even though there are no studies created with this app ID in SB

| 2.0 | [PM] [Dev] Apps > App_GCPMS001 record is displayed even though there are no studies created with this app ID in SB - App_GCPMS001 record is displayed even though there are no studies created with this app ID in SB

| process | apps app record is displayed even though there are no studies created with this app id in sb app record is displayed even though there are no studies created with this app id in sb | 1 |
655,820 | 21,710,566,523 | IssuesEvent | 2022-05-10 13:34:14 | hermeznetwork/bridge-ui | https://api.github.com/repos/hermeznetwork/bridge-ui | opened | Make explorer env variables optional | priority: high type: enhancement | We need to make both `REACT_APP_ETHEREUM_EXPLORER_URL` and `REACT_APP_POLYGON_EXPLORER_URL` env variables optional. Backend team is going to use the bridge locally as a testing tool and we need to be able to run it without these env vars. | 1.0 | Make explorer env variables optional - We need to make both `REACT_APP_ETHEREUM_EXPLORER_URL` and `REACT_APP_POLYGON_EXPLORER_URL` env variables optional. Backend team is going to use the bridge locally as a testing tool and we need to be able to run it without these env vars. | non_process | make explorer env variables optional we need to make both react app ethereum explorer url and react app polygon explorer url env variables optional backend team is going to use the bridge locally as a testing tool and we need to be able to run it without these env vars | 0 |
19,244 | 25,406,599,345 | IssuesEvent | 2022-11-22 15:43:14 | pydata/pydata-sphinx-theme | https://api.github.com/repos/pydata/pydata-sphinx-theme | closed | clean up the branches | maintenance team-process | There are currently 22 branches in this repository, most of them are legacy branches that have not been used for years. Should we make some cleaning ? | 1.0 | clean up the branches - There are currently 22 branches in this repository, most of them are legacy branches that have not been used for years. Should we make some cleaning ? | process | clean up the branches there are currently branches in this repository most of them are legacy branches that have not been used for years should we make some cleaning | 1 |
56,910 | 13,941,916,026 | IssuesEvent | 2020-10-22 20:10:21 | googleapis/nodejs-video-intelligence | https://api.github.com/repos/googleapis/nodejs-video-intelligence | closed | analyzing faces in video: should identify faces in a local file failed | api: videointelligence buildcop: flaky buildcop: issue priority: p2 type: bug | This test failed!
To configure my behavior, see [the Build Cop Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/buildcop).
If I'm commenting on this issue too often, add the `buildcop: quiet` label and
I will stop commenting.
---
commit: fd2a938d45c1d1b5fbd4800723771c5e9de3f8b0
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/fdc6e06b-4fff-4c0f-9b65-c9075dbed0ae), [Sponge](http://sponge2/fdc6e06b-4fff-4c0f-9b65-c9075dbed0ae)
status: failed
<details><summary>Test output</summary><br><pre>expected 'Waiting for operation to complete...\nFace detected:\nFace detected:\nFace detected:\nFace detected:\nFace detected:\nFace detected:\nFace detected:\nFace detected:\nFace detected:\nFace detected:\nFace detected:\nFace detected:\n' to match /Attribute/
AssertionError: expected 'Waiting for operation to complete...\nFace detected:\nFace detected:\nFace detected:\nFace detected:\nFace detected:\nFace detected:\nFace detected:\nFace detected:\nFace detected:\nFace detected:\nFace detected:\nFace detected:\n' to match /Attribute/
at Context.<anonymous> (system-test/analyze_face_detection.test.js:29:12)
at processImmediate (internal/timers.js:456:21)</pre></details> | 2.0 | analyzing faces in video: should identify faces in a local file failed - This test failed!
To configure my behavior, see [the Build Cop Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/buildcop).
If I'm commenting on this issue too often, add the `buildcop: quiet` label and
I will stop commenting.
---
commit: fd2a938d45c1d1b5fbd4800723771c5e9de3f8b0
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/fdc6e06b-4fff-4c0f-9b65-c9075dbed0ae), [Sponge](http://sponge2/fdc6e06b-4fff-4c0f-9b65-c9075dbed0ae)
status: failed
<details><summary>Test output</summary><br><pre>expected 'Waiting for operation to complete...\nFace detected:\nFace detected:\nFace detected:\nFace detected:\nFace detected:\nFace detected:\nFace detected:\nFace detected:\nFace detected:\nFace detected:\nFace detected:\nFace detected:\n' to match /Attribute/
AssertionError: expected 'Waiting for operation to complete...\nFace detected:\nFace detected:\nFace detected:\nFace detected:\nFace detected:\nFace detected:\nFace detected:\nFace detected:\nFace detected:\nFace detected:\nFace detected:\nFace detected:\n' to match /Attribute/
at Context.<anonymous> (system-test/analyze_face_detection.test.js:29:12)
at processImmediate (internal/timers.js:456:21)</pre></details> | non_process | analyzing faces in video should identify faces in a local file failed this test failed to configure my behavior see if i m commenting on this issue too often add the buildcop quiet label and i will stop commenting commit buildurl status failed test output expected waiting for operation to complete nface detected nface detected nface detected nface detected nface detected nface detected nface detected nface detected nface detected nface detected nface detected nface detected n to match attribute assertionerror expected waiting for operation to complete nface detected nface detected nface detected nface detected nface detected nface detected nface detected nface detected nface detected nface detected nface detected nface detected n to match attribute at context system test analyze face detection test js at processimmediate internal timers js | 0 |
1,859 | 4,682,374,204 | IssuesEvent | 2016-10-09 07:53:23 | sysown/proxysql | https://api.github.com/repos/sysown/proxysql | closed | rule with only digest set always matches on everything | bug QUERY PROCESSOR | I have a rule with only digest value set, and regarding the documentation I expected it will only match if its the identical digest, but seems to match on any query.
> digest - match queries with a specific digest, as returned by stats_mysql_query_digest.digest
```
mysql> select * from runtime_mysql_query_rules where rule_id=100\G
*************************** 1. row ***************************
rule_id: 100
active: 1
username: demo_user
schemaname: demo_table
flagIN: 0
client_addr: NULL
proxy_addr: NULL
proxy_port: NULL
digest: 0xDB3A841EF5443C35
match_digest: NULL
match_pattern: NULL
negate_match_pattern: 0
flagOUT: NULL
replace_pattern: NULL
destination_hostgroup: NULL
cache_ttl: NULL
reconnect: NULL
timeout: NULL
retries: NULL
delay: NULL
mirror_flagOUT: NULL
mirror_hostgroup: NULL
error_msg: NULL
log: NULL
apply: 1
comment: NULL
```
```
mysql> SELECT * FROM stats.stats_mysql_query_rules;
+---------+------+
| rule_id | hits |
+---------+------+
| 100 | 337 |
| 205 | 0 |
| 206 | 0 |
| 207 | 0 |
| 208 | 0 |
| 209 | 0 |
| 210 | 0 |
| 211 | 0 |
+---------+------+
``` | 1.0 | rule with only digest set always matches on everything - I have a rule with only digest value set, and regarding the documentation I expected it will only match if its the identical digest, but seems to match on any query.
> digest - match queries with a specific digest, as returned by stats_mysql_query_digest.digest
```
mysql> select * from runtime_mysql_query_rules where rule_id=100\G
*************************** 1. row ***************************
rule_id: 100
active: 1
username: demo_user
schemaname: demo_table
flagIN: 0
client_addr: NULL
proxy_addr: NULL
proxy_port: NULL
digest: 0xDB3A841EF5443C35
match_digest: NULL
match_pattern: NULL
negate_match_pattern: 0
flagOUT: NULL
replace_pattern: NULL
destination_hostgroup: NULL
cache_ttl: NULL
reconnect: NULL
timeout: NULL
retries: NULL
delay: NULL
mirror_flagOUT: NULL
mirror_hostgroup: NULL
error_msg: NULL
log: NULL
apply: 1
comment: NULL
```
```
mysql> SELECT * FROM stats.stats_mysql_query_rules;
+---------+------+
| rule_id | hits |
+---------+------+
| 100 | 337 |
| 205 | 0 |
| 206 | 0 |
| 207 | 0 |
| 208 | 0 |
| 209 | 0 |
| 210 | 0 |
| 211 | 0 |
+---------+------+
``` | process | rule with only digest set always matches on everything i have a rule with only digest value set and regarding the documentation i expected it will only match if its the identical digest but seems to match on any query digest match queries with a specific digest as returned by stats mysql query digest digest mysql select from runtime mysql query rules where rule id g row rule id active username demo user schemaname demo table flagin client addr null proxy addr null proxy port null digest match digest null match pattern null negate match pattern flagout null replace pattern null destination hostgroup null cache ttl null reconnect null timeout null retries null delay null mirror flagout null mirror hostgroup null error msg null log null apply comment null mysql select from stats stats mysql query rules rule id hits | 1 |
20,675 | 27,342,315,853 | IssuesEvent | 2023-02-26 23:03:30 | JuliaParallel/Dagger.jl | https://api.github.com/repos/JuliaParallel/Dagger.jl | opened | Add more built-in thread-based processor types | enhancement processors | - `ThreadProc` -> `ThreadLockedProc` (no longer default enabled)
- `ThreadProc`: Runs on any thread and allows task migration (default enabled)
- `NUMAPinnedProc`: Pinned to a given NUMA domain, possibly allocated manually (not default enabled) | 1.0 | Add more built-in thread-based processor types - - `ThreadProc` -> `ThreadLockedProc` (no longer default enabled)
- `ThreadProc`: Runs on any thread and allows task migration (default enabled)
- `NUMAPinnedProc`: Pinned to a given NUMA domain, possibly allocated manually (not default enabled) | process | add more built in thread based processor types threadproc threadlockedproc no longer default enabled threadproc runs on any thread and allows task migration default enabled numapinnedproc pinned to a given numa domain possibly allocated manually not default enabled | 1 |
7,298 | 10,442,910,788 | IssuesEvent | 2019-09-18 13:57:21 | allinurl/goaccess | https://api.github.com/repos/allinurl/goaccess | closed | kernel message: goaccess trap divide error | bug log-processing on-disk | When running goaccess 1.3 daemonized, I ran in this issue today.
The web page with realtime stats was empty, on disk the realtime html and database look good.
In the syslog I see :
traps: goaccess[4010] trap divide error ip:559ca6039673 sp:7fff900f9170 error:0 in goaccess[559ca6020000+97000]
traps: goaccess[4105] trap divide error ip:5578b589e673 sp:7fff666347c0 error:0 in goaccess[5578b5885000+97000]
traps: goaccess[4161] trap divide error ip:56081b72a673 sp:7ffe64b5f5d0 error:0 in goaccess[56081b711000+97000]
Restarting goaccess didn't fix the problem.
I had to clean the database and realtime html file to start from scratch.
No idea on how to reproduce the issue (keep it running ?) | 1.0 | kernel message: goaccess trap divide error - When running goaccess 1.3 daemonized, I ran in this issue today.
The web page with realtime stats was empty, on disk the realtime html and database look good.
In the syslog I see :
traps: goaccess[4010] trap divide error ip:559ca6039673 sp:7fff900f9170 error:0 in goaccess[559ca6020000+97000]
traps: goaccess[4105] trap divide error ip:5578b589e673 sp:7fff666347c0 error:0 in goaccess[5578b5885000+97000]
traps: goaccess[4161] trap divide error ip:56081b72a673 sp:7ffe64b5f5d0 error:0 in goaccess[56081b711000+97000]
Restarting goaccess didn't fix the problem.
I had to clean the database and realtime html file to start from scratch.
No idea on how to reproduce the issue (keep it running ?) | process | kernel message goaccess trap divide error when running goaccess daemonized i ran in this issue today the web page with realtime stats was empty on disk the realtime html and database look good in the syslog i see traps goaccess trap divide error ip sp error in goaccess traps goaccess trap divide error ip sp error in goaccess traps goaccess trap divide error ip sp error in goaccess restarting goaccess didn t fix the problem i had to clean the database and realtime html file to start from scratch no idea on how to reproduce the issue keep it running | 1 |
20,028 | 26,511,071,697 | IssuesEvent | 2023-01-18 17:08:04 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | Use fixup commits during code review | RFC Process | ## Introduction
During code review, when PR authors make changes, they should upload fix-up commits containing those changes instead of changing the existing commits and force-pushing. This will streamline reviewer tasks. Once the review is nominally complete, the reviewer should perform a squash rebase and force-push as usual for merging.
### Problem description
Code reviews may go through several rounds of review, in which the author uploads new code to address previous comments, typically by amending their previous commits. Due to limitations of GitHub, it may not be immediately obvious to reviewers which parts of the uploaded code are new to them, so they need to review any files that have changed, usually looking at the entire delta of the PR again. This is particularly burdensome on large PRs with several active reviewers.
### Proposed change
Authors should avoid pushing amended commits during code review. Instead, they should push fix-up commits addressing reviewer comments. When reviewers are satisfied, authors should squash in the fix-up commits and force-push prior to final approval and merging.
## Detailed RFC
### Background
Git's interactive rebase supports `squash` and `fixup` operations on commits. `squash` combines two or more commits into one and combines their commit messages into one. `fixup` combines the commits into one but only retains the first commit message. "Fix-up commits" or "squash commits" are commits that a user intends to `fixup` or `squash` later, typically indicating the base commit onto which the new commit will be fixed up or squashed, e.g.
```
deadbeef fixup: Declare API # Commit message: Adds comments
2468abab Define API
1234abcd Declare API
```
### Proposed change (Detailed)
* During code review, authors will avoid pushing amended commits and instead push fix-up commits containing their new changes.
* Reviewers will review new commits until they are satisfied that the new commits address the concerns they raised about the previous commits.
* When reviewers have satisfied their concerns, authors will squash the previously created fix-up commits and force-push, retaining the original base commit with `git rebase --keep-base`.
* Reviewers will verify that the author did not make any significant changes during this operation, which should be straightforward in GitHub's compare UI. Reviewers will also verify that the overall commit structure is still to their liking. If those things aren't true, review will continue.
* Reviewers will approve and merge the PR.
### Dependencies
I think the blast radius of this is pretty minimal. We could try it on a few PRs and see how it goes before making any policy changes.
### Concerns and Unresolved Questions
* Git has facilities to automate `fixup` and `squash` operations like so:
```
git commit # Original commit 1234, "Add a feature"
# Fix formatting
git commit --fixup=1234 # Create a fix-up commit
# Now there are two commits
git rebase --autosquash
# Now there is just one commit, "Add a feature," with correct formatting
```
These operations create and rely on commit headlines with the format `fixup: Previous headline` or `squash: Previous headline`.
Authors could use this feature to automate some of the steps. However, this would get in the way of using `--autosquash` locally on that PR for their own purposes. I think it should be fine if authors use `FIXUP: Previous message` or similar for the fix-up commits they want to upload for review. The reviewer experience should be the same either way.
* This process is probably overkill for small PRs of a few lines that are trivial to re-review. I think fix-ups should be _recommended_ for large PRs.
## Alternatives
### Habitually not rebasing
This proposal primarily addresses weaknesses of GitHub's UI for before-after comparison of force pushes. One annoying behavior is that it will show diffs for every file in the repository, not just those changed by the commits. This makes it basically unusable if the author has rebased the PR. Authors could avoid this by only using `git rebase --keep-base` during the review. Reviewers would then be able to use GitHub to see just the relatively small changes that the author made in response to review comments.
However, this would retain other weaknesses of the GitHub push comparison UI:
* It does not show in which commits the new changes will ultimately land.
* It also does not allow the reviewer to comment in the comparison view, so they need to navigate to a different page with more code and find the code they were looking at previously.
* It only works for the delta introduced by the latest push. (Actually, I don't know if this is true. I've seen some PRs with multiple compare buttons and some with only one.)
| 1.0 | Use fixup commits during code review - ## Introduction
During code review, when PR authors make changes, they should upload fix-up commits containing those changes instead of changing the existing commits and force-pushing. This will streamline reviewer tasks. Once the review is nominally complete, the reviewer should perform a squash rebase and force-push as usual for merging.
### Problem description
Code reviews may go through several rounds of review, in which the author uploads new code to address previous comments, typically by amending their previous commits. Due to limitations of GitHub, it may not be immediately obvious to reviewers which parts of the uploaded code are new to them, so they need to review any files that have changed, usually looking at the entire delta of the PR again. This is particularly burdensome on large PRs with several active reviewers.
### Proposed change
Authors should avoid pushing amended commits during code review. Instead, they should push fix-up commits addressing reviewer comments. When reviewers are satisfied, authors should squash in the fix-up commits and force-push prior to final approval and merging.
## Detailed RFC
### Background
Git's interactive rebase supports `squash` and `fixup` operations on commits. `squash` combines two or more commits into one and combines their commit messages into one. `fixup` combines the commits into one but only retains the first commit message. "Fix-up commits" or "squash commits" are commits that a user intends to `fixup` or `squash` later, typically indicating the base commit onto which the new commit will be fixed up or squashed, e.g.
```
deadbeef fixup: Declare API # Commit message: Adds comments
2468abab Define API
1234abcd Declare API
```
### Proposed change (Detailed)
* During code review, authors will avoid pushing amended commits and instead push fix-up commits containing their new changes.
* Reviewers will review new commits until they are satisfied that the new commits address the concerns they raised about the previous commits.
* When reviewers have satisfied their concerns, authors will squash the previously created fix-up commits and force-push, retaining the original base commit with `git rebase --keep-base`.
* Reviewers will verify that the author did not make any significant changes during this operation, which should be straightforward in GitHub's compare UI. Reviewers will also verify that the overall commit structure is still to their liking. If those things aren't true, review will continue.
* Reviewers will approve and merge the PR.
### Dependencies
I think the blast radius of this is pretty minimal. We could try it on a few PRs and see how it goes before making any policy changes.
### Concerns and Unresolved Questions
* Git has facilities to automate `fixup` and `squash` operations like so:
```
git commit # Original commit 1234, "Add a feature"
# Fix formatting
git commit --fixup=1234 # Create a fix-up commit
# Now there are two commits
git rebase --autosquash
# Now there is just one commit, "Add a feature," with correct formatting
```
These operations create and rely on commit headlines with the format `fixup: Previous headline` or `squash: Previous headline`.
Authors could use this feature to automate some of the steps. However, this would get in the way of using `--autosquash` locally on that PR for their own purposes. I think it should be fine if authors use `FIXUP: Previous message` or similar for the fix-up commits they want to upload for review. The reviewer experience should be the same either way.
* This process is probably overkill for small PRs of a few lines that are trivial to re-review. I think fix-ups should be _recommended_ for large PRs.
## Alternatives
### Habitually not rebasing
This proposal primarily addresses weaknesses of GitHub's UI for before-after comparison of force pushes. One annoying behavior is that it will show diffs for every file in the repository, not just those changed by the commits. This makes it basically unusable if the author has rebased the PR. Authors could avoid this by only using `git rebase --keep-base` during the review. Reviewers would then be able to use GitHub to see just the relatively small changes that the author made in response to review comments.
However, this would retain other weaknesses of the GitHub push comparison UI:
* It does not show in which commits the new changes will ultimately land.
* It also does not allow the reviewer to comment in the comparison view, so they need to navigate to a different page with more code and find the code they were looking at previously.
* It only works for the delta introduced by the latest push. (Actually, I don't know if this is true. I've seen some PRs with multiple compare buttons and some with only one.)
| process | use fixup commits during code review introduction during code review when pr authors make changes they should upload fix up commits containing those changes instead of changing the existing commits and force pushing this will streamline reviewer tasks once the review is nominally complete the reviewer should perform a squash rebase and force push as usual for merging problem description code reviews may go through several rounds of review in which the author uploads new code to address previous comments typically by amending their previous commits due to limitations of github it may not be immediately obvious to reviewers which parts of the uploaded code are new to them so they need to review any files that have changed usually looking at the entire delta of the pr again this is particularly burdensome on large prs with several active reviewers proposed change authors should avoid pushing amended commits during code review instead they should push fix up commits addressing reviewer comments when reviewers are satisfied authors should squash in the fix up commits and force push prior to final approval and merging detailed rfc background git s interactive rebase supports squash and fixup operations on commits squash combines two or more commits into one and combines their commit messages into one fixup combines the commits into one but only retains the first commit message fix up commits or squash commits are commits that a user intends to fixup or squash later typically indicating the base commit onto which the new commit will be fixed up or squashed e g deadbeef fixup declare api commit message adds comments define api declare api proposed change detailed during code review authors will avoid pushing amended commits and instead push fix up commits containing their new changes reviewers will review new commits until they are satisfied that the new commits address the concerns they raised about the previous commits when reviewers have satisfied their 
concerns authors will squash the previously created fix up commits and force push retaining the original base commit with git rebase keep base reviewers will verify that the author did not make any significant changes during this operation which should be straightforward in github s compare ui reviewers will also verify that the overall commit structure is still to their liking if those things aren t true review will continue reviewers will approve and merge the pr dependencies i think the blast radius of this is pretty minimal we could try it on a few prs and see how it goes before making any policy changes concerns and unresolved questions git has facilities to automate fixup and squash operations like so git commit original commit add a feature fix formatting git commit fixup create a fix up commit now there are two commits git rebase autosquash now there is just one commit add a feature with correct formatting these operations create and rely on commit headlines with the format fixup previous headline or squash previous headline authors could use this feature to automate some of the steps however this would get in the way of using autosquash locally on that pr for their own purposes i think it should be fine if authors use fixup previous message or similar for the fix up commits they want to upload for review the reviewer experience should be the same either way this process is probably overkill for small prs of a few lines that are trivial to re review i think fix ups should be recommended for large prs alternatives habitually not rebasing this proposal primarily addresses weaknesses of github s ui for before after comparison of force pushes one annoying behavior is that it will show diffs for every file in the repository not just those changed by the commits this makes it basically unusable if the author has rebased the pr authors could avoid this by only using git rebase keep base during the review reviewers would then be able to use github to see just the 
relatively small changes that the author made in response to review comments however this would retain other weaknesses of the github push comparison ui it does not show in which commits the new changes will ultimately land it also does not allow the reviewer to comment in the comparison view so they need to navigate to a different page with more code and find the code they were looking at previously it only works for the delta introduced by the latest push actually i don t know if this is true i ve seen some prs with multiple compare buttons and some with only one | 1 |
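The fix-up/autosquash cycle this row describes is easiest to see as actual commands (the normalized text above has stripped the `--` flags from `git commit --fixup` and `git rebase --autosquash`). A hedged sketch in a throwaway repository — paths, messages, and identities are illustrative, not project policy:

```shell
#!/bin/sh
# Sketch of the fix-up review workflow in a throwaway repository.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q .
git config user.email author@example.com
git config user.name "Example Author"

# 1. The original commit that goes up for review.
echo 'print("feature")' > feature.py
git add feature.py
git commit -q -m "Add a feature"

# 2. Address review comments with a fix-up commit instead of amending,
#    so reviewers can inspect just the new change.
echo '# formatting fix' >> feature.py
git add feature.py
git commit -q --fixup HEAD        # headline becomes "fixup! Add a feature"
git log --oneline                 # two commits visible during review

# 3. Once reviewers are satisfied, squash the fix-up into its base commit.
#    GIT_SEQUENCE_EDITOR=true accepts the autosquash-reordered todo list as-is.
GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash --root
git log --oneline                 # back to a single "Add a feature" commit
```

`git rebase --keep-base`, which the row recommends for the final force-push, works the same way but replays the branch only onto its original base commit instead of the root.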
492,251 | 14,199,270,028 | IssuesEvent | 2020-11-16 01:50:21 | ita-social-projects/horondi_client_fe | https://api.github.com/repos/ita-social-projects/horondi_client_fe | closed | [About us] There are no images on the page | UI bug priority: medium severity: trivial | **Environment**: Windows X, Google Chrome: Version 86.0.4240.75 (Official Build) (64-bit)
**Reproducible**: always.
**Build found**: 23/Oct/20 | DB: 23
**Preconditions**:
Run HORONDI localhost
**Steps to reproduce**:
1. Go to HORONDI homepage (http://localhost:3000/)
2. Scroll down to the footer
3. Click on 'About us'
**Actual result**:
There are no images on the page (see About-us.png) 
**Expected result**:
Images are present on the page
User story and test case links
User story #42 As a user I want to see Footer
[Test case](https://jira.softserve.academy/browse/PAH-29)
| 1.0 | [About us] There are no images on the page - **Environment**: Windows X, Google Chrome: Version 86.0.4240.75 (Official Build) (64-bit)
**Reproducible**: always.
**Build found**: 23/Oct/20 | DB: 23
**Preconditions**:
Run HORONDI localhost
**Steps to reproduce**:
1. Go to HORONDI homepage (http://localhost:3000/)
2. Scroll down to the footer
3. Click on 'About us'
**Actual result**:
There are no images on the page (see About-us.png) 
**Expected result**:
Images are present on the page
User story and test case links
User story #42 As a user I want to see Footer
[Test case](https://jira.softserve.academy/browse/PAH-29)
| non_process | there are no images on the page environment windows x google chrome version official build bit reproducible always build found oct db preconditions run horondi localhost steps to reproduce go to horondi homepage scroll down to the footer click on about us actual result there are no images on the page see about us png expected result images are present on the page user story and test case links user story as a user i want to see footer | 0 |
512,130 | 14,888,660,079 | IssuesEvent | 2021-01-20 20:11:32 | magento/magento2 | https://api.github.com/repos/magento/magento2 | closed | Vimeo Product Video Not Found | Issue: Confirmed Priority: P2 Progress: done Reproduced on 2.4.x | <!---
Please review our guidelines before adding a new issue: https://github.com/magento/magento2/wiki/Issue-reporting-guidelines
Fields marked with (*) are required. Please don't remove the template.
-->
### Preconditions (*)
<!---
Magento 2.4.0 and Magento 2.4.1
-->
1. It was previously working, tested in Magento 2.4.0.
### Steps to reproduce (*)
1. Upgrade to 2.4.1; keep another cloned environment on 2.4.0 without upgrading
2. Go to product page
3. Insert a product video in product page using the regular insert video button. It previously worked.
4. Few days later try to do it again. Vimeo returns 404
5. Try to upload to another Vimeo account to check if a setting may be blocking. No, it returns 404
6. curl -i https://vimeo.com/videoId. It returns 200 and a response with json and ajax call
7. Try to get it in the browser; Vimeo downloads a file and does not print information on the client
8. Try the same in postman, it returns Json
9. Try to debug Magento
10. Check if it is Cors. Vimeo Cors is declared in Csp policy
### Expected result (*)
<!--- Tell us what do you expect to happen. -->
1. Insert the URL in Product Video and it is found
2. Video displays in fotorama
### Actual result (*)
1. Video not found. Unable to display Vimeo on Fotorama
---
Please provide [Severity](https://devdocs.magento.com/guides/v2.3/contributor-guide/contributing.html#backlog) assessment for the Issue as Reporter. This information will help during Confirmation and Issue triage processes.
- [ ] Severity: **S0** _- Affects critical data or functionality and leaves users without workaround._
- [ ] Severity: **S1** _- Affects critical data or functionality and forces users to employ a workaround._
- [ ] Severity: **S2** _- Affects non-critical data or functionality and forces users to employ a workaround._
- [ ] Severity: **S3** _- Affects non-critical data or functionality and does not force users to employ a workaround._
- [X ] Severity: **S4** _- Affects aesthetics, professional look and feel, “quality” or “usability”._
| 1.0 | Vimeo Product Video Not Found - <!---
Please review our guidelines before adding a new issue: https://github.com/magento/magento2/wiki/Issue-reporting-guidelines
Fields marked with (*) are required. Please don't remove the template.
-->
### Preconditions (*)
<!---
Magento 2.4.0 and Magento 2.4.1
-->
1. It was previously working, tested in Magento 2.4.0.
### Steps to reproduce (*)
1. Upgrade to 2.4.1; keep another cloned environment on 2.4.0 without upgrading
2. Go to product page
3. Insert a product video in product page using the regular insert video button. It previously worked.
4. Few days later try to do it again. Vimeo returns 404
5. Try to upload to another Vimeo account to check if a setting may be blocking. No, it returns 404
6. curl -i https://vimeo.com/videoId. It returns 200 and a response with json and ajax call
7. Try to get it in the browser; Vimeo downloads a file and does not print information on the client
8. Try the same in postman, it returns Json
9. Try to debug Magento
10. Check if it is Cors. Vimeo Cors is declared in Csp policy
### Expected result (*)
<!--- Tell us what do you expect to happen. -->
1. Insert the URL in Product Video and it is found
2. Video displays in fotorama
### Actual result (*)
1. Video not found. Unable to display Vimeo on Fotorama
---
Please provide [Severity](https://devdocs.magento.com/guides/v2.3/contributor-guide/contributing.html#backlog) assessment for the Issue as Reporter. This information will help during Confirmation and Issue triage processes.
- [ ] Severity: **S0** _- Affects critical data or functionality and leaves users without workaround._
- [ ] Severity: **S1** _- Affects critical data or functionality and forces users to employ a workaround._
- [ ] Severity: **S2** _- Affects non-critical data or functionality and forces users to employ a workaround._
- [ ] Severity: **S3** _- Affects non-critical data or functionality and does not force users to employ a workaround._
- [X ] Severity: **S4** _- Affects aesthetics, professional look and feel, “quality” or “usability”._
| non_process | vimeo product video not found please review our guidelines before adding a new issue fields marked with are required please don t remove the template preconditions magento and magento it was previoulys working tested in magento steps to reproduce so upgrade to do not upgrade and remain in in another cloned enviroment go to product page insert a product video in product page using the regular insert video button it previously worked few days later try to do it again vimeo returns try to upload to another vimeo account to check if a setting may be blocking no it returns curl i it returns and a response with json and ajax call try to get in browser vimeo dowload a file a does not print information on client try the same in postman it returns json try to debug magento check if it is cors vimeo cors is declared in csp policy expected result insert url in product video and it is found video displays in fotorama actual result video not found unable to display vimeo on fotorama please provide assessment for the issue as reporter this information will help during confirmation and issue triage processes severity affects critical data or functionality and leaves users without workaround severity affects critical data or functionality and forces users to employ a workaround severity affects non critical data or functionality and forces users to employ a workaround severity affects non critical data or functionality and does not force users to employ a workaround severity affects aesthetics professional look and feel “quality” or “usability” | 0 |
10,706 | 3,135,051,027 | IssuesEvent | 2015-09-10 13:39:05 | handsontable/handsontable | https://api.github.com/repos/handsontable/handsontable | closed | Dropdown doesn't allow more than 33 items | Bug Cell type: autocomplete / dropdown / handsontable Priority: high Released Tested | See example below. There are more than 60 elements in array and yet you can only see 33 of them in dropdown
http://jsfiddle.net/haqog901/1/ | 1.0 | Dropdown doesn't allow more than 33 items - See example below. There are more than 60 elements in array and yet you can only see 33 of them in dropdown
http://jsfiddle.net/haqog901/1/ | non_process | dropdown doesn t allow more than items see example below there are more than elements in array and yet you can only see of them in dropdown | 0 |
139,146 | 20,782,380,391 | IssuesEvent | 2022-03-16 15:48:22 | department-of-veterans-affairs/va.gov-team | https://api.github.com/repos/department-of-veterans-affairs/va.gov-team | closed | Public Websites: Sprint 39 Priorities | vsa-public-websites frontend vsa planned-work Design -> Frontend | ## Priorities
1. Build out CLP FE Panel tickets - Panel 1 [#17636] & Panel 2 Epics [#17642]
2. Finalizing R&S functionality (post code freeze) - Remaining tickets from Sprint 38
3. Outstanding tickets from Sprint 38 not already included in the priorities listed above
4. Intake Requests for January, along with some applicable Tech Debt (508/a11y issues)
5. Transformer followup and close out of #16440 (Pending Platform comparison tool)
6. Follow up with CLP Content Nodes for FE.
### Campaign Landing Page MVP 1.0
**FE Transformer**
- [x] [FE] Create CLP Template + graphql query [Transformers] #17514
- [ ] [FE] Create transformer/output/input for the CLP [Transformers] #17513
- [x] [Design/PM] Work with a11y to review mural design. 508/a11y Review #18756
**FE Panel 1 [#17636]**
- [x] [FE] CLP Hero Banner Node Title, Header and Blurb H3 #17638
- [x] [FE] CLP Field Images (Hero, Field Image, Field Image Alt) #17640
- [x] [FE] CLP Primary CTA (Label, Link) #17641
**FE Panel 2 Epics [#17642]**
- [x] [FE] CLP Field Title and Blurb H2 “Why this matters" #17644
- [x] [FE] CLP "Why this matters" Field Secondary CTA (Label, Link) #17645
### Resources and Support
Priority R&S Breakdown: WIP [MVP 1.1 EPIC #15588] [Iterate EPIC #15584]
**FE**
- [x] [FE] PDF/Print Save Checklist - mobile and desktop - R&S 1.1 #18337
- [x] [FE] Browse by audience RS homepage FE update (WIP - Nick) #17276 `(BLOCKED)`
- [x] [FE] Auto-expand accordion items if the URL hash matches the item ID #15798
### Transformers
- [x] [PM] Transformer followup and close out of #16440
- [x] [PM] Determine ETA on Platform CMS Transformer Comparison Tool
### Cerner Support
Main objective: Acquire facility list to determine what facilities need to be set accordingly.
### Intake Request - Technical Debt
**FE**
- [x] [FE/Content] Analytics Implementation for COVID FAQ Accordion Embedded Links #13727 (WIP - Nick)
### Kanban Board
_Please select a ticket and include [WIP - [NAME]] next to the ticket that is being worked._ Pick up tickets by priority on the `To Do ` section to begin and move ticket through kanban workflow until `Done`. Please include resolution steps on tickets along with instructions if for some reason you are unable to complete the ticket during this sprint.
Please tag @brianalloyd on tickets when moved to **Done** section for validation and closeout. Please reach out if you have any questions or concerns in the interim. Welcome to **PW team** (for this sprint) we're excited to have the help.
<details>
<summary> TO DO </summary>
</details>
<details>
<summary> ON-GOING </summary>
- [ ] [FE] Enhancement: RS mobile views, left margin is too smooshed to the edge and cutting off text #17470 (WIP Sandra)
</details>
<details>
<summary> DONE </summary>
- [x] [FE] Link TTY: 711 number in footer and right rail of hub landing pages #18151 (Sandra) @brianalloyd
- [x] [FE] [COGNITION]: Footer links are inconsistently styled and missing interactive cues #3179 (Sandra) @brianalloyd
- [x] [FE] Main nav's "Contact Us" button can wrap when logged-in user has a long name #18412 (Sandra) @brianalloyd
- [x] [FE] [COGNITION]: Multiple ambiguities with linked text in content block #3556 (Sandra) @brianalloyd
- [x] [FE] Use redesigned "In this section" button for triggering sidebar on mobile devices across benefit pages #13325 (WIP Erik)
- [x] [FE] VA.gov mobile nav bugs #8256 (WIP Erik)
| 1.0 | Public Websites: Sprint 39 Priorities - ## Priorities
1. Build out CLP FE Panel tickets - Panel 1 [#17636] & Panel 2 Epics [#17642]
2. Finalizing R&S functionality (post code freeze) - Remaining tickets from Sprint 38
3. Outstanding tickets from Sprint 38 not already included in the priorities listed above
4. Intake Requests for January, along with some applicable Tech Debt (508/a11y issues)
5. Transformer followup and close out of #16440 (Pending Platform comparison tool)
6. Follow up with CLP Content Nodes for FE.
### Campaign Landing Page MVP 1.0
**FE Transformer**
- [x] [FE] Create CLP Template + graphql query [Transformers] #17514
- [ ] [FE] Create transformer/output/input for the CLP [Transformers] #17513
- [x] [Design/PM] Work with a11y to review mural design. 508/a11y Review #18756
**FE Panel 1 [#17636]**
- [x] [FE] CLP Hero Banner Node Title, Header and Blurb H3 #17638
- [x] [FE] CLP Field Images (Hero, Field Image, Field Image Alt) #17640
- [x] [FE] CLP Primary CTA (Label, Link) #17641
**FE Panel 2 Epics [#17642]**
- [x] [FE] CLP Field Title and Blurb H2 “Why this matters" #17644
- [x] [FE] CLP "Why this matters" Field Secondary CTA (Label, Link) #17645
### Resources and Support
Priority R&S Breakdown: WIP [MVP 1.1 EPIC #15588] [Iterate EPIC #15584]
**FE**
- [x] [FE] PDF/Print Save Checklist - mobile and desktop - R&S 1.1 #18337
- [x] [FE] Browse by audience RS homepage FE update (WIP - Nick) #17276 `(BLOCKED)`
- [x] [FE] Auto-expand accordion items if the URL hash matches the item ID #15798
### Transformers
- [x] [PM] Transformer followup and close out of #16440
- [x] [PM] Determine ETA on Platform CMS Transformer Comparison Tool
### Cerner Support
Main objective: Acquire facility list to determine what facilities need to be set accordingly.
### Intake Request - Technical Debt
**FE**
- [x] [FE/Content] Analytics Implementation for COVID FAQ Accordion Embedded Links #13727 (WIP - Nick)
### Kanban Board
_Please select a ticket and include [WIP - [NAME]] next to the ticket that is being worked._ Pick up tickets by priority on the `To Do ` section to begin and move ticket through kanban workflow until `Done`. Please include resolution steps on tickets along with instructions if for some reason you are unable to complete the ticket during this sprint.
Please tag @brianalloyd on tickets when moved to **Done** section for validation and closeout. Please reach out if you have any questions or concerns in the interim. Welcome to **PW team** (for this sprint) we're excited to have the help.
<details>
<summary> TO DO </summary>
</details>
<details>
<summary> ON-GOING </summary>
- [ ] [FE] Enhancement: RS mobile views, left margin is too smooshed to the edge and cutting off text #17470 (WIP Sandra)
</details>
<details>
<summary> DONE </summary>
- [x] [FE] Link TTY: 711 number in footer and right rail of hub landing pages #18151 (Sandra) @brianalloyd
- [x] [FE] [COGNITION]: Footer links are inconsistently styled and missing interactive cues #3179 (Sandra) @brianalloyd
- [x] [FE] Main nav's "Contact Us" button can wrap when logged-in user has a long name #18412 (Sandra) @brianalloyd
- [x] [FE] [COGNITION]: Multiple ambiguities with linked text in content block #3556 (Sandra) @brianalloyd
- [x] [FE] Use redesigned "In this section" button for triggering sidebar on mobile devices across benefit pages #13325 (WIP Erik)
- [x] [FE] VA.gov mobile nav bugs #8256 (WIP Erik)
| non_process | public websites sprint priorities priorities build out clp fe panel tickets panel panel epics finalizing r s functionality post code freeze remaining tickets from sprint outstanding tickets from sprint not already included in the priorities listed above intake requests for january along with some applicable tech debt issues transformer followup and close out of pending platform comparison tool follow up with clp content nodes for fe campaign landing page mvp fe transformer create clp template graphql query create transformer output input for the clp work with to review mural design review fe panel clp hero banner node title header and blurb clp field images hero field image field image alt clp primary cta label link fe panel epics clp field title and blurb “why this matters clp why this matters field secondary cta label link resources and support priority r s breakdown wip fe pdf print save checklist mobile and desktop r s browse by audience rs homepage fe update wip nick blocked auto expand accordion items if the url hash matches the item id transformers transformer followup and close out of determine eta on platform cms transformer comparison tool cerner support main objective acquire facility list to determine what facilities need to be set accordingly intake request techical debt fe analytics implementation for covid faq accordion embedded links wip nick kanban board please select a ticket and include next to the ticket that is being worked pick up tickets by priority on the to do section to begin and move ticket through kanban workflow until done please include resolution steps on tickets along with instructions if for some reason you are unable to complete the ticket during this sprint please tag brianalloyd on tickets when moved to done section for validation and closeout please reach out if you have any questions or concerns in the interim welcome to pw team for this sprint we re excited to have the help to do on going enhancement rs mobile 
views left margin is too smooshed to the edge and cutting off text wip sandra done link tty number in footer and right rail of hub landing pages sandra brianalloyd footer links are inconsistently styled and missing interactive cues sandra brianalloyd main nav s contact us button can wrap when logged in user has a long name sandra brianalloyd multiple ambiguities with linked text in content block sandra brianalloyd use redesigned in this section button for triggering sidebar on mobile devices across benefit pages wip erik va gov mobile nav bugs wip erik | 0 |
10,114 | 13,044,162,215 | IssuesEvent | 2020-07-29 03:47:30 | tikv/tikv | https://api.github.com/repos/tikv/tikv | closed | UCP: Migrate scalar function `NowWithArg` from TiDB | challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor |
## Description
Port the scalar function `NowWithArg` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @sticnarf
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
| 2.0 | UCP: Migrate scalar function `NowWithArg` from TiDB -
## Description
Port the scalar function `NowWithArg` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @sticnarf
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
| process | ucp migrate scalar function nowwitharg from tidb description port the scalar function nowwitharg from tidb to coprocessor score mentor s sticnarf recommended skills rust programming learning materials already implemented expressions ported from tidb | 1 |
110,249 | 4,424,069,904 | IssuesEvent | 2016-08-16 11:01:09 | inf3rno/o3 | https://api.github.com/repos/inf3rno/o3 | opened | Extending native classes - is Class(native, config) really the best solution? | priority:normal undetermined | ```js
var Class = function () {
var options = Object.create({
Source: Object,
config: {},
buildArgs: []
});
function checkOption(option) {
var key = "config";
if (option instanceof Function)
key = "Source";
else if (option instanceof Array)
key = "buildArgs";
else if (option instanceof Object)
key = "config";
else
throw new Error("Invalid configuration option.");
if (options.hasOwnProperty(key))
throw new Error("Duplicated configuration option: " + key + ".");
options[key] = option;
}
for (var index = 0, length = arguments.length; index < length; ++index)
checkOption(arguments[index]);
var Source = options.Source,
config = options.config,
buildArgs = options.buildArgs;
return (Source.extend || Class.extend).call(Source, config, buildArgs);
};
```
Currently we use `Class(Ancestor, properties)` to do `Ancestor.extend(properties)` because native classes like Error or Object do not have these `Class` related methods. Wouldn't it be better to use a wrapper like `o3(Object).extend(properties)`? Does the Class really have these features? What about using `new Class(propeties)` to imitate `Class.extend()`? What about not having Class() at all, just a noop instead of that? | 1.0 | Extending native classes - is Class(native, config) really the best solution? - ```js
var Class = function () {
var options = Object.create({
Source: Object,
config: {},
buildArgs: []
});
function checkOption(option) {
var key = "config";
if (option instanceof Function)
key = "Source";
else if (option instanceof Array)
key = "buildArgs";
else if (option instanceof Object)
key = "config";
else
throw new Error("Invalid configuration option.");
if (options.hasOwnProperty(key))
throw new Error("Duplicated configuration option: " + key + ".");
options[key] = option;
}
for (var index = 0, length = arguments.length; index < length; ++index)
checkOption(arguments[index]);
var Source = options.Source,
config = options.config,
buildArgs = options.buildArgs;
return (Source.extend || Class.extend).call(Source, config, buildArgs);
};
```
Currently we use `Class(Ancestor, properties)` to do `Ancestor.extend(properties)` because native classes like Error or Object do not have these `Class` related methods. Wouldn't it be better to use a wrapper like `o3(Object).extend(properties)`? Does the Class really have these features? What about using `new Class(propeties)` to imitate `Class.extend()`? What about not having Class() at all, just a noop instead of that? | non_process | extending native classes is class native config really the best solution js var class function var options object create source object config buildargs function checkoption option var key config if option instanceof function key source else if option instanceof array key buildargs else if option instanceof object key config else throw new error invalid configuration option if options hasownproperty key throw new error duplicated configuration option key options option for var index length arguments length index length index checkoption arguments var source options source config options config buildargs options buildargs return source extend class extend call source config buildargs currently we use class ancestor properties to do ancestor extend properties because native classes like error or object do not have these class related methods wouldn t it be better to use a wrapper like object extend properties does the class really have these features what about using new class propeties to imitate class extend what about not having class at all just a noop instead of that | 0 |
226,449 | 7,519,441,417 | IssuesEvent | 2018-04-12 11:37:51 | 0-complexity/openvcloud | https://api.github.com/repos/0-complexity/openvcloud | closed | disabled image appear in end user portal | priority_minor state_verification type_feature | #### Detailed description
When we disable an image, it still appears in the end-user portal.
A user can't create a machine from it, and it raises a "no available resources to launch this VM" error.
We need it disabled for the end user too, so that he can't see it.
#### Steps to reproduce
#### Relevant stacktraces
| Please make sure to include the stacktrace found under /grid/error conditions in the portal.
Adding a link alone is not sufficient as it might have expired by the time the issue is looked at.
#### Installation information
- environment version:
- operating system:
- portal verison
- ovc version
| 1.0 | disabled image appear in end user portal - #### Detailed description
When we disable an image, it still appears in the end-user portal.
A user can't create a machine from it, and it raises a "no available resources to launch this VM" error.
We need it disabled for the end user too, so that he can't see it.
#### Steps to reproduce
#### Relevant stacktraces
| Please make sure to include the stacktrace found under /grid/error conditions in the portal.
Adding a link alone is not sufficient as it might have expired by the time the issue is looked at.
#### Installation information
- environment version:
- operating system:
- portal verison
- ovc version
| non_process | disabled image appear in end user portal detailed description when we disable image it s still appear in end user portal user can t create image from it and will erais no available resources to lunch this vm we need it to disabled from end user too and he can t see it steps to reproduce relevant stacktraces please make sure to include the stacktrace found under grid error conditions in the portal adding a link alone is not sufficient as it might have expired by the time the issue is looked at installation information environment version operating system portal verison ovc version | 0 |
17,232 | 22,946,820,227 | IssuesEvent | 2022-07-19 01:24:26 | SigNoz/signoz | https://api.github.com/repos/SigNoz/signoz | closed | Enhance external calls inference logic | enhancement application-metrics-page signozspanmetricsprocessor | It should consider rpc calls. It should look at the url parts (`http.host`, `host.port`, `http.scheme`) if the attribute `http.url` doesn't exist. There are not so widely used things such as Faas which still fall under the category.
- [X] #1305
- [X] FasS
| 1.0 | Enhance external calls inference logic - It should consider rpc calls. It should look at the url parts (`http.host`, `host.port`, `http.scheme`) if the attribute `http.url` doesn't exist. There are not so widely used things such as Faas which still fall under the category.
- [X] #1305
- [X] FasS
| process | enhance external calls inference logic it should consider rpc calls it should look at the url parts http host host port http scheme if the attribute http url doesn t exist there are not so widely used things such as faas which still fall under the category fass | 1 |
97,055 | 8,641,843,847 | IssuesEvent | 2018-11-24 22:19:47 | jpbarraca/pai | https://api.github.com/repos/jpbarraca/pai | closed | Error when using arming in stay mode | bug mg testing required | When I arm the system in stay mode, I get the following error in the console:
Currently I am still testing and familiarizing myself with everything. So it might very possibly be something I am doing wrong. It seem to still work, as the alarm is armed in stay mode.
I would publish either arm_stay or STAY_ARM (depending if I have MQTT_HOMEBRIDGE_ENABLE as True or False).
>topic=paradox/control/partitions/Area_1, payload=arm_stay
>topic=paradox/control/partitions/Area_1, payload=STAY_ARM
>topic=paradox/control/partitions/all, payload=arm_stay
>topic=paradox/control/partitions/all, payload=STAY_ARM
I experience the same when using the existing remote for the alarm.
**Details**:
Panel MG5050 version 5.33 build 2
IP150 Module (downgraded to <4 firmware)
**Current user.conf**:
>LOGGING_LEVEL_CONSOLE = logging.DEBUG
>CONNECTION_TYPE = 'IP'
>IP_CONNECTION_HOST = '1.1.1.1'
>IP_CONNECTION_PASSWORD = b'my_password'
>MQTT_ENABLE = True
>MQTT_HOST = '1.1.1.2'
>MQTT_USERNAME = None
>MQTT_PASSWORD = None
>PASSWORD = None
>MQTT_HOMEBRIDGE_ENABLE = True _**(or False)**_
Test: disarm the system, wait a few seconds, and arm it again. Below is the console output with DEBUG switched on.
**Console output**:
REDACTED | 1.0 | Error when using arming in stay mode - When I arm the system in stay mode, I get the following error in the console:
Currently I am still testing and familiarizing myself with everything. So it might very possibly be something I am doing wrong. It seem to still work, as the alarm is armed in stay mode.
I would publish either arm_stay or STAY_ARM (depending if I have MQTT_HOMEBRIDGE_ENABLE as True or False).
>topic=paradox/control/partitions/Area_1, payload=arm_stay
>topic=paradox/control/partitions/Area_1, payload=STAY_ARM
>topic=paradox/control/partitions/all, payload=arm_stay
>topic=paradox/control/partitions/all, payload=STAY_ARM
I experience the same when using the existing remote for the alarm.
**Details**:
Panel MG5050 version 5.33 build 2
IP150 Module (downgraded to <4 firmware)
**Current user.conf**:
>LOGGING_LEVEL_CONSOLE = logging.DEBUG
>CONNECTION_TYPE = 'IP'
>IP_CONNECTION_HOST = '1.1.1.1'
>IP_CONNECTION_PASSWORD = b'my_password'
>MQTT_ENABLE = True
>MQTT_HOST = '1.1.1.2'
>MQTT_USERNAME = None
>MQTT_PASSWORD = None
>PASSWORD = None
>MQTT_HOMEBRIDGE_ENABLE = True _**(or False)**_
Test: disarm the system, wait a few seconds, and arm it again. Below is the console output with DEBUG switched on.
**Console output**:
REDACTED | non_process | error when using arming in stay mode when i arm the system in stay mode i get the following error in the console currently i am still testing and familiarizing myself with everything so it might very possibly be something i am doing wrong it seem to still work as the alarm is armed in stay mode i would publish either arm stay or stay arm depending if i have mqtt homebridge enable as true or false topic paradox control partitions area payload arm stay topic paradox control partitions area payload stay arm topic paradox control partitions all payload arm stay topic paradox control partitions all payload stay arm i experience the same when using the existing remote for the alarm details panel version build module downgraded to firmware current user conf logging level console logging debug connection type ip ip connection host ip connection password b my password mqtt enable true mqtt host mqtt username none mqtt password none password none mqtt homebridge enable true or false test disam the system wait a few seconds and arm it again below is the console output with debug switched on console output redacted | 0 |
16,400 | 21,183,180,057 | IssuesEvent | 2022-04-08 09:57:16 | googleapis/google-api-dotnet-client | https://api.github.com/repos/googleapis/google-api-dotnet-client | opened | Check whether there are conflicts with an SA using self signed JWTs and retrying token refreshes for Apiary | type: process priority: p3 | This is mostly for me to remember to look at this.
If the conflicts exist:
- They wouldn't cause any errors; the side effect is that on occasion a token is fetched from the server when a self signed JWT could be used.
- It only manifests on Apiary libraries.
- It wouldn't manifest by default as a standalone credential does not use self signed JWTs by default. | 1.0 | Check whether there are conflicts with an SA using self signed JWTs and retrying token refreshes for Apiary - This is mostly for me to remember to look at this.
If the conflicts exist:
- They wouldn't cause any errors; the side effect is that on occasion a token is fetched from the server when a self signed JWT could be used.
- It only manifests on Apiary libraries.
- It wouldn't manifest by default as a standalone credential does not uses self signed JWTs by default. | process | check whether there are conflicts with an sa using self signed jwts and retyring token refreshes for apiary this is mostly for me to remember to look at this if the conflicts exist they wouldn t cause any errors the side effect is that on ocassion a token is fetched from the server when a self signed jwt could be used it only manifests on apiary libraries it wouldn t manifest by default as a standalone credential does not uses self signed jwts by default | 1 |
320 | 2,766,189,670 | IssuesEvent | 2015-04-30 02:14:46 | Hyesu/capstone_carcar5talk | https://api.github.com/repos/Hyesu/capstone_carcar5talk | closed | The Second Order | Emergency Overall process | SD card(micro) for OS, file system : 5
Battery for providing power to RPi : 5
RC car for Demo : 5 | 1.0 | The Second Order - SD card(micro) for OS, file system : 5
Battery for providing power to RPi : 5
RC car for Demo : 5 | process | the second order sd card micro for os file system battery for providing power to rpi rc car for demo | 1 |
12,371 | 14,895,944,311 | IssuesEvent | 2021-01-21 09:46:06 | prisma/prisma-engines | https://api.github.com/repos/prisma/prisma-engines | closed | Migration Engine: Native Types | engines/migration engine process/candidate team/migrations tech/engines | Check the related [epic](https://github.com/prisma/prisma-engine/issues/30) for an initial breakdown of work packages.
Please use supply a more detailed task list in a comment. | 1.0 | Migration Engine: Native Types - Check the related [epic](https://github.com/prisma/prisma-engine/issues/30) for an initial breakdown of work packages.
Please use supply a more detailed task list in a comment. | process | migration engine native types check the related for an initial breakdown of work packages please use supply a more detailed task list in a comment | 1 |
22,399 | 31,142,289,636 | IssuesEvent | 2023-08-16 01:44:32 | cypress-io/cypress | https://api.github.com/repos/cypress-io/cypress | closed | Flaky test: expected undefined to equal 'bar' | OS: linux process: flaky test topic: flake ❄️ stage: flake stale | ### Link to dashboard or CircleCI failure
https://dashboard.cypress.io/projects/ypt4pf/runs/37593/test-results/b55b7269-1cf9-4fd1-9e12-3dde8964ece3
### Link to failing test in GitHub
https://github.com/cypress-io/cypress/blob/develop/packages/driver/cypress/e2e/commands/navigation.cy.js#L799
### Analysis
<img width="426" alt="Screen Shot 2022-08-08 at 2 45 18 PM" src="https://user-images.githubusercontent.com/26726429/183520119-55ce8113-215d-49b5-95ee-2e7b515b383c.png">
### Cypress Version
10.4.0
### Other
There's a potentially related GitHub issue, see comment in code. Search for this issue number in the codebase to find the test(s) skipped until this issue is fixed | 1.0 | Flaky test: expected undefined to equal 'bar' - ### Link to dashboard or CircleCI failure
https://dashboard.cypress.io/projects/ypt4pf/runs/37593/test-results/b55b7269-1cf9-4fd1-9e12-3dde8964ece3
### Link to failing test in GitHub
https://github.com/cypress-io/cypress/blob/develop/packages/driver/cypress/e2e/commands/navigation.cy.js#L799
### Analysis
<img width="426" alt="Screen Shot 2022-08-08 at 2 45 18 PM" src="https://user-images.githubusercontent.com/26726429/183520119-55ce8113-215d-49b5-95ee-2e7b515b383c.png">
### Cypress Version
10.4.0
### Other
There's a potentially related GitHub issue, see comment in code. Search for this issue number in the codebase to find the test(s) skipped until this issue is fixed | process | flaky test expected undefined to equal bar link to dashboard or circleci failure link to failing test in github analysis img width alt screen shot at pm src cypress version other there s a potentially related github issue see comment in code search for this issue number in the codebase to find the test s skipped until this issue is fixed | 1 |
11,243 | 13,222,994,104 | IssuesEvent | 2020-08-17 16:26:20 | gudmdharalds-a8c/testing123 | https://api.github.com/repos/gudmdharalds-a8c/testing123 | closed | PHP Upgrade: Compatibility issues found in bla-11.php | PHP 7.4 Compatibility PHP Compatibility | The following issues were found when scanning for PHP compatibility issues in preparation for upgrade to PHP version 7.4: <br>
* <b>Error</b>: Extension 'mysql_' is deprecated since PHP 5.5 and removed since PHP 7.0; Use mysqli instead https://github.com/gudmdharalds-a8c/testing123/blob/b99a028e21f490f459f7095329fe4933e8643e79/bla-11.php#L3
<br> Note that this is an automated report. We recommend that the issues noted here are looked into, as it will make the transition to the new PHP version easier.
| True | PHP Upgrade: Compatibility issues found in bla-11.php - The following issues were found when scanning for PHP compatibility issues in preparation for upgrade to PHP version 7.4: <br>
* <b>Error</b>: Extension 'mysql_' is deprecated since PHP 5.5 and removed since PHP 7.0; Use mysqli instead https://github.com/gudmdharalds-a8c/testing123/blob/b99a028e21f490f459f7095329fe4933e8643e79/bla-11.php#L3
<br> Note that this is an automated report. We recommend that the issues noted here are looked into, as it will make the transition to the new PHP version easier.
| non_process | php upgrade compatibility issues found in bla php the following issues were found when scanning for php compatibility issues in preparation for upgrade to php version error extension mysql is deprecated since php and removed since php use mysqli instead note that this is an automated report we recommend that the issues noted here are looked into as it will make the transition to the new php version easier | 0 |
3,223 | 2,750,749,251 | IssuesEvent | 2015-04-24 02:01:14 | stealjs/steal | https://api.github.com/repos/stealjs/steal | closed | link to release article | documentation | Include a link to http://blog.bitovi.com/introducing-stealjs/ somewhere on the homepage or the nav. That article has a lot more "why stealJS" information than just the videos alone.
Maybe just make "Why StealJS" a link to that article. | 1.0 | link to release article - Include a link to http://blog.bitovi.com/introducing-stealjs/ somewhere on the homepage or the nav. That article has a lot more "why stealJS" information than just the videos alone.
Maybe just make "Why StealJS" a link to that article. | non_process | link to release article include a link to somewhere on the homepage or the nav that article has a lot more why stealjs information than just the videos alone maybe just make why stealjs a link to that article | 0 |
34,826 | 12,301,441,070 | IssuesEvent | 2020-05-11 15:25:02 | JeffThorslund/Ottawa-River-Paddling-Guide | https://api.github.com/repos/JeffThorslund/Ottawa-River-Paddling-Guide | closed | Use Environment Variables for Sensitive Information | security risk | Email addresses and passwords should be replaced with environment variables. | True | Use Environment Variables for Sensitive Information - Email addresses and passwords should be replaced with environment variables. | non_process | use environment variables for sensitive information email addresses and passwords should be replaced with environment variables | 0 |
416 | 2,852,404,639 | IssuesEvent | 2015-06-01 13:30:01 | genomizer/genomizer-server | https://api.github.com/repos/genomizer/genomizer-server | closed | Restructure Raw to profile pipeline | Processing | Reading the clients wishes regarding a new pipeline, I've (sort of, if I've understood correctly) parsed these steps. Two new tools are needed, picard (java) and pyicos (python), both are at least downloaded and callable at the moment (although without proper parameters).
--Raw to Profile--
1. Upload .fastq raw data
2. Check .fastq scoring scheme, check_scoring-scheme.pl data.fastq
3. Align .fastq reads with bowtie2, using scheme from previous step and genome release (only one file needed, ratio calc. needs two).
4. Throw away the .fastq file.
5. Results in .sam
6. Sort .sam file with java -jar SortSam I=<input_file> O=<sorted_file> SO=coordinate
7. Produces .sam file
8. Remove duplicates with java -jar MarkDuplicates INPUT=<sorted_file> OUTPUT=<marked_file> METRICS_FILE=metrics.txt REMOVE_DUPLICATES=true
9. Now has metrics file and marked .sam file
10. Estimate strand cross-correlation with pyicos strcorr, pyicos strcorr <marked>.sam test.pk -f sam -F bed_pk
11. Generate .wig with extended reads, pyicos convert <marked>.sam <extended_marked>.wig -f sam -F bed_wig -O -x 303
12. Profile data in .wig format produced, processing done.
Smoothing, step and ration calculation is to be done separately. It should be a choice if .sam files are saved or not during the process. | 1.0 | Restructure Raw to profile pipeline - Reading the clients wishes regarding a new pipeline, I've (sort of, if I've understood correctly) parsed these steps. Two new tools are needed, picard (java) and pyicos (python), both are at least downloaded and callable at the moment (although without proper parameters).
--Raw to Profile--
1. Upload .fastq raw data
2. Check .fastq scoring scheme, check_scoring-scheme.pl data.fastq
3. Align .fastq reads with bowtie2, using scheme from previous step and genome release (only one file needed, ratio calc. needs two).
4. Throw away the .fastq file.
5. Results in .sam
6. Sort .sam file with java -jar SortSam I=<input_file> O=<sorted_file> SO=coordinate
7. Produces .sam file
8. Remove duplicates with java -jar MarkDuplicates INPUT=<sorted_file> OUTPUT=<marked_file> METRICS_FILE=metrics.txt REMOVE_DUPLICATES=true
9. Now has metrics file and marked .sam file
10. Estimate strand cross-correlation with pyicos strcorr, pyicos strcorr <marked>.sam test.pk -f sam -F bed_pk
11. Generate .wig with extended reads, pyicos convert <marked>.sam <extended_marked>.wig -f sam -F bed_wig -O -x 303
12. Profile data in .wig format produced, processing done.
Smoothing, step and ration calculation is to be done separately. It should be a choice if .sam files are saved or not during the process. | process | restructure raw to profile pipeline reading the clients wishes regarding a new pipeline i ve sort of if i ve understood correctly parsed these steps two new tools are needed picard java and pyicos python both are at least downloaded and callable at the moment although without proper parameters raw to profile upload fastq raw data check fastq scoring scheme check scoring scheme pl data fastq align fastq reads with using scheme from previous step and genome release only one file needed ratio calc needs two throw away the fastq file results in sam sort sam file with java jar sortsam i o so coordinate produces sam file remove duplicates with java jar markduplicates input output metrics file metrics txt remove duplicates true now has metrics file and marked sam file estimate strand cross correlation with pyicos strcorr pyicos strcorr sam test pk f sam f bed pk generate wig with extended reads pyicos convert sam wig f sam f bed wig o x profile data in wig format produced processing done smoothing step and ration calculation is to be done separately it should be a choice if sam files are saved or not during the process | 1 |
5,708 | 2,964,479,444 | IssuesEvent | 2015-07-10 16:52:32 | IQSS/dataverse | https://api.github.com/repos/IQSS/dataverse | closed | Documentation: Installer's manual has incorrect info on Weld and should probably be made more obvious given the seriousness of the issue. | Component: Documentation Priority: Critical Status: QA |
The installer's guide references v2.2.4 of WELD but to fix the issue I believe we need v2.2.10 as well as the lazy load setting.
http://guides.dataverse.org/en/latest/installation/prerequisites.html?highlight=weld | 1.0 | Documentation: Installer's manual has incorrect info on Weld and should probably be made more obvious given the seriousness of the issue. -
The installer's guide references v2.2.4 of WELD but to fix the issue I believe we need v2.2.10 as well as the lazy load setting.
http://guides.dataverse.org/en/latest/installation/prerequisites.html?highlight=weld | non_process | documentation installer s manual has incorrect info on weld and should probably be made more obvious given the seriousness of the issue the installer s guide references of weld but to fix the issue i believe we need as well as the lazy load setting | 0 |
4,715 | 3,881,386,713 | IssuesEvent | 2016-04-13 04:01:00 | lionheart/openradar-mirror | https://api.github.com/repos/lionheart/openradar-mirror | opened | 21149192: iTunes 12.1.2: Non-Retina album cover in album expansion view | classification:ui/usability reproducible:always status:open | #### Description
Summary:
When you single-click an album and it expands down to show all the tracks, the album cover on the right side is not retina-quality, even when the album artwork is available at that quality.
Steps to Reproduce:
1. Download a newer album from iTunes that has good artwork, such as the soundtrack to Gladiator.
2. Click to expand it.
3. Notice the quality of the album cover on the right side is not retina.
4. Right-click on the album and load the Get Info window.
5. Notice the artwork shown there IS retina quality.
Notes:
The attached screenshot shows the Get Info window on the left, and the right-side album artwork that has sub-par quality.
-
Product Version: iTunes 12.1.2 (12.1.2.27)
Created: 2015-05-28T22:24:58.993630
Originated: 2015-05-28T15:24:00
Open Radar Link: http://www.openradar.me/21149192 | True | 21149192: iTunes 12.1.2: Non-Retina album cover in album expansion view - #### Description
Summary:
When you single-click an album and it expands down to show all the tracks, the album cover on the right side is not retina-quality, even when the album artwork is available at that quality.
Steps to Reproduce:
1. Download a newer album from iTunes that has good artwork, such as the soundtrack to Gladiator.
2. Click to expand it.
3. Notice the quality of the album cover on the right side is not retina.
4. Right-click on the album and load the Get Info window.
5. Notice the artwork shown there IS retina quality.
Notes:
The attached screenshot shows the Get Info window on the left, and the right-side album artwork that has sub-par quality.
-
Product Version: iTunes 12.1.2 (12.1.2.27)
Created: 2015-05-28T22:24:58.993630
Originated: 2015-05-28T15:24:00
Open Radar Link: http://www.openradar.me/21149192 | non_process | itunes non retina album cover in album expansion view description summary when you single click an album and it expands down to show all the tracks the album cover on the right side is not retina quality even when the album artwork is available at that quality steps to reproduce download a newer album from itunes that has good artwork such as the soundtrack to gladiator click to expand it notice the quality of the album cover on the right side is not retina right click on the album and load the get info window notice the artwork shown there is retina quality notes the attached screenshot shows the get info window on the left and the right side album artwork that has sub par quality product version itunes created originated open radar link | 0 |